Nov 23 01:41:05 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Nov 23 01:41:05 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 23 01:41:05 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Nov 23 01:41:05 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 23 01:41:05 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 23 01:41:05 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 23 01:41:05 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 23 01:41:05 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 23 01:41:05 localhost kernel: signal: max sigframe size: 1776
Nov 23 01:41:05 localhost kernel: BIOS-provided physical RAM map:
Nov 23 01:41:05 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 23 01:41:05 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 23 01:41:05 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 23 01:41:05 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 23 01:41:05 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 23 01:41:05 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 23 01:41:05 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 23 01:41:05 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Nov 23 01:41:05 localhost kernel: NX (Execute Disable) protection: active
Nov 23 01:41:05 localhost kernel: SMBIOS 2.8 present.
Nov 23 01:41:05 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 23 01:41:05 localhost kernel: Hypervisor detected: KVM
Nov 23 01:41:05 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 23 01:41:05 localhost kernel: kvm-clock: using sched offset of 3161370078 cycles
Nov 23 01:41:05 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 23 01:41:05 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 23 01:41:05 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Nov 23 01:41:05 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 23 01:41:05 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 23 01:41:05 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 23 01:41:05 localhost kernel: Using GB pages for direct mapping
Nov 23 01:41:05 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Nov 23 01:41:05 localhost kernel: ACPI: Early table checksum verification disabled
Nov 23 01:41:05 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 23 01:41:05 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 01:41:05 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 01:41:05 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 01:41:05 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 23 01:41:05 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 01:41:05 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 01:41:05 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 23 01:41:05 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 23 01:41:05 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 23 01:41:05 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 23 01:41:05 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 23 01:41:05 localhost kernel: No NUMA configuration found
Nov 23 01:41:05 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Nov 23 01:41:05 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Nov 23 01:41:05 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Nov 23 01:41:05 localhost kernel: Zone ranges:
Nov 23 01:41:05 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 23 01:41:05 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 23 01:41:05 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Nov 23 01:41:05 localhost kernel: Device empty
Nov 23 01:41:05 localhost kernel: Movable zone start for each node
Nov 23 01:41:05 localhost kernel: Early memory node ranges
Nov 23 01:41:05 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 23 01:41:05 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 23 01:41:05 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Nov 23 01:41:05 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Nov 23 01:41:05 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 23 01:41:05 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 23 01:41:05 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 23 01:41:05 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 23 01:41:05 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 23 01:41:05 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 23 01:41:05 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 23 01:41:05 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 23 01:41:05 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 23 01:41:05 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 23 01:41:05 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 23 01:41:05 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 23 01:41:05 localhost kernel: TSC deadline timer available
Nov 23 01:41:05 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Nov 23 01:41:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 23 01:41:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 23 01:41:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 23 01:41:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 23 01:41:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 23 01:41:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 23 01:41:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 23 01:41:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 23 01:41:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 23 01:41:05 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 23 01:41:05 localhost kernel: Booting paravirtualized kernel on KVM
Nov 23 01:41:05 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 23 01:41:05 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 23 01:41:05 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Nov 23 01:41:05 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 23 01:41:05 localhost kernel: Fallback order for Node 0: 0
Nov 23 01:41:05 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Nov 23 01:41:05 localhost kernel: Policy zone: Normal
Nov 23 01:41:05 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Nov 23 01:41:05 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Nov 23 01:41:05 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Nov 23 01:41:05 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 23 01:41:05 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 01:41:05 localhost kernel: software IO TLB: area num 8.
Nov 23 01:41:05 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Nov 23 01:41:05 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Nov 23 01:41:05 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 23 01:41:05 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Nov 23 01:41:05 localhost kernel: ftrace: allocated 176 pages with 3 groups
Nov 23 01:41:05 localhost kernel: Dynamic Preempt: voluntary
Nov 23 01:41:05 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 01:41:05 localhost kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 23 01:41:05 localhost kernel: #011Trampoline variant of Tasks RCU enabled.
Nov 23 01:41:05 localhost kernel: #011Rude variant of Tasks RCU enabled.
Nov 23 01:41:05 localhost kernel: #011Tracing variant of Tasks RCU enabled.
Nov 23 01:41:05 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 01:41:05 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 23 01:41:05 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 23 01:41:05 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 01:41:05 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 23 01:41:05 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Nov 23 01:41:05 localhost kernel: Console: colour VGA+ 80x25
Nov 23 01:41:05 localhost kernel: printk: console [tty0] enabled
Nov 23 01:41:05 localhost kernel: printk: console [ttyS0] enabled
Nov 23 01:41:05 localhost kernel: ACPI: Core revision 20211217
Nov 23 01:41:05 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 23 01:41:05 localhost kernel: x2apic enabled
Nov 23 01:41:05 localhost kernel: Switched APIC routing to physical x2apic.
Nov 23 01:41:05 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 23 01:41:05 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 23 01:41:05 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 23 01:41:05 localhost kernel: LSM: Security Framework initializing
Nov 23 01:41:05 localhost kernel: Yama: becoming mindful.
Nov 23 01:41:05 localhost kernel: SELinux: Initializing.
Nov 23 01:41:05 localhost kernel: LSM support for eBPF active
Nov 23 01:41:05 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 01:41:05 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 01:41:05 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 23 01:41:05 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 23 01:41:05 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 23 01:41:05 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 23 01:41:05 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 23 01:41:05 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 23 01:41:05 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 23 01:41:05 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 23 01:41:05 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 23 01:41:05 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 23 01:41:05 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 23 01:41:05 localhost kernel: Freeing SMP alternatives memory: 36K
Nov 23 01:41:05 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 23 01:41:05 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Nov 23 01:41:05 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Nov 23 01:41:05 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Nov 23 01:41:05 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Nov 23 01:41:05 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 23 01:41:05 localhost kernel: ... version: 0
Nov 23 01:41:05 localhost kernel: ... bit width: 48
Nov 23 01:41:05 localhost kernel: ... generic registers: 6
Nov 23 01:41:05 localhost kernel: ... value mask: 0000ffffffffffff
Nov 23 01:41:05 localhost kernel: ... max period: 00007fffffffffff
Nov 23 01:41:05 localhost kernel: ... fixed-purpose events: 0
Nov 23 01:41:05 localhost kernel: ... event mask: 000000000000003f
Nov 23 01:41:05 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 23 01:41:05 localhost kernel: rcu: #011Max phase no-delay instances is 400.
Nov 23 01:41:05 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 23 01:41:05 localhost kernel: x86: Booting SMP configuration:
Nov 23 01:41:05 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Nov 23 01:41:05 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 23 01:41:05 localhost kernel: smpboot: Max logical packages: 8
Nov 23 01:41:05 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 23 01:41:05 localhost kernel: node 0 deferred pages initialised in 24ms
Nov 23 01:41:05 localhost kernel: devtmpfs: initialized
Nov 23 01:41:05 localhost kernel: x86/mm: Memory block size: 128MB
Nov 23 01:41:05 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 01:41:05 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 23 01:41:05 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 01:41:05 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 01:41:05 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Nov 23 01:41:05 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 01:41:05 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 01:41:05 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 23 01:41:05 localhost kernel: audit: type=2000 audit(1763880063.151:1): state=initialized audit_enabled=0 res=1
Nov 23 01:41:05 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 23 01:41:05 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 01:41:05 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 23 01:41:05 localhost kernel: cpuidle: using governor menu
Nov 23 01:41:05 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Nov 23 01:41:05 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 01:41:05 localhost kernel: PCI: Using configuration type 1 for base access
Nov 23 01:41:05 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 23 01:41:05 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 23 01:41:05 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Nov 23 01:41:05 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 01:41:05 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 01:41:05 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 23 01:41:05 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 23 01:41:05 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 23 01:41:05 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 23 01:41:05 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 01:41:05 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 23 01:41:05 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 23 01:41:05 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 23 01:41:05 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 01:41:05 localhost kernel: ACPI: Interpreter enabled
Nov 23 01:41:05 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 23 01:41:05 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 23 01:41:05 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 23 01:41:05 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 23 01:41:05 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 23 01:41:05 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 23 01:41:05 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [3] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [4] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [5] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [6] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [7] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [8] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [9] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [10] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [11] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [12] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [13] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [14] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [15] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [16] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [17] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [18] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [19] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [20] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [21] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [22] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [23] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [24] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [25] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [26] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [27] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [28] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [29] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [30] registered
Nov 23 01:41:05 localhost kernel: acpiphp: Slot [31] registered
Nov 23 01:41:05 localhost kernel: PCI host bridge to bus 0000:00
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 23 01:41:05 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 23 01:41:05 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 23 01:41:05 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 23 01:41:05 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 23 01:41:05 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Nov 23 01:41:05 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Nov 23 01:41:05 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 23 01:41:05 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 23 01:41:05 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Nov 23 01:41:05 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Nov 23 01:41:05 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 23 01:41:05 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Nov 23 01:41:05 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Nov 23 01:41:05 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Nov 23 01:41:05 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Nov 23 01:41:05 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 23 01:41:05 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Nov 23 01:41:05 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Nov 23 01:41:05 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 23 01:41:05 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Nov 23 01:41:05 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Nov 23 01:41:05 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 23 01:41:05 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 23 01:41:05 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 23 01:41:05 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 23 01:41:05 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 23 01:41:05 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 23 01:41:05 localhost kernel: iommu: Default domain type: Translated
Nov 23 01:41:05 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 23 01:41:05 localhost kernel: SCSI subsystem initialized
Nov 23 01:41:05 localhost kernel: ACPI: bus type USB registered
Nov 23 01:41:05 localhost kernel: usbcore: registered new interface driver usbfs
Nov 23 01:41:05 localhost kernel: usbcore: registered new interface driver hub
Nov 23 01:41:05 localhost kernel: usbcore: registered new device driver usb
Nov 23 01:41:05 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 23 01:41:05 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 23 01:41:05 localhost kernel: PTP clock support registered
Nov 23 01:41:05 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 23 01:41:05 localhost kernel: NetLabel: Initializing
Nov 23 01:41:05 localhost kernel: NetLabel: domain hash size = 128
Nov 23 01:41:05 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Nov 23 01:41:05 localhost kernel: NetLabel: unlabeled traffic allowed by default
Nov 23 01:41:05 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 23 01:41:05 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 23 01:41:05 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 23 01:41:05 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 23 01:41:05 localhost kernel: vgaarb: loaded
Nov 23 01:41:05 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 23 01:41:05 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 01:41:05 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 01:41:05 localhost kernel: pnp: PnP ACPI init
Nov 23 01:41:05 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 23 01:41:05 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 23 01:41:05 localhost kernel: NET: Registered PF_INET protocol family
Nov 23 01:41:05 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 23 01:41:05 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Nov 23 01:41:05 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 01:41:05 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 23 01:41:05 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 23 01:41:05 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Nov 23 01:41:05 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Nov 23 01:41:05 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Nov 23 01:41:05 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Nov 23 01:41:05 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 01:41:05 localhost kernel: NET: Registered PF_XDP protocol family
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 23 01:41:05 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 23 01:41:05 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 23 01:41:05 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 23 01:41:05 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 30159 usecs
Nov 23 01:41:05 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 23 01:41:05 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 23 01:41:05 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 23 01:41:05 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 23 01:41:05 localhost kernel: ACPI: bus type thunderbolt registered
Nov 23 01:41:05 localhost kernel: Initialise system trusted keyrings
Nov 23 01:41:05 localhost kernel: Key type blacklist registered
Nov 23 01:41:05 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Nov 23 01:41:05 localhost kernel: zbud: loaded
Nov 23 01:41:05 localhost kernel: integrity: Platform Keyring initialized
Nov 23 01:41:05 localhost kernel: NET: Registered PF_ALG protocol family
Nov 23 01:41:05 localhost kernel: xor: automatically using best checksumming function avx
Nov 23 01:41:05 localhost kernel: Key type asymmetric registered
Nov 23 01:41:05 localhost kernel: Asymmetric key parser 'x509' registered
Nov 23 01:41:05 localhost kernel: Running certificate verification selftests
Nov 23 01:41:05 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 23 01:41:05 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 23 01:41:05 localhost kernel: io scheduler mq-deadline registered
Nov 23 01:41:05 localhost kernel: io scheduler kyber registered
Nov 23 01:41:05 localhost kernel: io scheduler bfq registered
Nov 23 01:41:05 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 23 01:41:05 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 23 01:41:05 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 23 01:41:05 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 23 01:41:05 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 23 01:41:05 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 23 01:41:05 localhost kernel: Freeing initrd memory: 74232K
Nov 23 01:41:05 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 23 01:41:05 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 01:41:05 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 23 01:41:05 localhost kernel: Non-volatile memory driver v1.3
Nov 23 01:41:05 localhost kernel: rdac: device handler registered
Nov 23 01:41:05 localhost kernel: hp_sw: device handler registered
Nov 23 01:41:05 localhost kernel: emc: device handler registered
Nov 23 01:41:05 localhost kernel: alua: device handler registered
Nov 23 01:41:05 localhost kernel: libphy: Fixed MDIO Bus: probed
Nov 23 01:41:05 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 23 01:41:05 localhost kernel: ehci-pci: EHCI PCI platform driver
Nov 23 01:41:05 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Nov 23 01:41:05 localhost kernel: ohci-pci: OHCI PCI platform driver
Nov 23 01:41:05 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Nov 23 01:41:05 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 23 01:41:05 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 23 01:41:05 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 23 01:41:05 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 23 01:41:05 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 23 01:41:05 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 23 01:41:05 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 23 01:41:05 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Nov 23 01:41:05 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 23 01:41:05 localhost kernel: hub 1-0:1.0: USB hub found
Nov 23 01:41:05 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 23 01:41:05 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 23 01:41:05 localhost kernel: usbserial: USB Serial support registered for generic
Nov 23 01:41:05 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 23 01:41:05 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 23 01:41:05 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 23 01:41:05 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 23 01:41:05 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 23 01:41:05 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 23 01:41:05 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 23 01:41:05 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-23T06:41:04 UTC (1763880064)
Nov 23 01:41:05 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 23 01:41:05 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 23 01:41:05 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 01:41:05 localhost kernel: usbcore: registered new interface driver usbhid
Nov 23 01:41:05 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 23 01:41:05 localhost kernel: usbhid: USB HID core driver
Nov 23 01:41:05 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 23 01:41:05 localhost kernel: Initializing XFRM netlink socket
Nov 23 01:41:05 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 23 01:41:05 localhost kernel: Segment Routing with IPv6
Nov 23 01:41:05 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 23 01:41:05 localhost kernel: mpls_gso: MPLS GSO support
Nov 23 01:41:05 localhost kernel: IPI shorthand broadcast: enabled
Nov 23 01:41:05 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 23 01:41:05 localhost kernel: AES CTR mode by8 optimization enabled
Nov 23 01:41:05 localhost kernel: sched_clock: Marking stable (1052718862, 175203212)->(1356306544, -128384470)
Nov 23 01:41:05 localhost kernel: registered taskstats version 1
Nov 23 01:41:05 localhost kernel: Loading compiled-in X.509 certificates
Nov 23 01:41:05 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Nov 23 01:41:05 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 23 01:41:05 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 23 01:41:05 localhost kernel: zswap: loaded using pool lzo/zbud
Nov 23 01:41:05 localhost kernel: page_owner is disabled
Nov 23 01:41:05 localhost kernel: Key type big_key registered
Nov 23 01:41:05 localhost kernel: Key type encrypted registered
Nov 23 01:41:05 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 01:41:05 localhost kernel: Loading compiled-in module X.509 certificates
Nov 23 01:41:05 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Nov 23 01:41:05 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 23 01:41:05 localhost kernel: ima: No architecture policies found
Nov 23 01:41:05 localhost kernel: evm: Initialising EVM extended attributes:
Nov 23 01:41:05 localhost kernel: evm: security.selinux
Nov 23 01:41:05 localhost kernel: evm: security.SMACK64 (disabled)
Nov 23 01:41:05 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 23 01:41:05 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 23 01:41:05 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 23 01:41:05 localhost kernel: evm: security.apparmor (disabled)
Nov 23 01:41:05 localhost kernel: evm: security.ima
Nov 23 01:41:05 localhost kernel: evm: security.capability
Nov 23 01:41:05 localhost kernel: evm: HMAC attrs: 0x1
Nov 23 01:41:05 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 23 01:41:05 localhost kernel: Freeing unused decrypted memory: 2036K
Nov 23 01:41:05 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Nov 23 01:41:05 localhost kernel: Write protecting the kernel read-only data: 26624k
Nov 23 01:41:05 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 23 01:41:05 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Nov 23 01:41:05 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 23 01:41:05 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 23 01:41:05 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 23 01:41:05 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 23 01:41:05 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 23 01:41:05 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 23 01:41:05 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 23 01:41:05 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 23 01:41:05 localhost kernel: Run /init as init process
Nov 23 01:41:05 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 01:41:05 localhost systemd[1]: Detected virtualization kvm.
Nov 23 01:41:05 localhost systemd[1]: Detected architecture x86-64.
Nov 23 01:41:05 localhost systemd[1]: Running in initrd.
Nov 23 01:41:05 localhost systemd[1]: No hostname configured, using default hostname.
Nov 23 01:41:05 localhost systemd[1]: Hostname set to .
Nov 23 01:41:05 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 23 01:41:05 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 23 01:41:05 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 23 01:41:05 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 23 01:41:05 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 23 01:41:05 localhost systemd[1]: Reached target Local File Systems.
Nov 23 01:41:05 localhost systemd[1]: Reached target Path Units.
Nov 23 01:41:05 localhost systemd[1]: Reached target Slice Units.
Nov 23 01:41:05 localhost systemd[1]: Reached target Swaps.
Nov 23 01:41:05 localhost systemd[1]: Reached target Timer Units.
Nov 23 01:41:05 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 23 01:41:05 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 23 01:41:05 localhost systemd[1]: Listening on Journal Socket.
Nov 23 01:41:05 localhost systemd[1]: Listening on udev Control Socket.
Nov 23 01:41:05 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 23 01:41:05 localhost systemd[1]: Reached target Socket Units.
Nov 23 01:41:05 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 23 01:41:05 localhost systemd[1]: Starting Journal Service...
Nov 23 01:41:05 localhost systemd[1]: Starting Load Kernel Modules...
Nov 23 01:41:05 localhost systemd[1]: Starting Create System Users...
Nov 23 01:41:05 localhost systemd[1]: Starting Setup Virtual Console...
Nov 23 01:41:05 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 23 01:41:05 localhost systemd[1]: Finished Load Kernel Modules.
Nov 23 01:41:05 localhost systemd-journald[283]: Journal started
Nov 23 01:41:05 localhost systemd-journald[283]: Runtime Journal (/run/log/journal/df69e9edec8d43d987108ff360287019) is 8.0M, max 314.7M, 306.7M free.
Nov 23 01:41:05 localhost systemd-modules-load[284]: Module 'msr' is built in
Nov 23 01:41:05 localhost systemd[1]: Started Journal Service.
Nov 23 01:41:05 localhost systemd[1]: Finished Setup Virtual Console.
Nov 23 01:41:05 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 23 01:41:05 localhost systemd[1]: Starting dracut cmdline hook...
Nov 23 01:41:05 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 23 01:41:05 localhost systemd-sysusers[285]: Creating group 'sgx' with GID 997.
Nov 23 01:41:05 localhost systemd-sysusers[285]: Creating group 'users' with GID 100.
Nov 23 01:41:05 localhost systemd-sysusers[285]: Creating group 'dbus' with GID 81.
Nov 23 01:41:05 localhost systemd-sysusers[285]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 23 01:41:05 localhost systemd[1]: Finished Create System Users.
Nov 23 01:41:05 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 23 01:41:05 localhost dracut-cmdline[288]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Nov 23 01:41:05 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 23 01:41:05 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 23 01:41:05 localhost dracut-cmdline[288]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Nov 23 01:41:05 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 23 01:41:05 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 23 01:41:05 localhost systemd[1]: Finished dracut cmdline hook.
Nov 23 01:41:05 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 23 01:41:05 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 23 01:41:05 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 23 01:41:05 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Nov 23 01:41:05 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 23 01:41:05 localhost kernel: RPC: Registered udp transport module.
Nov 23 01:41:05 localhost kernel: RPC: Registered tcp transport module.
Nov 23 01:41:05 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 23 01:41:05 localhost rpc.statd[408]: Version 2.5.4 starting
Nov 23 01:41:05 localhost rpc.statd[408]: Initializing NSM state
Nov 23 01:41:05 localhost rpc.idmapd[413]: Setting log level to 0
Nov 23 01:41:05 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 23 01:41:05 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 01:41:05 localhost systemd-udevd[426]: Using default interface naming scheme 'rhel-9.0'.
Nov 23 01:41:05 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 01:41:05 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 23 01:41:05 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 23 01:41:05 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 23 01:41:05 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 23 01:41:05 localhost systemd[1]: Reached target System Initialization.
Nov 23 01:41:05 localhost systemd[1]: Reached target Basic System.
Nov 23 01:41:05 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 23 01:41:05 localhost systemd[1]: Reached target Network.
Nov 23 01:41:05 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 23 01:41:05 localhost systemd[1]: Starting dracut initqueue hook...
Nov 23 01:41:05 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Nov 23 01:41:05 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 23 01:41:05 localhost kernel: GPT:20971519 != 838860799
Nov 23 01:41:05 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 23 01:41:05 localhost kernel: GPT:20971519 != 838860799
Nov 23 01:41:05 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 23 01:41:05 localhost kernel: vda: vda1 vda2 vda3 vda4
Nov 23 01:41:05 localhost systemd-udevd[458]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 01:41:05 localhost kernel: scsi host0: ata_piix
Nov 23 01:41:05 localhost kernel: scsi host1: ata_piix
Nov 23 01:41:05 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Nov 23 01:41:05 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Nov 23 01:41:05 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Nov 23 01:41:05 localhost systemd[1]: Reached target Initrd Root Device.
Nov 23 01:41:05 localhost kernel: ata1: found unknown device (class 0)
Nov 23 01:41:05 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 23 01:41:05 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 23 01:41:05 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 23 01:41:05 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 23 01:41:05 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 23 01:41:06 localhost systemd[1]: Finished dracut initqueue hook.
Nov 23 01:41:06 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 23 01:41:06 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 23 01:41:06 localhost systemd[1]: Reached target Remote File Systems.
Nov 23 01:41:06 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 23 01:41:06 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 23 01:41:06 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Nov 23 01:41:06 localhost systemd-fsck[511]: /usr/sbin/fsck.xfs: XFS file system.
Nov 23 01:41:06 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Nov 23 01:41:06 localhost systemd[1]: Mounting /sysroot...
Nov 23 01:41:06 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 23 01:41:06 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Nov 23 01:41:06 localhost kernel: XFS (vda4): Ending clean mount
Nov 23 01:41:06 localhost systemd[1]: Mounted /sysroot.
Nov 23 01:41:06 localhost systemd[1]: Reached target Initrd Root File System.
Nov 23 01:41:06 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 23 01:41:06 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 23 01:41:06 localhost systemd[1]: Reached target Initrd File Systems.
Nov 23 01:41:06 localhost systemd[1]: Reached target Initrd Default Target.
Nov 23 01:41:06 localhost systemd[1]: Starting dracut mount hook...
Nov 23 01:41:06 localhost systemd[1]: Finished dracut mount hook.
Nov 23 01:41:06 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 23 01:41:06 localhost rpc.idmapd[413]: exiting on signal 15
Nov 23 01:41:06 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 23 01:41:06 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 23 01:41:06 localhost systemd[1]: Stopped target Network.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Timer Units.
Nov 23 01:41:06 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 23 01:41:06 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Basic System.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Path Units.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Remote File Systems.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Slice Units.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Socket Units.
Nov 23 01:41:06 localhost systemd[1]: Stopped target System Initialization.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Local File Systems.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Swaps.
Nov 23 01:41:06 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped dracut mount hook.
Nov 23 01:41:06 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 23 01:41:06 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 23 01:41:06 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 23 01:41:06 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 23 01:41:06 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 23 01:41:06 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped Load Kernel Modules.
Nov 23 01:41:06 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 23 01:41:06 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 23 01:41:06 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 23 01:41:06 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 23 01:41:06 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 23 01:41:06 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 23 01:41:06 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 23 01:41:06 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Closed udev Control Socket.
Nov 23 01:41:06 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Closed udev Kernel Socket.
Nov 23 01:41:06 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 23 01:41:06 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 23 01:41:06 localhost systemd[1]: Starting Cleanup udev Database...
Nov 23 01:41:06 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 23 01:41:06 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 23 01:41:06 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Stopped Create System Users.
Nov 23 01:41:06 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 01:41:06 localhost systemd[1]: Finished Cleanup udev Database.
Nov 23 01:41:06 localhost systemd[1]: Reached target Switch Root.
Nov 23 01:41:06 localhost systemd[1]: Starting Switch Root...
Nov 23 01:41:06 localhost systemd[1]: Switching root.
Nov 23 01:41:06 localhost systemd-journald[283]: Journal stopped
Nov 23 01:41:07 localhost systemd-journald[283]: Received SIGTERM from PID 1 (systemd).
Nov 23 01:41:07 localhost kernel: audit: type=1404 audit(1763880066.882:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 23 01:41:07 localhost kernel: SELinux: policy capability network_peer_controls=1
Nov 23 01:41:07 localhost kernel: SELinux: policy capability open_perms=1
Nov 23 01:41:07 localhost kernel: SELinux: policy capability extended_socket_class=1
Nov 23 01:41:07 localhost kernel: SELinux: policy capability always_check_network=0
Nov 23 01:41:07 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Nov 23 01:41:07 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 23 01:41:07 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Nov 23 01:41:07 localhost kernel: audit: type=1403 audit(1763880067.016:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 01:41:07 localhost systemd[1]: Successfully loaded SELinux policy in 139.111ms.
Nov 23 01:41:07 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.851ms.
Nov 23 01:41:07 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 01:41:07 localhost systemd[1]: Detected virtualization kvm.
Nov 23 01:41:07 localhost systemd[1]: Detected architecture x86-64.
Nov 23 01:41:07 localhost systemd-rc-local-generator[581]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 01:41:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 01:41:07 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 23 01:41:07 localhost systemd[1]: Stopped Switch Root.
Nov 23 01:41:07 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 01:41:07 localhost systemd[1]: Created slice Slice /system/getty.
Nov 23 01:41:07 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 23 01:41:07 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 23 01:41:07 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 23 01:41:07 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Nov 23 01:41:07 localhost systemd[1]: Created slice User and Session Slice.
Nov 23 01:41:07 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 23 01:41:07 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 23 01:41:07 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 23 01:41:07 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 23 01:41:07 localhost systemd[1]: Stopped target Switch Root.
Nov 23 01:41:07 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 23 01:41:07 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 23 01:41:07 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 23 01:41:07 localhost systemd[1]: Reached target Path Units.
Nov 23 01:41:07 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 23 01:41:07 localhost systemd[1]: Reached target Slice Units.
Nov 23 01:41:07 localhost systemd[1]: Reached target Swaps.
Nov 23 01:41:07 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 23 01:41:07 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 23 01:41:07 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 23 01:41:07 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 23 01:41:07 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 23 01:41:07 localhost systemd[1]: Listening on udev Control Socket.
Nov 23 01:41:07 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 23 01:41:07 localhost systemd[1]: Mounting Huge Pages File System...
Nov 23 01:41:07 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 23 01:41:07 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 23 01:41:07 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 23 01:41:07 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 23 01:41:07 localhost systemd[1]: Starting Create List of Static Device Nodes... Nov 23 01:41:07 localhost systemd[1]: Starting Load Kernel Module configfs... Nov 23 01:41:07 localhost systemd[1]: Starting Load Kernel Module drm... Nov 23 01:41:07 localhost systemd[1]: Starting Load Kernel Module fuse... Nov 23 01:41:07 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network... Nov 23 01:41:07 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 01:41:07 localhost systemd[1]: Stopped File System Check on Root Device. Nov 23 01:41:07 localhost systemd[1]: Stopped Journal Service. Nov 23 01:41:07 localhost systemd[1]: Starting Journal Service... Nov 23 01:41:07 localhost systemd[1]: Starting Load Kernel Modules... Nov 23 01:41:07 localhost systemd[1]: Starting Generate network units from Kernel command line... Nov 23 01:41:07 localhost kernel: fuse: init (API version 7.36) Nov 23 01:41:07 localhost systemd[1]: Starting Remount Root and Kernel File Systems... Nov 23 01:41:07 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 01:41:07 localhost systemd[1]: Starting Coldplug All udev Devices... Nov 23 01:41:07 localhost systemd-journald[617]: Journal started Nov 23 01:41:07 localhost systemd-journald[617]: Runtime Journal (/run/log/journal/6e0090cd4cf296f54418e234b90f721c) is 8.0M, max 314.7M, 306.7M free. Nov 23 01:41:07 localhost systemd[1]: Queued start job for default target Multi-User System. Nov 23 01:41:07 localhost systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 01:41:07 localhost systemd-modules-load[618]: Module 'msr' is built in Nov 23 01:41:07 localhost kernel: ACPI: bus type drm_connector registered Nov 23 01:41:07 localhost systemd[1]: Started Journal Service. Nov 23 01:41:07 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff) Nov 23 01:41:07 localhost systemd[1]: Mounted Huge Pages File System. Nov 23 01:41:07 localhost systemd[1]: Mounted POSIX Message Queue File System. Nov 23 01:41:07 localhost systemd[1]: Mounted Kernel Debug File System. Nov 23 01:41:07 localhost systemd[1]: Mounted Kernel Trace File System. Nov 23 01:41:07 localhost systemd[1]: Finished Create List of Static Device Nodes. Nov 23 01:41:07 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 01:41:07 localhost systemd[1]: Finished Load Kernel Module configfs. Nov 23 01:41:07 localhost systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 01:41:07 localhost systemd[1]: Finished Load Kernel Module drm. Nov 23 01:41:07 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 01:41:07 localhost systemd[1]: Finished Load Kernel Module fuse. Nov 23 01:41:07 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network. Nov 23 01:41:07 localhost systemd[1]: Finished Load Kernel Modules. Nov 23 01:41:07 localhost systemd[1]: Finished Generate network units from Kernel command line. Nov 23 01:41:07 localhost systemd[1]: Finished Remount Root and Kernel File Systems. Nov 23 01:41:07 localhost systemd[1]: Mounting FUSE Control File System... Nov 23 01:41:07 localhost systemd[1]: Mounting Kernel Configuration File System... Nov 23 01:41:07 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes). Nov 23 01:41:07 localhost systemd[1]: Starting Rebuild Hardware Database... 
Nov 23 01:41:07 localhost systemd[1]: Starting Flush Journal to Persistent Storage... Nov 23 01:41:07 localhost systemd[1]: Starting Load/Save Random Seed... Nov 23 01:41:07 localhost systemd[1]: Starting Apply Kernel Variables... Nov 23 01:41:07 localhost systemd[1]: Starting Create System Users... Nov 23 01:41:07 localhost systemd-journald[617]: Runtime Journal (/run/log/journal/6e0090cd4cf296f54418e234b90f721c) is 8.0M, max 314.7M, 306.7M free. Nov 23 01:41:07 localhost systemd-journald[617]: Received client request to flush runtime journal. Nov 23 01:41:07 localhost systemd[1]: Mounted FUSE Control File System. Nov 23 01:41:07 localhost systemd[1]: Mounted Kernel Configuration File System. Nov 23 01:41:07 localhost systemd[1]: Finished Flush Journal to Persistent Storage. Nov 23 01:41:07 localhost systemd[1]: Finished Apply Kernel Variables. Nov 23 01:41:07 localhost systemd[1]: Finished Load/Save Random Seed. Nov 23 01:41:07 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes). Nov 23 01:41:07 localhost systemd-sysusers[631]: Creating group 'sgx' with GID 989. Nov 23 01:41:07 localhost systemd-sysusers[631]: Creating group 'systemd-oom' with GID 988. Nov 23 01:41:07 localhost systemd-sysusers[631]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988. Nov 23 01:41:07 localhost systemd[1]: Finished Coldplug All udev Devices. Nov 23 01:41:07 localhost systemd[1]: Finished Create System Users. Nov 23 01:41:07 localhost systemd[1]: Starting Create Static Device Nodes in /dev... Nov 23 01:41:07 localhost systemd[1]: Finished Create Static Device Nodes in /dev. Nov 23 01:41:07 localhost systemd[1]: Reached target Preparation for Local File Systems. Nov 23 01:41:07 localhost systemd[1]: Set up automount EFI System Partition Automount. Nov 23 01:41:08 localhost systemd[1]: Finished Rebuild Hardware Database. Nov 23 01:41:08 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files... Nov 23 01:41:08 localhost systemd-udevd[635]: Using default interface naming scheme 'rhel-9.0'. Nov 23 01:41:08 localhost systemd[1]: Started Rule-based Manager for Device Events and Files. Nov 23 01:41:08 localhost systemd[1]: Starting Load Kernel Module configfs... Nov 23 01:41:08 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped. Nov 23 01:41:08 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 01:41:08 localhost systemd[1]: Finished Load Kernel Module configfs. Nov 23 01:41:08 localhost systemd-udevd[639]: Network interface NamePolicy= disabled on kernel command line. Nov 23 01:41:08 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped. Nov 23 01:41:08 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7... Nov 23 01:41:08 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped. Nov 23 01:41:08 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6 Nov 23 01:41:08 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 23 01:41:08 localhost systemd-fsck[679]: fsck.fat 4.2 (2021-01-31) Nov 23 01:41:08 localhost systemd-fsck[679]: /dev/vda2: 12 files, 1782/51145 clusters Nov 23 01:41:08 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7. 
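The by-uuid path checked above resolves to the vfat EFI partition on /dev/vda2, as the fsck.fat output indicates. Assuming the usual util-linux tools are present, the mapping can be listed with:

    # filesystems, UUIDs and mountpoints on the virtio disk
    lsblk -f /dev/vda
    # resolve a single UUID to its device node
    blkid -U 7B77-95E7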
Nov 23 01:41:08 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 23 01:41:08 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 23 01:41:08 localhost kernel: Console: switching to colour dummy device 80x25 Nov 23 01:41:08 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 23 01:41:08 localhost kernel: [drm] features: -context_init Nov 23 01:41:08 localhost kernel: [drm] number of scanouts: 1 Nov 23 01:41:08 localhost kernel: [drm] number of cap sets: 0 Nov 23 01:41:08 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0 Nov 23 01:41:08 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called Nov 23 01:41:08 localhost kernel: Console: switching to colour frame buffer device 128x48 Nov 23 01:41:08 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 23 01:41:08 localhost kernel: SVM: TSC scaling supported Nov 23 01:41:08 localhost kernel: kvm: Nested Virtualization enabled Nov 23 01:41:08 localhost kernel: SVM: kvm: Nested Paging enabled Nov 23 01:41:08 localhost kernel: SVM: LBR virtualization supported Nov 23 01:41:08 localhost systemd[1]: Mounting /boot... Nov 23 01:41:08 localhost kernel: XFS (vda3): Mounting V5 Filesystem Nov 23 01:41:08 localhost kernel: XFS (vda3): Ending clean mount Nov 23 01:41:08 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff) Nov 23 01:41:08 localhost systemd[1]: Mounted /boot. Nov 23 01:41:08 localhost systemd[1]: Mounting /boot/efi... Nov 23 01:41:08 localhost systemd[1]: Mounted /boot/efi. Nov 23 01:41:08 localhost systemd[1]: Reached target Local File Systems. Nov 23 01:41:08 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache... Nov 23 01:41:08 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux). Nov 23 01:41:08 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 01:41:08 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 23 01:41:08 localhost systemd[1]: Starting Automatic Boot Loader Update... Nov 23 01:41:08 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id). Nov 23 01:41:08 localhost systemd[1]: Starting Create Volatile Files and Directories... Nov 23 01:41:08 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 717 (bootctl) Nov 23 01:41:08 localhost systemd[1]: Starting File System Check on /dev/vda2... Nov 23 01:41:08 localhost systemd[1]: Finished File System Check on /dev/vda2. Nov 23 01:41:08 localhost systemd[1]: Mounting EFI System Partition Automount... Nov 23 01:41:08 localhost systemd[1]: Mounted EFI System Partition Automount. Nov 23 01:41:08 localhost systemd[1]: Finished Automatic Boot Loader Update. Nov 23 01:41:08 localhost systemd[1]: Finished Create Volatile Files and Directories. Nov 23 01:41:08 localhost systemd[1]: Starting Security Auditing Service... Nov 23 01:41:08 localhost systemd[1]: Starting RPC Bind... Nov 23 01:41:08 localhost systemd[1]: Starting Rebuild Journal Catalog... Nov 23 01:41:08 localhost systemd[1]: Finished Rebuild Journal Catalog. 
Nov 23 01:41:08 localhost auditd[726]: audit dispatcher initialized with q_depth=1200 and 1 active plugins Nov 23 01:41:08 localhost auditd[726]: Init complete, auditd 3.0.7 listening for events (startup state enable) Nov 23 01:41:08 localhost systemd[1]: Started RPC Bind. Nov 23 01:41:09 localhost augenrules[731]: /sbin/augenrules: No change Nov 23 01:41:09 localhost augenrules[741]: No rules Nov 23 01:41:09 localhost augenrules[741]: enabled 1 Nov 23 01:41:09 localhost augenrules[741]: failure 1 Nov 23 01:41:09 localhost augenrules[741]: pid 726 Nov 23 01:41:09 localhost augenrules[741]: rate_limit 0 Nov 23 01:41:09 localhost augenrules[741]: backlog_limit 8192 Nov 23 01:41:09 localhost augenrules[741]: lost 0 Nov 23 01:41:09 localhost augenrules[741]: backlog 0 Nov 23 01:41:09 localhost augenrules[741]: backlog_wait_time 60000 Nov 23 01:41:09 localhost augenrules[741]: backlog_wait_time_actual 0 Nov 23 01:41:09 localhost augenrules[741]: enabled 1 Nov 23 01:41:09 localhost augenrules[741]: failure 1 Nov 23 01:41:09 localhost augenrules[741]: pid 726 Nov 23 01:41:09 localhost augenrules[741]: rate_limit 0 Nov 23 01:41:09 localhost augenrules[741]: backlog_limit 8192 Nov 23 01:41:09 localhost augenrules[741]: lost 0 Nov 23 01:41:09 localhost augenrules[741]: backlog 0 Nov 23 01:41:09 localhost augenrules[741]: backlog_wait_time 60000 Nov 23 01:41:09 localhost augenrules[741]: backlog_wait_time_actual 0 Nov 23 01:41:09 localhost augenrules[741]: enabled 1 Nov 23 01:41:09 localhost augenrules[741]: failure 1 Nov 23 01:41:09 localhost augenrules[741]: pid 726 Nov 23 01:41:09 localhost augenrules[741]: rate_limit 0 Nov 23 01:41:09 localhost augenrules[741]: backlog_limit 8192 Nov 23 01:41:09 localhost augenrules[741]: lost 0 Nov 23 01:41:09 localhost augenrules[741]: backlog 0 Nov 23 01:41:09 localhost augenrules[741]: backlog_wait_time 60000 Nov 23 01:41:09 localhost augenrules[741]: backlog_wait_time_actual 0 Nov 23 01:41:09 localhost systemd[1]: Started Security Auditing Service. Nov 23 01:41:09 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP... Nov 23 01:41:09 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP. Nov 23 01:41:09 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache. Nov 23 01:41:09 localhost systemd[1]: Starting Update is Completed... Nov 23 01:41:09 localhost systemd[1]: Finished Update is Completed. Nov 23 01:41:09 localhost systemd[1]: Reached target System Initialization. Nov 23 01:41:09 localhost systemd[1]: Started dnf makecache --timer. Nov 23 01:41:09 localhost systemd[1]: Started Daily rotation of log files. Nov 23 01:41:09 localhost systemd[1]: Started Daily Cleanup of Temporary Directories. Nov 23 01:41:09 localhost systemd[1]: Reached target Timer Units. Nov 23 01:41:09 localhost systemd[1]: Listening on D-Bus System Message Bus Socket. Nov 23 01:41:09 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket. Nov 23 01:41:09 localhost systemd[1]: Reached target Socket Units. Nov 23 01:41:09 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)... Nov 23 01:41:09 localhost systemd[1]: Starting D-Bus System Message Bus... Nov 23 01:41:09 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 23 01:41:09 localhost systemd[1]: Started D-Bus System Message Bus. Nov 23 01:41:09 localhost systemd[1]: Reached target Basic System. 
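The augenrules output above is the kernel audit status dump (no rules loaded, backlog_limit 8192, auditd at PID 726). The same fields can be queried directly once the system is up, assuming the audit package that provides auditd:

    # print current kernel audit status (enabled, failure, backlog_limit, ...)
    auditctl -s
    # list the rules augenrules loaded (here: none)
    auditctl -l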
Nov 23 01:41:09 localhost journal[751]: Ready Nov 23 01:41:09 localhost systemd[1]: Starting NTP client/server... Nov 23 01:41:09 localhost systemd[1]: Starting Restore /run/initramfs on shutdown... Nov 23 01:41:09 localhost systemd[1]: Started irqbalance daemon. Nov 23 01:41:09 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload). Nov 23 01:41:09 localhost systemd[1]: Starting System Logging Service... Nov 23 01:41:09 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 01:41:09 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 01:41:09 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 01:41:09 localhost systemd[1]: Reached target sshd-keygen.target. Nov 23 01:41:09 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met. Nov 23 01:41:09 localhost systemd[1]: Reached target User and Group Name Lookups. Nov 23 01:41:09 localhost systemd[1]: Starting User Login Management... Nov 23 01:41:09 localhost systemd[1]: Finished Restore /run/initramfs on shutdown. Nov 23 01:41:09 localhost rsyslogd[759]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="759" x-info="https://www.rsyslog.com"] start Nov 23 01:41:09 localhost rsyslogd[759]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ] Nov 23 01:41:09 localhost systemd[1]: Started System Logging Service. Nov 23 01:41:09 localhost chronyd[766]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Nov 23 01:41:09 localhost chronyd[766]: Using right/UTC timezone to obtain leap second data Nov 23 01:41:09 localhost chronyd[766]: Loaded seccomp filter (level 2) Nov 23 01:41:09 localhost systemd[1]: Started NTP client/server. Nov 23 01:41:09 localhost systemd-logind[760]: New seat seat0. Nov 23 01:41:09 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 01:41:09 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 23 01:41:09 localhost systemd[1]: Started User Login Management. Nov 23 01:41:09 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 01:41:09 localhost cloud-init[770]: Cloud-init v. 22.1-9.el9 running 'init-local' at Sun, 23 Nov 2025 06:41:09 +0000. Up 6.34 seconds. Nov 23 01:41:10 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpl0lzgvkb.mount: Deactivated successfully. Nov 23 01:41:10 localhost systemd[1]: Starting Hostname Service... Nov 23 01:41:10 localhost systemd[1]: Started Hostname Service. Nov 23 01:41:10 localhost systemd-hostnamed[784]: Hostname set to (static) Nov 23 01:41:10 localhost systemd[1]: Finished Initial cloud-init job (pre-networking). 
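chronyd has started with its default pool configuration; its synchronisation state can be inspected once a source is selected (see the "Selected source" entries further down), assuming the chronyc client shipped with chrony:

    # current reference source, offset and skew
    chronyc tracking
    # NTP sources with reachability and selection state
    chronyc sources -v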
Nov 23 01:41:10 localhost systemd[1]: Reached target Preparation for Network. Nov 23 01:41:10 localhost systemd[1]: Starting Network Manager... Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4179] NetworkManager (version 1.42.2-1.el9) is starting... (boot:a8bcb6c7-7b47-4821-8efb-ebcb05504f11) Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4186] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4230] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Nov 23 01:41:10 localhost systemd[1]: Started Network Manager. Nov 23 01:41:10 localhost systemd[1]: Reached target Network. Nov 23 01:41:10 localhost systemd[1]: Starting Network Manager Wait Online... Nov 23 01:41:10 localhost systemd[1]: Starting GSSAPI Proxy Daemon... Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4341] manager[0x563776035020]: monitoring kernel firmware directory '/lib/firmware'. Nov 23 01:41:10 localhost systemd[1]: Starting Enable periodic update of entitlement certificates.... Nov 23 01:41:10 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Nov 23 01:41:10 localhost systemd[1]: Started Enable periodic update of entitlement certificates.. Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4444] hostname: hostname: using hostnamed Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4445] hostname: static hostname changed from (none) to "np0005532584.novalocal" Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4463] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Nov 23 01:41:10 localhost systemd[1]: Started GSSAPI Proxy Daemon. Nov 23 01:41:10 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). Nov 23 01:41:10 localhost systemd[1]: Reached target NFS client services. Nov 23 01:41:10 localhost systemd[1]: Reached target Preparation for Remote File Systems. Nov 23 01:41:10 localhost systemd[1]: Reached target Remote File Systems. Nov 23 01:41:10 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4619] manager[0x563776035020]: rfkill: Wi-Fi hardware radio set enabled Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4620] manager[0x563776035020]: rfkill: WWAN hardware radio set enabled Nov 23 01:41:10 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch. 
Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4720] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4724] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4737] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4738] manager: Networking is enabled by state file Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4781] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4781] settings: Loaded settings plugin: keyfile (internal) Nov 23 01:41:10 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4832] dhcp: init: Using DHCP client 'internal' Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4836] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4856] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4864] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4875] device (lo): Activation: starting connection 'lo' (8cb417fe-36ff-4a47-ac39-711ef6c42393) Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4889] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4893] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Nov 23 01:41:10 localhost systemd[1]: Started Network Manager Script Dispatcher Service. 
Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4944] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4947] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4949] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4952] device (eth0): carrier: link connected Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4956] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4963] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4972] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4977] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4979] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4983] manager: NetworkManager state is now CONNECTING Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.4985] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5039] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5044] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5051] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5063] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5069] device (lo): Activation: successful, device activated. Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5089] dhcp4 (eth0): state changed new lease, address=38.102.83.248 Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5097] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5130] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5153] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5156] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5161] manager: NetworkManager state is now CONNECTED_SITE Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5164] device (eth0): Activation: successful, device activated. 
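NetworkManager has auto-activated 'System eth0' and taken a DHCP lease of 38.102.83.248. A quick sketch of reading the same state back with nmcli:

    # device state, driver and IPv4/IPv6 configuration
    nmcli device show eth0
    # the profile that was auto-activated above
    nmcli connection show "System eth0"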
Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5170] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 01:41:10 localhost NetworkManager[789]: [1763880070.5174] manager: startup complete
Nov 23 01:41:10 localhost systemd[1]: Finished Network Manager Wait Online.
Nov 23 01:41:10 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Nov 23 01:41:10 localhost cloud-init[919]: Cloud-init v. 22.1-9.el9 running 'init' at Sun, 23 Nov 2025 06:41:10 +0000. Up 7.24 seconds.
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | eth0 | True | 38.102.83.248 | 255.255.255.0 | global | fa:16:3e:15:63:3a |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | eth0 | True | fe80::f816:3eff:fe15:633a/64 | . | link | fa:16:3e:15:63:3a |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | lo | True | ::1/128 | . | host | . |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | 0 | 0.0.0.0 | 38.102.83.1 | 0.0.0.0 | eth0 | UG |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | 1 | 38.102.83.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | 2 | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 | eth0 | UGH |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | 1 | fe80::/64 | :: | eth0 | U |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: | 3 | multicast | :: | eth0 | U |
Nov 23 01:41:10 localhost cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 01:41:10 localhost systemd[1]: Starting Authorization Manager... Nov 23 01:41:10 localhost systemd[1]: Started Dynamic System Tuning Daemon.
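The ci-info tables record the addresses and routes cloud-init found at 'init' time; the live equivalents come from iproute2:

    # addresses on eth0 (matches the Net device info table)
    ip -4 addr show dev eth0
    # kernel routing table (matches the Route IPv4 info table)
    ip route show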
Nov 23 01:41:10 localhost polkitd[1036]: Started polkitd version 0.117 Nov 23 01:41:11 localhost systemd[1]: Started Authorization Manager. Nov 23 01:41:14 localhost cloud-init[919]: Generating public/private rsa key pair. Nov 23 01:41:14 localhost cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key Nov 23 01:41:14 localhost cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub Nov 23 01:41:14 localhost cloud-init[919]: The key fingerprint is: Nov 23 01:41:14 localhost cloud-init[919]: SHA256:JphQOeDgHcYf6jX26pg5Q0qizKKv+dN0wgnWsQhB99U root@np0005532584.novalocal Nov 23 01:41:14 localhost cloud-init[919]: The key's randomart image is: Nov 23 01:41:14 localhost cloud-init[919]: +---[RSA 3072]----+ Nov 23 01:41:14 localhost cloud-init[919]: |+oo=.. .. | Nov 23 01:41:14 localhost cloud-init[919]: |oo+o*.. E | Nov 23 01:41:14 localhost cloud-init[919]: | oo+o=. | Nov 23 01:41:14 localhost cloud-init[919]: | +ooB | Nov 23 01:41:14 localhost cloud-init[919]: | ..o=.+ S | Nov 23 01:41:14 localhost cloud-init[919]: |.. o= .+ | Nov 23 01:41:14 localhost cloud-init[919]: |* oo o. | Nov 23 01:41:14 localhost cloud-init[919]: |o=.o=. | Nov 23 01:41:14 localhost cloud-init[919]: |*+o=+. | Nov 23 01:41:14 localhost cloud-init[919]: +----[SHA256]-----+ Nov 23 01:41:14 localhost cloud-init[919]: Generating public/private ecdsa key pair. Nov 23 01:41:14 localhost cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key Nov 23 01:41:14 localhost cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub Nov 23 01:41:14 localhost cloud-init[919]: The key fingerprint is: Nov 23 01:41:14 localhost cloud-init[919]: SHA256:O+3bk4SWunDBJVbgXm6e+9R4snCN8IhSMcchY+hlYa0 root@np0005532584.novalocal Nov 23 01:41:14 localhost cloud-init[919]: The key's randomart image is: Nov 23 01:41:14 localhost cloud-init[919]: +---[ECDSA 256]---+ Nov 23 01:41:14 localhost cloud-init[919]: | .B+o | Nov 23 01:41:14 localhost cloud-init[919]: | .+o=.. | Nov 23 01:41:14 localhost cloud-init[919]: | . o*.= | Nov 23 01:41:14 localhost cloud-init[919]: | .+EO | Nov 23 01:41:14 localhost cloud-init[919]: | S +o | Nov 23 01:41:14 localhost cloud-init[919]: | . B+=.= | Nov 23 01:41:14 localhost cloud-init[919]: | o =o*.B.+ | Nov 23 01:41:14 localhost cloud-init[919]: | +.o *o+ | Nov 23 01:41:14 localhost cloud-init[919]: | ..+o+. | Nov 23 01:41:14 localhost cloud-init[919]: +----[SHA256]-----+ Nov 23 01:41:14 localhost cloud-init[919]: Generating public/private ed25519 key pair. Nov 23 01:41:14 localhost cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key Nov 23 01:41:14 localhost cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub Nov 23 01:41:14 localhost cloud-init[919]: The key fingerprint is: Nov 23 01:41:14 localhost cloud-init[919]: SHA256:Qp0UAAidM9adehdZdmcv6IeiJF8t6ZUkIHsOOR+tTnM root@np0005532584.novalocal Nov 23 01:41:14 localhost cloud-init[919]: The key's randomart image is: Nov 23 01:41:14 localhost cloud-init[919]: +--[ED25519 256]--+ Nov 23 01:41:14 localhost cloud-init[919]: |.o +.ooo++o . o | Nov 23 01:41:14 localhost cloud-init[919]: | B . o*o= . + . | Nov 23 01:41:14 localhost cloud-init[919]: | . o .* =.o o . .| Nov 23 01:41:14 localhost cloud-init[919]: | ...*.o * o . | Nov 23 01:41:14 localhost cloud-init[919]: | .o.S E * . | Nov 23 01:41:14 localhost cloud-init[919]: | B * + . | Nov 23 01:41:14 localhost cloud-init[919]: | + . 
| Nov 23 01:41:14 localhost cloud-init[919]: | | Nov 23 01:41:14 localhost cloud-init[919]: | | Nov 23 01:41:14 localhost cloud-init[919]: +----[SHA256]-----+ Nov 23 01:41:14 localhost sm-notify[1132]: Version 2.5.4 starting Nov 23 01:41:14 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler). Nov 23 01:41:14 localhost systemd[1]: Reached target Cloud-config availability. Nov 23 01:41:14 localhost systemd[1]: Reached target Network is Online. Nov 23 01:41:14 localhost systemd[1]: Starting Apply the settings specified in cloud-config... Nov 23 01:41:14 localhost sshd[1133]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:14 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot). Nov 23 01:41:14 localhost systemd[1]: Starting Crash recovery kernel arming... Nov 23 01:41:14 localhost systemd[1]: Starting Notify NFS peers of a restart... Nov 23 01:41:14 localhost sshd[1142]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:14 localhost systemd[1]: Starting OpenSSH server daemon... Nov 23 01:41:14 localhost systemd[1]: Starting Permit User Sessions... Nov 23 01:41:14 localhost systemd[1]: Started Notify NFS peers of a restart. Nov 23 01:41:14 localhost systemd[1]: Finished Permit User Sessions. Nov 23 01:41:14 localhost systemd[1]: Started Command Scheduler. Nov 23 01:41:14 localhost systemd[1]: Started Getty on tty1. Nov 23 01:41:14 localhost systemd[1]: Started Serial Getty on ttyS0. Nov 23 01:41:14 localhost systemd[1]: Reached target Login Prompts. Nov 23 01:41:14 localhost systemd[1]: Started OpenSSH server daemon. Nov 23 01:41:14 localhost systemd[1]: Reached target Multi-User System. Nov 23 01:41:14 localhost systemd[1]: Starting Record Runlevel Change in UTMP... Nov 23 01:41:14 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 23 01:41:14 localhost systemd[1]: Finished Record Runlevel Change in UTMP. Nov 23 01:41:14 localhost sshd[1153]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:14 localhost sshd[1171]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:14 localhost sshd[1180]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:14 localhost kdumpctl[1139]: kdump: No kdump initial ramdisk found. Nov 23 01:41:14 localhost kdumpctl[1139]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img Nov 23 01:41:14 localhost sshd[1193]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:14 localhost sshd[1199]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:14 localhost sshd[1223]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:14 localhost sshd[1254]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:14 localhost sshd[1262]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:14 localhost cloud-init[1287]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Sun, 23 Nov 2025 06:41:14 +0000. Up 11.01 seconds. Nov 23 01:41:14 localhost systemd[1]: Finished Apply the settings specified in cloud-config. Nov 23 01:41:14 localhost systemd[1]: Starting Execute cloud user/final scripts... Nov 23 01:41:14 localhost dracut[1435]: dracut-057-21.git20230214.el9 Nov 23 01:41:14 localhost cloud-init[1453]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Sun, 23 Nov 2025 06:41:14 +0000. Up 11.38 seconds. 
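The repeated "ssh-rsa algorithm is disabled" notices above appear to come from the RHEL 9 system-wide crypto policy, which disables SHA-1-based ssh-rsa signatures by default. Assuming crypto-policies-scripts is installed, the active policy can be shown (and, only if legacy clients truly require it, relaxed):

    # show the active system-wide crypto policy (DEFAULT on RHEL 9)
    update-crypto-policies --show
    # only if legacy ssh-rsa signatures are genuinely required:
    #   update-crypto-policies --set DEFAULT:SHA1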
Nov 23 01:41:14 localhost dracut[1437]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64 Nov 23 01:41:15 localhost cloud-init[1493]: ############################################################# Nov 23 01:41:15 localhost cloud-init[1499]: -----BEGIN SSH HOST KEY FINGERPRINTS----- Nov 23 01:41:15 localhost cloud-init[1506]: 256 SHA256:O+3bk4SWunDBJVbgXm6e+9R4snCN8IhSMcchY+hlYa0 root@np0005532584.novalocal (ECDSA) Nov 23 01:41:15 localhost cloud-init[1517]: 256 SHA256:Qp0UAAidM9adehdZdmcv6IeiJF8t6ZUkIHsOOR+tTnM root@np0005532584.novalocal (ED25519) Nov 23 01:41:15 localhost cloud-init[1524]: 3072 SHA256:JphQOeDgHcYf6jX26pg5Q0qizKKv+dN0wgnWsQhB99U root@np0005532584.novalocal (RSA) Nov 23 01:41:15 localhost cloud-init[1528]: -----END SSH HOST KEY FINGERPRINTS----- Nov 23 01:41:15 localhost cloud-init[1531]: ############################################################# Nov 23 01:41:15 localhost cloud-init[1453]: Cloud-init v. 22.1-9.el9 finished at Sun, 23 Nov 2025 06:41:15 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 11.61 seconds Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Nov 23 01:41:15 localhost systemd[1]: Reloading Network Manager... Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Nov 23 01:41:15 localhost NetworkManager[789]: [1763880075.2411] audit: op="reload" arg="0" pid=1595 uid=0 result="success" Nov 23 01:41:15 localhost NetworkManager[789]: [1763880075.2420] config: signal: SIGHUP (no changes from disk) Nov 23 01:41:15 localhost dracut[1437]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found! Nov 23 01:41:15 localhost systemd[1]: Reloaded Network Manager. Nov 23 01:41:15 localhost systemd[1]: Finished Execute cloud user/final scripts. Nov 23 01:41:15 localhost systemd[1]: Reached target Cloud-init target. Nov 23 01:41:15 localhost dracut[1437]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found! 
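The dracut invocation logged at the top of this block is easier to read spread across lines; the same command, reflowed with options and paths exactly as recorded:

    /usr/bin/dracut \
        --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n \
        --hostonly-mode strict --hostonly-nics \
        -o "plymouth resume ifcfg earlykdump" \
        --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" \
        --squash-compressor zstd --no-hostonly-default-device \
        -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img \
        5.14.0-284.11.1.el9_2.x86_64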
Nov 23 01:41:15 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmand' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found! Nov 23 01:41:15 localhost dracut[1437]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found! Nov 23 01:41:15 localhost chronyd[766]: Selected source 50.43.156.177 (2.rhel.pool.ntp.org) Nov 23 01:41:15 localhost chronyd[766]: System clock TAI offset set to 37 seconds Nov 23 01:41:15 localhost dracut[1437]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found! Nov 23 01:41:15 localhost dracut[1437]: memstrack is not available Nov 23 01:41:15 localhost dracut[1437]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! 
Nov 23 01:41:15 localhost dracut[1437]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmand' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found! Nov 23 01:41:15 localhost dracut[1437]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found! Nov 23 01:41:15 localhost dracut[1437]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found! 
Nov 23 01:41:15 localhost dracut[1437]: memstrack is not available Nov 23 01:41:15 localhost dracut[1437]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng Nov 23 01:41:16 localhost dracut[1437]: *** Including module: systemd *** Nov 23 01:41:16 localhost dracut[1437]: *** Including module: systemd-initrd *** Nov 23 01:41:16 localhost dracut[1437]: *** Including module: i18n *** Nov 23 01:41:16 localhost dracut[1437]: No KEYMAP configured. Nov 23 01:41:16 localhost dracut[1437]: *** Including module: drm *** Nov 23 01:41:16 localhost dracut[1437]: *** Including module: prefixdevname *** Nov 23 01:41:16 localhost dracut[1437]: *** Including module: kernel-modules *** Nov 23 01:41:16 localhost chronyd[766]: Selected source 167.160.187.12 (2.rhel.pool.ntp.org) Nov 23 01:41:17 localhost dracut[1437]: *** Including module: kernel-modules-extra *** Nov 23 01:41:17 localhost dracut[1437]: *** Including module: qemu *** Nov 23 01:41:17 localhost dracut[1437]: *** Including module: fstab-sys *** Nov 23 01:41:17 localhost dracut[1437]: *** Including module: rootfs-block *** Nov 23 01:41:17 localhost dracut[1437]: *** Including module: terminfo *** Nov 23 01:41:17 localhost dracut[1437]: *** Including module: udev-rules *** Nov 23 01:41:17 localhost dracut[1437]: Skipping udev rule: 91-permissions.rules Nov 23 01:41:17 localhost dracut[1437]: Skipping udev rule: 80-drivers-modprobe.rules Nov 23 01:41:18 localhost dracut[1437]: *** Including module: virtiofs *** Nov 23 01:41:18 localhost dracut[1437]: *** Including module: dracut-systemd *** Nov 23 01:41:18 localhost dracut[1437]: *** Including module: usrmount *** Nov 23 01:41:18 localhost dracut[1437]: *** Including module: base *** Nov 23 01:41:18 localhost dracut[1437]: *** Including module: fs-lib *** Nov 23 01:41:18 localhost dracut[1437]: *** Including module: kdumpbase *** Nov 23 01:41:18 localhost dracut[1437]: *** Including module: microcode_ctl-fw_dir_override *** Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl module: mangling fw_dir Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"... Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: configuration "intel" is ignored Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"... Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: configuration "intel-06-2d-07" is ignored Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"... Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: configuration "intel-06-4e-03" is ignored Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"... Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: configuration "intel-06-4f-01" is ignored Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"... Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: configuration "intel-06-55-04" is ignored Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"... 
Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: configuration "intel-06-5e-03" is ignored Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"... Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: configuration "intel-06-8c-01" is ignored Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"... Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored Nov 23 01:41:18 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"... Nov 23 01:41:19 localhost dracut[1437]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored Nov 23 01:41:19 localhost dracut[1437]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware" Nov 23 01:41:19 localhost dracut[1437]: *** Including module: shutdown *** Nov 23 01:41:19 localhost dracut[1437]: *** Including module: squash *** Nov 23 01:41:19 localhost dracut[1437]: *** Including modules done *** Nov 23 01:41:19 localhost dracut[1437]: *** Installing kernel module dependencies *** Nov 23 01:41:19 localhost dracut[1437]: *** Installing kernel module dependencies done *** Nov 23 01:41:19 localhost dracut[1437]: *** Resolving executable dependencies *** Nov 23 01:41:20 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Nov 23 01:41:21 localhost dracut[1437]: *** Resolving executable dependencies done *** Nov 23 01:41:21 localhost dracut[1437]: *** Hardlinking files *** Nov 23 01:41:21 localhost dracut[1437]: Mode: real Nov 23 01:41:21 localhost dracut[1437]: Files: 1099 Nov 23 01:41:21 localhost dracut[1437]: Linked: 3 files Nov 23 01:41:21 localhost dracut[1437]: Compared: 0 xattrs Nov 23 01:41:21 localhost dracut[1437]: Compared: 373 files Nov 23 01:41:21 localhost dracut[1437]: Saved: 61.04 KiB Nov 23 01:41:21 localhost dracut[1437]: Duration: 0.042145 seconds Nov 23 01:41:21 localhost dracut[1437]: *** Hardlinking files done *** Nov 23 01:41:21 localhost dracut[1437]: Could not find 'strip'. Not stripping the initramfs. Nov 23 01:41:21 localhost dracut[1437]: *** Generating early-microcode cpio image *** Nov 23 01:41:21 localhost dracut[1437]: *** Constructing AuthenticAMD.bin *** Nov 23 01:41:21 localhost dracut[1437]: *** Store current command line parameters *** Nov 23 01:41:21 localhost dracut[1437]: Stored kernel commandline: Nov 23 01:41:21 localhost dracut[1437]: No dracut internal kernel commandline stored in the initramfs Nov 23 01:41:21 localhost dracut[1437]: *** Install squash loader *** Nov 23 01:41:21 localhost dracut[1437]: *** Squashing the files inside the initramfs *** Nov 23 01:41:22 localhost dracut[1437]: *** Squashing the files inside the initramfs done *** Nov 23 01:41:22 localhost dracut[1437]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' *** Nov 23 01:41:23 localhost dracut[1437]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done *** Nov 23 01:41:23 localhost kdumpctl[1139]: kdump: kexec: loaded kdump kernel Nov 23 01:41:23 localhost kdumpctl[1139]: kdump: Starting kdump: [OK] Nov 23 01:41:23 localhost systemd[1]: Finished Crash recovery kernel arming. 
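With the kdump initramfs built and the crash kernel loaded via kexec, the result can be checked; a sketch assuming the kexec-tools and dracut packages that produced the messages above:

    # confirm the crash kernel is loaded and kdump is operational
    kdumpctl status
    # peek at the contents of the freshly built kdump initramfs
    lsinitrd /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img | head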
Nov 23 01:41:23 localhost systemd[1]: Startup finished in 1.445s (kernel) + 1.908s (initrd) + 16.778s (userspace) = 20.132s. Nov 23 01:41:40 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 23 01:41:42 localhost sshd[4176]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:41:43 localhost systemd[1]: Created slice User Slice of UID 1000. Nov 23 01:41:43 localhost systemd[1]: Starting User Runtime Directory /run/user/1000... Nov 23 01:41:43 localhost systemd-logind[760]: New session 1 of user zuul. Nov 23 01:41:43 localhost systemd[1]: Finished User Runtime Directory /run/user/1000. Nov 23 01:41:43 localhost systemd[1]: Starting User Manager for UID 1000... Nov 23 01:41:43 localhost systemd[4180]: Queued start job for default target Main User Target. Nov 23 01:41:43 localhost systemd[4180]: Created slice User Application Slice. Nov 23 01:41:43 localhost systemd[4180]: Started Mark boot as successful after the user session has run 2 minutes. Nov 23 01:41:43 localhost systemd[4180]: Started Daily Cleanup of User's Temporary Directories. Nov 23 01:41:43 localhost systemd[4180]: Reached target Paths. Nov 23 01:41:43 localhost systemd[4180]: Reached target Timers. Nov 23 01:41:43 localhost systemd[4180]: Starting D-Bus User Message Bus Socket... Nov 23 01:41:43 localhost systemd[4180]: Starting Create User's Volatile Files and Directories... Nov 23 01:41:43 localhost systemd[4180]: Finished Create User's Volatile Files and Directories. Nov 23 01:41:43 localhost systemd[4180]: Listening on D-Bus User Message Bus Socket. Nov 23 01:41:43 localhost systemd[4180]: Reached target Sockets. Nov 23 01:41:43 localhost systemd[4180]: Reached target Basic System. Nov 23 01:41:43 localhost systemd[4180]: Reached target Main User Target. Nov 23 01:41:43 localhost systemd[4180]: Startup finished in 117ms. Nov 23 01:41:43 localhost systemd[1]: Started User Manager for UID 1000. Nov 23 01:41:43 localhost systemd[1]: Started Session 1 of User zuul. 
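systemd reports the boot split across kernel, initrd and userspace (20.132s total). The same figures, plus the slowest units, can be pulled from systemd-analyze:

    # overall timing, matching the 'Startup finished' line above
    systemd-analyze
    # per-unit startup cost, longest first
    systemd-analyze blame
    # dependency chain that gated reaching multi-user.target
    systemd-analyze critical-chain multi-user.target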
Nov 23 01:41:43 localhost python3[4232]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 01:41:52 localhost python3[4251]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 01:42:00 localhost python3[4304]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 01:42:01 localhost python3[4334]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present Nov 23 01:42:04 localhost python3[4350]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:04 localhost python3[4364]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:06 localhost python3[4423]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 01:42:06 localhost python3[4464]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763880125.9029481-388-22839749702437/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=4c10693c6dd746ababb601eafce6af01_id_rsa follow=False checksum=4877a9422cfa308f85d093f4f170aa8e2f5129bc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:07 localhost python3[4537]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 01:42:08 localhost python3[4578]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763880127.595664-487-221169947332875/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=4c10693c6dd746ababb601eafce6af01_id_rsa.pub follow=False checksum=9e6358c9dcdfe108c4f779a3b698bd3c9d97da46 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:10 localhost python3[4606]: ansible-ping Invoked with data=pong Nov 23 01:42:12 localhost python3[4620]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] 
fact_path=/etc/ansible/facts.d Nov 23 01:42:15 localhost python3[4673]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None Nov 23 01:42:18 localhost python3[4695]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:18 localhost python3[4709]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:19 localhost python3[4723]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:20 localhost python3[4737]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:20 localhost python3[4751]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:20 localhost python3[4765]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:23 localhost python3[4781]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:24 localhost python3[4829]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 01:42:25 localhost python3[4872]: 
ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763880144.563407-98-162276902265612/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:32 localhost python3[4901]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:32 localhost python3[4915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:33 localhost python3[4929]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:33 localhost python3[4943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:33 localhost python3[4957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:33 localhost python3[4971]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None 
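Note: the ansible-authorized_key entries above each add one public key for the zuul user with state=present and manage_dir=True. Roughly, that amounts to an idempotent append to ~/.ssh/authorized_keys: create the directory with restrictive permissions if needed, then add the key only when it is not already present. A minimal sketch of that behaviour (plain Python, not the module's actual implementation; the 0700/0600 modes are the usual OpenSSH conventions):

    from pathlib import Path

    def add_authorized_key(home: str, pubkey: str) -> bool:
        """Append pubkey to <home>/.ssh/authorized_keys unless it is already there.

        Returns True when the file was changed, False when the key was present.
        """
        ssh_dir = Path(home) / ".ssh"
        ssh_dir.mkdir(mode=0o700, exist_ok=True)          # manage_dir=True equivalent
        auth = ssh_dir / "authorized_keys"
        existing = auth.read_text().splitlines() if auth.exists() else []
        if pubkey.strip() in (line.strip() for line in existing):
            return False                                   # state=present, nothing to do
        with auth.open("a") as fh:
            fh.write(pubkey.rstrip("\n") + "\n")
        auth.chmod(0o600)
        return True

    # Example: add_authorized_key("/home/zuul", "ssh-ed25519 AAAA... user@host")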
Nov 23 01:42:34 localhost python3[4985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:34 localhost python3[4999]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:34 localhost python3[5013]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:34 localhost python3[5027]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:35 localhost python3[5041]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:35 localhost python3[5055]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:35 localhost python3[5069]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:36 localhost python3[5083]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:36 localhost python3[5097]: 
ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:36 localhost python3[5111]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:36 localhost python3[5125]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:37 localhost python3[5139]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:37 localhost python3[5153]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:37 localhost python3[5167]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:37 localhost python3[5181]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:38 localhost python3[5195]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:38 localhost 
python3[5209]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:38 localhost python3[5223]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:39 localhost python3[5237]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:39 localhost python3[5251]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:42:40 localhost python3[5267]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Nov 23 01:42:40 localhost systemd[1]: Starting Time & Date Service... Nov 23 01:42:41 localhost systemd[1]: Started Time & Date Service. Nov 23 01:42:41 localhost systemd-timedated[5269]: Changed time zone to 'UTC' (UTC). 
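Note: the last entries above show community.general.timezone setting the zone to UTC, which systemd-timedated then applies ("Changed time zone to 'UTC'"). Outside Ansible the same change is typically a single timedatectl call; a minimal, hedged sketch that skips the call when the zone is already UTC:

    import subprocess

    def ensure_timezone(zone: str = "UTC") -> bool:
        """Set the system time zone via timedatectl; return True if a change was made."""
        current = subprocess.run(
            ["timedatectl", "show", "--property=Timezone", "--value"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        if current == zone:
            return False
        # Requires root; systemd-timedated is started on demand, as in the log above.
        subprocess.run(["timedatectl", "set-timezone", zone], check=True)
        return True

    if __name__ == "__main__":
        print("changed:", ensure_timezone("UTC"))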
Nov 23 01:42:42 localhost python3[5288]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:43 localhost python3[5334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 01:42:43 localhost python3[5375]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763880163.208991-490-16592287770937/source _original_basename=tmp64uah7vx follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:45 localhost python3[5435]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 01:42:45 localhost python3[5476]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763880164.7712486-581-142721809837001/source _original_basename=tmpym_j8yr_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:47 localhost python3[5538]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 01:42:47 localhost python3[5581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763880166.8437784-725-237406733630031/source _original_basename=tmp9353b73a follow=False checksum=df338bc5c1a182efd6b70f0a857d9251ec8bada6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:48 localhost python3[5609]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:42:48 localhost python3[5625]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:42:50 localhost python3[5675]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False 
get_mime=True get_attributes=True Nov 23 01:42:50 localhost python3[5718]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763880169.7756968-850-62709930505819/source _original_basename=tmp29kxa9xb follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:42:51 localhost python3[5749]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-656c-65aa-000000000023-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:42:52 localhost python3[5767]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-656c-65aa-000000000024-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Nov 23 01:42:54 localhost python3[5785]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:43:11 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Nov 23 01:43:14 localhost python3[5804]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:44:01 localhost systemd[4180]: Starting Mark boot as successful... Nov 23 01:44:01 localhost systemd[4180]: Finished Mark boot as successful. Nov 23 01:44:14 localhost systemd-logind[760]: Session 1 logged out. Waiting for processes to exit. Nov 23 01:45:10 localhost systemd[1]: Unmounting EFI System Partition Automount... Nov 23 01:45:10 localhost systemd[1]: efi.mount: Deactivated successfully. Nov 23 01:45:10 localhost systemd[1]: Unmounted EFI System Partition Automount. 
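Note: the sudoers drop-in above is copied with mode=288 (decimal for octal 0440, the usual sudoers permission) and then checked with /usr/sbin/visudo -c. A safer ordering is to validate the candidate file before it lands in /etc/sudoers.d; a minimal sketch of that pattern (illustrative only, the file name and rule content below are placeholders, not the real zuul-sudo-grep contents):

    import os
    import shutil
    import subprocess
    import tempfile

    def install_sudoers_dropin(content: str, dest: str) -> None:
        """Write a sudoers drop-in, validating it with visudo before installing."""
        with tempfile.NamedTemporaryFile("w", suffix=".sudoers", delete=False) as tmp:
            tmp.write(content)
            path = tmp.name
        try:
            # visudo -c -f checks the candidate file without touching /etc/sudoers.
            subprocess.run(["/usr/sbin/visudo", "-c", "-f", path], check=True)
            shutil.move(path, dest)
            os.chmod(dest, 0o440)      # 288 in the decimal notation used by the log
        finally:
            if os.path.exists(path):
                os.unlink(path)

    # Example (placeholder rule):
    # install_sudoers_dropin("zuul ALL=(ALL) NOPASSWD: ALL\n", "/etc/sudoers.d/zuul-example")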
Nov 23 01:46:28 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 Nov 23 01:46:28 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f] Nov 23 01:46:28 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff] Nov 23 01:46:28 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref] Nov 23 01:46:28 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Nov 23 01:46:28 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref] Nov 23 01:46:28 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref] Nov 23 01:46:28 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff] Nov 23 01:46:28 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f] Nov 23 01:46:28 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4583] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Nov 23 01:46:28 localhost systemd-udevd[5811]: Network interface NamePolicy= disabled on kernel command line. Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4738] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4770] settings: (eth1): created default wired connection 'Wired connection 1' Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4775] device (eth1): carrier: link connected Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4777] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4783] policy: auto-activating connection 'Wired connection 1' (67c4b7cb-97e2-395b-bb2c-a8f3d1e77fde) Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4790] device (eth1): Activation: starting connection 'Wired connection 1' (67c4b7cb-97e2-395b-bb2c-a8f3d1e77fde) Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4791] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4795] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4803] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Nov 23 01:46:28 localhost NetworkManager[789]: [1763880388.4807] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Nov 23 01:46:29 localhost sshd[5813]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:46:29 localhost systemd-logind[760]: New session 3 of user zuul. Nov 23 01:46:29 localhost systemd[1]: Started Session 3 of User zuul. 
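Note: the block above shows a second NIC (eth1) hot-plugged at runtime: NetworkManager creates a default 'Wired connection 1' profile, auto-activates it and starts a DHCP transaction. The resulting device/state view can be read back with nmcli in terse mode; a small hedged sketch (nmcli's -t/-f output is colon-separated, which is easy to split):

    import subprocess

    def nm_device_states() -> dict:
        """Return {device: state} as reported by NetworkManager, e.g. {'eth1': 'connected'}."""
        out = subprocess.run(
            ["nmcli", "-t", "-f", "DEVICE,STATE", "device"],
            capture_output=True, text=True, check=True,
        ).stdout
        states = {}
        for line in out.splitlines():
            if line:
                device, _, state = line.partition(":")
                states[device] = state
        return states

    if __name__ == "__main__":
        for dev, state in nm_device_states().items():
            print(f"{dev}: {state}")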
Nov 23 01:46:29 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready Nov 23 01:46:29 localhost python3[5830]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-7c02-8043-00000000039b-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:46:42 localhost python3[5881]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 01:46:43 localhost python3[5924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763880402.428952-435-15081337716243/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=72995aa4b81f5fe0356864e8f32ad703ac625e0a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:46:43 localhost python3[5954]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 01:46:43 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully. Nov 23 01:46:43 localhost systemd[1]: Stopped Network Manager Wait Online. Nov 23 01:46:43 localhost systemd[1]: Stopping Network Manager Wait Online... Nov 23 01:46:43 localhost systemd[1]: Stopping Network Manager... Nov 23 01:46:43 localhost NetworkManager[789]: [1763880403.7520] caught SIGTERM, shutting down normally. Nov 23 01:46:43 localhost NetworkManager[789]: [1763880403.7629] dhcp4 (eth0): canceled DHCP transaction Nov 23 01:46:43 localhost NetworkManager[789]: [1763880403.7629] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Nov 23 01:46:43 localhost NetworkManager[789]: [1763880403.7630] dhcp4 (eth0): state changed no lease Nov 23 01:46:43 localhost NetworkManager[789]: [1763880403.7633] manager: NetworkManager state is now CONNECTING Nov 23 01:46:43 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Nov 23 01:46:43 localhost NetworkManager[789]: [1763880403.7730] dhcp4 (eth1): canceled DHCP transaction Nov 23 01:46:43 localhost NetworkManager[789]: [1763880403.7731] dhcp4 (eth1): state changed no lease Nov 23 01:46:43 localhost NetworkManager[789]: [1763880403.7791] exiting (success) Nov 23 01:46:43 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Nov 23 01:46:43 localhost systemd[1]: NetworkManager.service: Deactivated successfully. Nov 23 01:46:43 localhost systemd[1]: Stopped Network Manager. Nov 23 01:46:43 localhost systemd[1]: NetworkManager.service: Consumed 2.002s CPU time. Nov 23 01:46:43 localhost systemd[1]: Starting Network Manager... Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.8350] NetworkManager (version 1.42.2-1.el9) is starting... 
(after a restart, boot:a8bcb6c7-7b47-4821-8efb-ebcb05504f11) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.8355] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Nov 23 01:46:43 localhost systemd[1]: Started Network Manager. Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.8383] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Nov 23 01:46:43 localhost systemd[1]: Starting Network Manager Wait Online... Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.8446] manager[0x55f916c08090]: monitoring kernel firmware directory '/lib/firmware'. Nov 23 01:46:43 localhost systemd[1]: Starting Hostname Service... Nov 23 01:46:43 localhost systemd[1]: Started Hostname Service. Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9206] hostname: hostname: using hostnamed Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9207] hostname: static hostname changed from (none) to "np0005532584.novalocal" Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9213] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9222] manager[0x55f916c08090]: rfkill: Wi-Fi hardware radio set enabled Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9222] manager[0x55f916c08090]: rfkill: WWAN hardware radio set enabled Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9264] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9264] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9265] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9266] manager: Networking is enabled by state file Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9274] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9275] settings: Loaded settings plugin: keyfile (internal) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9321] dhcp: init: Using DHCP client 'internal' Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9325] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9333] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9339] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9353] device (lo): Activation: starting connection 'lo' (8cb417fe-36ff-4a47-ac39-711ef6c42393) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9362] device (eth0): carrier: link connected Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9368] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9375] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9375] 
device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9384] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9393] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9401] device (eth1): carrier: link connected Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9406] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9413] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (67c4b7cb-97e2-395b-bb2c-a8f3d1e77fde) (indicated) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9413] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9420] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9428] device (eth1): Activation: starting connection 'Wired connection 1' (67c4b7cb-97e2-395b-bb2c-a8f3d1e77fde) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9456] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9463] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9465] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9468] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9472] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9474] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9478] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9482] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9488] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9493] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9505] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9508] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9559] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9565] dhcp4 (eth0): 
state changed new lease, address=38.102.83.248 Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9577] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9587] device (lo): Activation: successful, device activated. Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9597] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9706] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9756] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9762] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9767] manager: NetworkManager state is now CONNECTED_SITE Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9776] device (eth0): Activation: successful, device activated. Nov 23 01:46:43 localhost NetworkManager[5966]: [1763880403.9783] manager: NetworkManager state is now CONNECTED_GLOBAL Nov 23 01:46:44 localhost python3[6027]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-7c02-8043-000000000120-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:46:54 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Nov 23 01:47:01 localhost systemd[4180]: Created slice User Background Tasks Slice. Nov 23 01:47:01 localhost systemd[4180]: Starting Cleanup of User's Temporary Files and Directories... Nov 23 01:47:01 localhost systemd[4180]: Finished Cleanup of User's Temporary Files and Directories. Nov 23 01:47:13 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 23 01:47:29 localhost NetworkManager[5966]: [1763880449.5177] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Nov 23 01:47:29 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Nov 23 01:47:29 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Nov 23 01:47:29 localhost NetworkManager[5966]: [1763880449.5398] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Nov 23 01:47:29 localhost NetworkManager[5966]: [1763880449.5401] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Nov 23 01:47:29 localhost NetworkManager[5966]: [1763880449.5417] device (eth1): Activation: successful, device activated. Nov 23 01:47:29 localhost NetworkManager[5966]: [1763880449.5426] manager: startup complete Nov 23 01:47:29 localhost systemd[1]: Finished Network Manager Wait Online. Nov 23 01:47:39 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Nov 23 01:47:44 localhost systemd[1]: session-3.scope: Deactivated successfully. Nov 23 01:47:44 localhost systemd[1]: session-3.scope: Consumed 1.490s CPU time. Nov 23 01:47:44 localhost systemd-logind[760]: Session 3 logged out. Waiting for processes to exit. Nov 23 01:47:44 localhost systemd-logind[760]: Removed session 3. 
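Note: once eth0 gets its DHCP lease, NetworkManager marks 'System eth0' as the default for IPv4 routing and DNS, and the job immediately runs `ip route` to record it. With a modern iproute2 the same check is easier against JSON output; a minimal sketch (assumes `ip` supports the -j flag, as it does on RHEL 9):

    import json
    import subprocess

    def default_route():
        """Return (gateway, device) for the IPv4 default route, or None if absent."""
        out = subprocess.run(
            ["ip", "-j", "-4", "route", "show", "default"],
            capture_output=True, text=True, check=True,
        ).stdout
        routes = json.loads(out) if out.strip() else []
        if not routes:
            return None
        route = routes[0]
        return route.get("gateway"), route.get("dev")

    if __name__ == "__main__":
        print(default_route())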
Nov 23 01:48:03 localhost sshd[6058]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:48:56 localhost sshd[6060]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:48:56 localhost systemd-logind[760]: New session 4 of user zuul. Nov 23 01:48:56 localhost systemd[1]: Started Session 4 of User zuul. Nov 23 01:48:57 localhost python3[6111]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 01:48:57 localhost python3[6154]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763880536.7583852-628-243711394743730/source _original_basename=tmpm6pqsw6u follow=False checksum=492625bb7c06d655281f511b293f3f3edc954e6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:49:01 localhost systemd[1]: session-4.scope: Deactivated successfully. Nov 23 01:49:01 localhost systemd-logind[760]: Session 4 logged out. Waiting for processes to exit. Nov 23 01:49:01 localhost systemd-logind[760]: Removed session 4. Nov 23 01:53:57 localhost sshd[6170]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:54:44 localhost sshd[6174]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:54:45 localhost systemd-logind[760]: New session 5 of user zuul. Nov 23 01:54:45 localhost systemd[1]: Started Session 5 of User zuul. Nov 23 01:54:45 localhost python3[6193]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-9f63-9a03-000000001cfc-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:54:47 localhost python3[6212]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:54:47 localhost python3[6228]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:54:47 localhost python3[6244]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:54:48 localhost python3[6260]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:54:48 localhost python3[6276]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:54:50 localhost python3[6324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 01:54:50 localhost python3[6367]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763880889.848365-637-183947325099694/source _original_basename=tmpf4gcl783 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 01:54:52 localhost python3[6397]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 01:54:52 localhost systemd[1]: Reloading. Nov 23 01:54:52 localhost systemd-rc-local-generator[6415]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 01:54:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
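Note: here the job drops an override into /etc/systemd/system.conf.d/ with mode 0644 and asks systemd for a daemon reload (the "Reloading." entry that follows). The contents of override.conf are not logged, so the sketch below only shows the mechanics, with a placeholder [Manager] setting standing in for the real one:

    import os
    import subprocess

    DROPIN_DIR = "/etc/systemd/system.conf.d"
    # Placeholder content -- the real override.conf in the log is not shown.
    OVERRIDE = "[Manager]\nDefaultTimeoutStopSec=90s\n"

    def install_manager_override(name: str = "override.conf") -> None:
        """Write a system.conf drop-in and make systemd re-read its configuration."""
        os.makedirs(DROPIN_DIR, mode=0o755, exist_ok=True)
        path = os.path.join(DROPIN_DIR, name)
        with open(path, "w") as fh:
            fh.write(OVERRIDE)
        os.chmod(path, 0o644)                  # mode=0644, as in the copy task above
        subprocess.run(["systemctl", "daemon-reload"], check=True)

    if __name__ == "__main__":
        install_manager_override()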
Nov 23 01:54:53 localhost python3[6443]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None Nov 23 01:54:54 localhost python3[6459]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:54:55 localhost python3[6477]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:54:55 localhost python3[6495]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:54:55 localhost python3[6513]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:54:56 localhost python3[6530]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-9f63-9a03-000000001d03-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:54:57 localhost python3[6550]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 01:55:00 localhost systemd[1]: session-5.scope: Deactivated successfully. Nov 23 01:55:00 localhost systemd[1]: session-5.scope: Consumed 3.873s CPU time. Nov 23 01:55:00 localhost systemd-logind[760]: Session 5 logged out. Waiting for processes to exit. Nov 23 01:55:00 localhost systemd-logind[760]: Removed session 5. Nov 23 01:55:44 localhost sshd[6555]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:56:14 localhost sshd[6558]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:56:14 localhost systemd[1]: Starting Cleanup of Temporary Directories... Nov 23 01:56:14 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Nov 23 01:56:14 localhost systemd[1]: Finished Cleanup of Temporary Directories. 
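Note: the session above throttles block I/O per slice on cgroup v2: it looks up the MAJ:MIN of /dev/vda with lsblk, writes the same riops/wiops/rbps/wbps line into io.max for init.scope, machine.slice, system.slice and user.slice, then reads the files back. A compact sketch of the same steps (same limits as in the log; needs root and a cgroup2 mount at /sys/fs/cgroup):

    import subprocess

    SLICES = ["init.scope", "machine.slice", "system.slice", "user.slice"]
    LIMITS = "riops=18000 wiops=18000 rbps=262144000 wbps=262144000"

    def throttle_block_io(device: str = "/dev/vda") -> None:
        """Apply the io.max limits from the log to the standard top-level slices."""
        majmin = subprocess.run(
            ["lsblk", "-nd", "-o", "MAJ:MIN", device],
            capture_output=True, text=True, check=True,
        ).stdout.strip()                      # e.g. "252:0" for a virtio disk
        for slice_name in SLICES:
            path = f"/sys/fs/cgroup/{slice_name}/io.max"
            with open(path, "w") as fh:
                fh.write(f"{majmin} {LIMITS}\n")
            with open(path) as fh:            # read back, as the playbook does
                print(slice_name, fh.read().strip())

    if __name__ == "__main__":
        throttle_block_io()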
Nov 23 01:56:14 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Nov 23 01:56:14 localhost systemd-logind[760]: New session 6 of user zuul. Nov 23 01:56:14 localhost systemd[1]: Started Session 6 of User zuul. Nov 23 01:56:15 localhost systemd[1]: Starting RHSM dbus service... Nov 23 01:56:15 localhost systemd[1]: Started RHSM dbus service. Nov 23 01:56:15 localhost rhsm-service[6584]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm' Nov 23 01:56:15 localhost rhsm-service[6584]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm' Nov 23 01:56:15 localhost rhsm-service[6584]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm' Nov 23 01:56:15 localhost rhsm-service[6584]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm' Nov 23 01:56:16 localhost rhsm-service[6584]: INFO [subscription_manager.managerlib:90] Consumer created: np0005532584.novalocal (9009da62-c986-416e-b62f-4d365f106d60) Nov 23 01:56:16 localhost subscription-manager[6584]: Registered system with identity: 9009da62-c986-416e-b62f-4d365f106d60 Nov 23 01:56:17 localhost rhsm-service[6584]: INFO [subscription_manager.entcertlib:131] certs updated: Nov 23 01:56:17 localhost rhsm-service[6584]: Total updates: 1 Nov 23 01:56:17 localhost rhsm-service[6584]: Found (local) serial# [] Nov 23 01:56:17 localhost rhsm-service[6584]: Expected (UEP) serial# [9112898567037215611] Nov 23 01:56:17 localhost rhsm-service[6584]: Added (new) Nov 23 01:56:17 localhost rhsm-service[6584]: [sn:9112898567037215611 ( Content Access,) @ /etc/pki/entitlement/9112898567037215611.pem] Nov 23 01:56:17 localhost rhsm-service[6584]: Deleted (rogue): Nov 23 01:56:17 localhost rhsm-service[6584]: Nov 23 01:56:17 localhost subscription-manager[6584]: Added subscription for 'Content Access' contract 'None' Nov 23 01:56:17 localhost subscription-manager[6584]: Added subscription for product ' Content Access' Nov 23 01:56:18 localhost rhsm-service[6584]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm' Nov 23 01:56:18 localhost rhsm-service[6584]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm' Nov 23 01:56:18 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 01:56:18 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 01:56:18 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 01:56:18 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 01:56:19 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
Nov 23 01:56:26 localhost python3[6676]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-d6e0-fafb-00000000000d-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 01:56:27 localhost python3[6695]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 01:56:57 localhost setsebool[6770]: The virt_use_nfs policy boolean was changed to 1 by root Nov 23 01:56:57 localhost setsebool[6770]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root Nov 23 01:57:05 localhost kernel: SELinux: Converting 407 SID table entries... Nov 23 01:57:05 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 01:57:05 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 01:57:05 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 01:57:05 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 01:57:05 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 01:57:06 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 01:57:06 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 01:57:18 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=3 res=1 Nov 23 01:57:18 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 01:57:18 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 01:57:18 localhost systemd[1]: Reloading. Nov 23 01:57:18 localhost systemd-rc-local-generator[7650]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 01:57:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 01:57:18 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 01:57:19 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 01:57:20 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 01:57:27 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 23 01:57:27 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 01:57:27 localhost systemd[1]: man-db-cache-update.service: Consumed 9.564s CPU time. Nov 23 01:57:27 localhost systemd[1]: run-rdb668312986d49128db5583967e56d90.service: Deactivated successfully. Nov 23 01:58:14 localhost systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck268478359-merged.mount: Deactivated successfully. 
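Note: installing podman above pulls in the container SELinux policy, and the setsebool entries record virt_use_nfs and virt_sandbox_use_all_caps being switched on while the policy is converted and reloaded. Done by hand, a roughly equivalent sequence is a dnf install plus persistent setsebool calls; a hedged sketch:

    import subprocess

    def install_podman_with_booleans() -> None:
        """Install podman and persistently enable the SELinux booleans seen in the log."""
        subprocess.run(["dnf", "-y", "install", "podman"], check=True)
        # -P makes the change persistent across reboots (and reloads the policy).
        for boolean in ("virt_use_nfs", "virt_sandbox_use_all_caps"):
            subprocess.run(["setsebool", "-P", boolean, "on"], check=True)

    if __name__ == "__main__":
        install_podman_with_booleans()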
Nov 23 01:58:14 localhost podman[18366]: 2025-11-23 06:58:14.371082556 +0000 UTC m=+0.101356729 system refresh Nov 23 01:58:15 localhost systemd[4180]: Starting D-Bus User Message Bus... Nov 23 01:58:15 localhost dbus-broker-launch[18422]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Nov 23 01:58:15 localhost dbus-broker-launch[18422]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Nov 23 01:58:15 localhost systemd[4180]: Started D-Bus User Message Bus. Nov 23 01:58:15 localhost journal[18422]: Ready Nov 23 01:58:15 localhost systemd[4180]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1 Nov 23 01:58:15 localhost systemd[4180]: Created slice Slice /user. Nov 23 01:58:15 localhost systemd[4180]: podman-18406.scope: unit configures an IP firewall, but not running as root. Nov 23 01:58:15 localhost systemd[4180]: (This warning is only shown for the first unit using IP firewalling.) Nov 23 01:58:15 localhost systemd[4180]: Started podman-18406.scope. Nov 23 01:58:15 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 01:58:15 localhost systemd[4180]: Started podman-pause-94939cb3.scope. Nov 23 01:58:17 localhost systemd[1]: session-6.scope: Deactivated successfully. Nov 23 01:58:17 localhost systemd[1]: session-6.scope: Consumed 49.768s CPU time. Nov 23 01:58:17 localhost systemd-logind[760]: Session 6 logged out. Waiting for processes to exit. Nov 23 01:58:17 localhost systemd-logind[760]: Removed session 6. Nov 23 01:58:33 localhost sshd[18426]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:58:33 localhost sshd[18429]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:58:33 localhost sshd[18428]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:58:33 localhost sshd[18430]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:58:33 localhost sshd[18427]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:58:38 localhost sshd[18436]: main: sshd: ssh-rsa algorithm is disabled Nov 23 01:58:38 localhost systemd-logind[760]: New session 7 of user zuul. Nov 23 01:58:38 localhost systemd[1]: Started Session 7 of User zuul. Nov 23 01:58:38 localhost python3[18453]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFp5ffFMuM9GICal8e/QjT+yRcsIbGaMWRt/HA7rb1TB5YKChpgkSmzIFogHU4gX8uce12LB+CRf7ndL6kzcKrg= zuul@np0005532578.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:58:39 localhost python3[18469]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFp5ffFMuM9GICal8e/QjT+yRcsIbGaMWRt/HA7rb1TB5YKChpgkSmzIFogHU4gX8uce12LB+CRf7ndL6kzcKrg= zuul@np0005532578.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 01:58:41 localhost systemd[1]: session-7.scope: Deactivated successfully. Nov 23 01:58:41 localhost systemd-logind[760]: Session 7 logged out. Waiting for processes to exit. Nov 23 01:58:41 localhost systemd-logind[760]: Removed session 7. 
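Note: the "system refresh" and podman-pause scope entries above come from the first rootless podman invocation under the zuul user's systemd manager (hence the warning that the unit cannot configure an IP firewall without root). Whether podman is running rootless can be confirmed from its own info output; a small sketch that parses the JSON form (assuming the host.security.rootless key, which podman 4.x exposes):

    import json
    import subprocess

    def podman_is_rootless() -> bool:
        """Return True when the podman in PATH reports a rootless configuration."""
        out = subprocess.run(
            ["podman", "info", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        info = json.loads(out)
        return bool(info["host"]["security"]["rootless"])

    if __name__ == "__main__":
        print("rootless:", podman_is_rootless())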
Nov 23 01:59:21 localhost sshd[18471]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:00:08 localhost sshd[18474]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:00:08 localhost systemd-logind[760]: New session 8 of user zuul. Nov 23 02:00:08 localhost systemd[1]: Started Session 8 of User zuul. Nov 23 02:00:08 localhost python3[18493]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 02:00:09 localhost python3[18509]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532584.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Nov 23 02:00:11 localhost python3[18559]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:00:11 localhost python3[18602]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763881210.744358-131-47916935034430/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=4c10693c6dd746ababb601eafce6af01_id_rsa follow=False checksum=4877a9422cfa308f85d093f4f170aa8e2f5129bc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:00:12 localhost python3[18664]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:00:12 localhost python3[18707]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763881212.3488133-221-10190742317573/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=4c10693c6dd746ababb601eafce6af01_id_rsa.pub follow=False checksum=9e6358c9dcdfe108c4f779a3b698bd3c9d97da46 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:00:14 localhost python3[18737]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:00:15 localhost python3[18783]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:00:16 localhost python3[18799]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpvu9_tvki recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:00:17 localhost python3[18859]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:00:17 localhost python3[18875]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmp1wz74wh2 recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:00:19 localhost python3[18935]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:00:19 localhost python3[18951]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmp1_qk5yy6 recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:00:19 localhost systemd[1]: session-8.scope: Deactivated successfully. Nov 23 02:00:19 localhost systemd[1]: session-8.scope: Consumed 3.484s CPU time. Nov 23 02:00:19 localhost systemd-logind[760]: Session 8 logged out. Waiting for processes to exit. Nov 23 02:00:19 localhost systemd-logind[760]: Removed session 8. Nov 23 02:02:32 localhost sshd[18982]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:02:32 localhost systemd-logind[760]: New session 9 of user zuul. Nov 23 02:02:32 localhost systemd[1]: Started Session 9 of User zuul. Nov 23 02:02:32 localhost python3[19028]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:05:50 localhost sshd[19031]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:05:50 localhost sshd[19032]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:07:32 localhost systemd[1]: session-9.scope: Deactivated successfully. Nov 23 02:07:32 localhost systemd-logind[760]: Session 9 logged out. Waiting for processes to exit. Nov 23 02:07:32 localhost systemd-logind[760]: Removed session 9. 
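The copy tasks above log file modes in decimal: mode=384 is octal 0600 (the root SSH private key) and mode=420 is octal 0644 (the public key), while /etc/nodepool is created world-writable with an explicit 0777. A small sketch of the same permissions applied manually, assuming the files are already in place:

  # 384 decimal == 0600 octal, 420 decimal == 0644 octal
  chmod 0600 /root/.ssh/id_rsa
  chmod 0644 /root/.ssh/id_rsa.pub
  install -d -m 0777 /etc/nodepool
  printf '%o %o\n' 384 420   # prints "600 644", confirming the conversion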
Nov 23 02:11:34 localhost sshd[19038]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:11:50 localhost sshd[19040]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:13:38 localhost sshd[19043]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:13:38 localhost systemd-logind[760]: New session 10 of user zuul. Nov 23 02:13:38 localhost systemd[1]: Started Session 10 of User zuul. Nov 23 02:13:38 localhost python3[19060]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-1b67-9ac8-00000000000c-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:13:40 localhost python3[19080]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ef9-e89a-1b67-9ac8-00000000000d-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:13:45 localhost python3[19100]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False Nov 23 02:13:48 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:13:48 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:14:42 localhost python3[19258]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False Nov 23 02:14:45 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:14:45 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:14:53 localhost python3[19401]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False Nov 23 02:14:56 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:15:01 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:15:02 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:15:25 localhost python3[19735]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False Nov 23 02:15:28 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:15:28 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:15:33 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:15:33 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
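Session 10 enables the RHEL 9 EUS and Fast Datapath repositories one at a time through community.general.rhsm_repository; the recurring "Installed product 479 not present in response from server" warnings come from rhsm-service and are independent of the repo changes. A rough subscription-manager equivalent, using only the repository IDs that appear in the log:

  subscription-manager repos \
    --enable rhel-9-for-x86_64-baseos-eus-rpms \
    --enable rhel-9-for-x86_64-appstream-eus-rpms \
    --enable rhel-9-for-x86_64-highavailability-eus-rpms \
    --enable fast-datapath-for-rhel-9-x86_64-rpms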
Nov 23 02:15:56 localhost python3[20071]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False Nov 23 02:15:59 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:16:04 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:16:04 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:16:30 localhost python3[20349]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-000000000013-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:16:35 localhost python3[20368]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:16:56 localhost kernel: SELinux: Converting 490 SID table entries... Nov 23 02:16:56 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 02:16:56 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 02:16:56 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 02:16:56 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 02:16:56 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 02:16:56 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 02:16:56 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 02:16:56 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=4 res=1 Nov 23 02:16:56 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC. Nov 23 02:16:59 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 02:16:59 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 02:16:59 localhost systemd[1]: Reloading. Nov 23 02:16:59 localhost systemd-sysv-generator[21035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:16:59 localhost systemd-rc-local-generator[21029]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:16:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:16:59 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 02:17:00 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. 
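With openstack-17.1-for-rhel-9-x86_64-rpms enabled as well, the playbook lists the enabled repos and installs the networking tooling with update_cache=True, which is what pulls in the new SELinux policy and the SysV 'network' unit seen here. A shell approximation of that sequence (the explicit makecache stands in for update_cache=True):

  subscription-manager repos --enable openstack-17.1-for-rhel-9-x86_64-rpms
  yum repolist --enabled
  dnf -y makecache
  dnf -y install openvswitch os-net-config ansible-core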
Nov 23 02:17:00 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 02:17:00 localhost systemd[1]: run-r2d91b6650cb846418054780c8eb92b18.service: Deactivated successfully. Nov 23 02:17:01 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:17:01 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 02:17:27 localhost python3[21578]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-000000000015-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:17:55 localhost python3[21598]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:17:55 localhost python3[21646]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:17:56 localhost python3[21689]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763882275.5440493-291-81231145668835/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=3358dfc6c6ce646155135d0cad900026cb34ba08 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:17:57 localhost python3[21719]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Nov 23 02:17:57 
localhost systemd-journald[617]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation. Nov 23 02:17:57 localhost systemd-journald[617]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 23 02:17:57 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 02:17:57 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 02:17:57 localhost python3[21740]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Nov 23 02:17:58 localhost python3[21760]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Nov 23 02:17:58 localhost python3[21780]: ansible-community.general.nmcli Invoked 
with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Nov 23 02:17:59 localhost python3[21800]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Nov 23 02:18:01 localhost python3[21820]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 02:18:01 localhost systemd[1]: Starting LSB: Bring up/down networking... Nov 23 02:18:01 localhost network[21823]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 02:18:01 localhost network[21834]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 02:18:01 localhost network[21823]: WARN : [network] 'network-scripts' will be removed from distribution in near future. 
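The nmcli module calls above only delete leftover ci-private-network* connections (state=absent); afterwards the deprecated network-scripts 'network' service is started, hence the WARN lines that follow. Roughly the same cleanup by hand (nmcli exits non-zero for unknown connection names, so that case is ignored):

  for conn in ci-private-network ci-private-network-20 ci-private-network-21 \
              ci-private-network-22 ci-private-network-23; do
    nmcli connection delete "$conn" 2>/dev/null || true
  done
  systemctl start network   # legacy network-scripts service, deprecated per the warnings above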
Nov 23 02:18:01 localhost network[21835]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:01 localhost network[21823]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management. Nov 23 02:18:01 localhost network[21836]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 02:18:01 localhost NetworkManager[5966]: [1763882281.3323] audit: op="connections-reload" pid=21864 uid=0 result="success" Nov 23 02:18:01 localhost network[21823]: Bringing up loopback interface: [ OK ] Nov 23 02:18:01 localhost NetworkManager[5966]: [1763882281.5317] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=21952 uid=0 result="success" Nov 23 02:18:01 localhost network[21823]: Bringing up interface eth0: [ OK ] Nov 23 02:18:01 localhost systemd[1]: Started LSB: Bring up/down networking. Nov 23 02:18:01 localhost python3[21994]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 02:18:02 localhost systemd[1]: Starting Open vSwitch Database Unit... Nov 23 02:18:02 localhost chown[21998]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory Nov 23 02:18:02 localhost ovs-ctl[22003]: /etc/openvswitch/conf.db does not exist ... (warning). Nov 23 02:18:02 localhost ovs-ctl[22003]: Creating empty database /etc/openvswitch/conf.db [ OK ] Nov 23 02:18:02 localhost ovs-ctl[22003]: Starting ovsdb-server [ OK ] Nov 23 02:18:02 localhost ovs-vsctl[22053]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1 Nov 23 02:18:02 localhost ovs-vsctl[22074]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"ade391ff-62a6-48e9-b6e8-1a8b190070d2\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\"" Nov 23 02:18:02 localhost ovs-ctl[22003]: Configuring Open vSwitch system IDs [ OK ] Nov 23 02:18:02 localhost ovs-ctl[22003]: Enabling remote OVSDB managers [ OK ] Nov 23 02:18:02 localhost ovs-vsctl[22080]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005532584.novalocal Nov 23 02:18:02 localhost systemd[1]: Started Open vSwitch Database Unit. Nov 23 02:18:02 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports... Nov 23 02:18:02 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports. Nov 23 02:18:02 localhost systemd[1]: Starting Open vSwitch Forwarding Unit... Nov 23 02:18:02 localhost kernel: openvswitch: Open vSwitch switching datapath Nov 23 02:18:02 localhost ovs-ctl[22124]: Inserting openvswitch module [ OK ] Nov 23 02:18:02 localhost ovs-ctl[22093]: Starting ovs-vswitchd [ OK ] Nov 23 02:18:02 localhost ovs-vsctl[22143]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005532584.novalocal Nov 23 02:18:02 localhost ovs-ctl[22093]: Enabling remote OVSDB managers [ OK ] Nov 23 02:18:02 localhost systemd[1]: Started Open vSwitch Forwarding Unit. Nov 23 02:18:02 localhost systemd[1]: Starting Open vSwitch... Nov 23 02:18:02 localhost systemd[1]: Finished Open vSwitch. 
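openvswitch.service pulls in the database and forwarding units: ovs-ctl creates an empty /etc/openvswitch/conf.db, starts ovsdb-server and ovs-vswitchd, loads the kernel datapath module, and records the system IDs with ovs-vsctl. The state the log reports can be confirmed with standard OVS/systemd tooling (these checks are not part of the playbook):

  systemctl status openvswitch ovsdb-server ovs-vswitchd --no-pager
  ovs-vsctl show                             # bridge/port layout, still empty right after first start
  ovs-vsctl get Open_vSwitch . ovs_version   # should report 3.3.6-141.el9fdp per the log
  lsmod | grep -w openvswitch                # kernel datapath module loaded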
Nov 23 02:18:05 localhost python3[22161]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-00000000001a-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:18:06 localhost NetworkManager[5966]: [1763882286.2652] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22319 uid=0 result="success" Nov 23 02:18:06 localhost ifup[22320]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:06 localhost ifup[22321]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:06 localhost ifup[22322]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Nov 23 02:18:06 localhost NetworkManager[5966]: [1763882286.2916] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22328 uid=0 result="success" Nov 23 02:18:06 localhost ovs-vsctl[22330]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:ed:8d:9e -- set bridge br-ex fail_mode=standalone -- del-controller br-ex Nov 23 02:18:06 localhost kernel: device ovs-system entered promiscuous mode Nov 23 02:18:06 localhost NetworkManager[5966]: [1763882286.3181] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4) Nov 23 02:18:06 localhost systemd-udevd[22331]: Network interface NamePolicy= disabled on kernel command line. Nov 23 02:18:06 localhost kernel: Timeout policy base is empty Nov 23 02:18:06 localhost kernel: Failed to associated timeout policy `ovs_test_tp' Nov 23 02:18:06 localhost kernel: device br-ex entered promiscuous mode Nov 23 02:18:06 localhost NetworkManager[5966]: [1763882286.3621] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5) Nov 23 02:18:06 localhost NetworkManager[5966]: [1763882286.3864] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22357 uid=0 result="success" Nov 23 02:18:06 localhost NetworkManager[5966]: [1763882286.4083] device (br-ex): carrier: link connected Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.3341] ndisc[0x55f916cae190,"eth1"]: solicit: failure sending router solicitation: Network is unreachable (101) Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.4602] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22386 uid=0 result="success" Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.5045] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22401 uid=0 result="success" Nov 23 02:18:09 localhost NET[22426]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.5894] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed') Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.6019] dhcp4 (eth1): canceled DHCP transaction Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.6019] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.6019] dhcp4 (eth1): state 
changed no lease Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.6052] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22435 uid=0 result="success" Nov 23 02:18:09 localhost ifup[22436]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:09 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Nov 23 02:18:09 localhost ifup[22437]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:09 localhost ifup[22439]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Nov 23 02:18:09 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.6389] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22453 uid=0 result="success" Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.7239] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22463 uid=0 result="success" Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.7309] device (eth1): carrier: link connected Nov 23 02:18:09 localhost NetworkManager[5966]: [1763882289.7527] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22472 uid=0 result="success" Nov 23 02:18:09 localhost ipv6_wait_tentative[22484]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Nov 23 02:18:10 localhost ipv6_wait_tentative[22489]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Nov 23 02:18:11 localhost NetworkManager[5966]: [1763882291.8220] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22498 uid=0 result="success" Nov 23 02:18:11 localhost ovs-vsctl[22513]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1 Nov 23 02:18:11 localhost kernel: device eth1 entered promiscuous mode Nov 23 02:18:11 localhost NetworkManager[5966]: [1763882291.8956] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22521 uid=0 result="success" Nov 23 02:18:11 localhost ifup[22522]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:11 localhost ifup[22523]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:11 localhost ifup[22524]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Nov 23 02:18:11 localhost NetworkManager[5966]: [1763882291.9267] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22530 uid=0 result="success" Nov 23 02:18:11 localhost NetworkManager[5966]: [1763882291.9683] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22540 uid=0 result="success" Nov 23 02:18:11 localhost ifup[22541]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:11 localhost ifup[22542]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:11 localhost ifup[22543]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
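os-net-config renders /etc/os-net-config/tripleo_config.yaml into the ifcfg files and OVS calls seen above: br-ex is created with a pinned MAC and standalone fail mode, and eth1 is handed over to the bridge, which is why NetworkManager marks it unmanaged and cancels its DHCP transaction. The same ovs-vsctl calls, copied from the log, can be replayed directly when debugging; the MAC is the one logged for this node, and vlan22 stands in for the repeated per-VLAN pattern:

  ovs-vsctl -t 10 -- --may-exist add-br br-ex \
    -- set bridge br-ex other-config:mac-table-size=50000 \
    -- set bridge br-ex other-config:hwaddr=fa:16:3e:ed:8d:9e \
    -- set bridge br-ex fail_mode=standalone \
    -- del-controller br-ex
  ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
  # tagged internal ports are added the same way for vlan20/21/22/23/44, e.g.:
  ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal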
Nov 23 02:18:11 localhost NetworkManager[5966]: [1763882291.9988] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22549 uid=0 result="success" Nov 23 02:18:12 localhost ovs-vsctl[22552]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Nov 23 02:18:12 localhost kernel: device vlan22 entered promiscuous mode Nov 23 02:18:12 localhost NetworkManager[5966]: [1763882292.0403] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/6) Nov 23 02:18:12 localhost systemd-udevd[22554]: Network interface NamePolicy= disabled on kernel command line. Nov 23 02:18:12 localhost NetworkManager[5966]: [1763882292.0665] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22563 uid=0 result="success" Nov 23 02:18:12 localhost NetworkManager[5966]: [1763882292.0877] device (vlan22): carrier: link connected Nov 23 02:18:15 localhost NetworkManager[5966]: [1763882295.1394] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22592 uid=0 result="success" Nov 23 02:18:15 localhost NetworkManager[5966]: [1763882295.1820] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22607 uid=0 result="success" Nov 23 02:18:15 localhost NetworkManager[5966]: [1763882295.2332] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22628 uid=0 result="success" Nov 23 02:18:15 localhost ifup[22629]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:15 localhost ifup[22630]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:15 localhost ifup[22631]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Nov 23 02:18:15 localhost NetworkManager[5966]: [1763882295.2593] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22637 uid=0 result="success" Nov 23 02:18:15 localhost ovs-vsctl[22640]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Nov 23 02:18:15 localhost systemd-udevd[22642]: Network interface NamePolicy= disabled on kernel command line. Nov 23 02:18:15 localhost kernel: device vlan23 entered promiscuous mode Nov 23 02:18:15 localhost NetworkManager[5966]: [1763882295.2957] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/7) Nov 23 02:18:15 localhost NetworkManager[5966]: [1763882295.3195] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22652 uid=0 result="success" Nov 23 02:18:15 localhost NetworkManager[5966]: [1763882295.3379] device (vlan23): carrier: link connected Nov 23 02:18:18 localhost NetworkManager[5966]: [1763882298.3837] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22682 uid=0 result="success" Nov 23 02:18:18 localhost NetworkManager[5966]: [1763882298.4313] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22697 uid=0 result="success" Nov 23 02:18:18 localhost NetworkManager[5966]: [1763882298.4903] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22718 uid=0 result="success" Nov 23 02:18:18 localhost ifup[22719]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. 
Nov 23 02:18:18 localhost ifup[22720]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:18 localhost ifup[22721]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Nov 23 02:18:18 localhost NetworkManager[5966]: [1763882298.5221] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22727 uid=0 result="success" Nov 23 02:18:18 localhost ovs-vsctl[22730]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Nov 23 02:18:18 localhost kernel: device vlan21 entered promiscuous mode Nov 23 02:18:18 localhost NetworkManager[5966]: [1763882298.5602] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/8) Nov 23 02:18:18 localhost systemd-udevd[22732]: Network interface NamePolicy= disabled on kernel command line. Nov 23 02:18:18 localhost NetworkManager[5966]: [1763882298.5845] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22742 uid=0 result="success" Nov 23 02:18:18 localhost NetworkManager[5966]: [1763882298.6041] device (vlan21): carrier: link connected Nov 23 02:18:19 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Nov 23 02:18:21 localhost NetworkManager[5966]: [1763882301.6564] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22772 uid=0 result="success" Nov 23 02:18:21 localhost NetworkManager[5966]: [1763882301.7026] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22787 uid=0 result="success" Nov 23 02:18:21 localhost NetworkManager[5966]: [1763882301.7613] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22808 uid=0 result="success" Nov 23 02:18:21 localhost ifup[22809]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:21 localhost ifup[22810]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:21 localhost ifup[22811]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Nov 23 02:18:21 localhost NetworkManager[5966]: [1763882301.7916] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22817 uid=0 result="success" Nov 23 02:18:21 localhost ovs-vsctl[22820]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Nov 23 02:18:21 localhost systemd-udevd[22822]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 02:18:21 localhost kernel: device vlan44 entered promiscuous mode Nov 23 02:18:21 localhost NetworkManager[5966]: [1763882301.8330] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/9) Nov 23 02:18:21 localhost NetworkManager[5966]: [1763882301.8584] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22832 uid=0 result="success" Nov 23 02:18:21 localhost NetworkManager[5966]: [1763882301.8777] device (vlan44): carrier: link connected Nov 23 02:18:24 localhost NetworkManager[5966]: [1763882304.9498] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22862 uid=0 result="success" Nov 23 02:18:24 localhost NetworkManager[5966]: [1763882304.9954] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22877 uid=0 result="success" Nov 23 02:18:25 localhost NetworkManager[5966]: [1763882305.0534] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22898 uid=0 result="success" Nov 23 02:18:25 localhost ifup[22899]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:25 localhost ifup[22900]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:25 localhost ifup[22901]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Nov 23 02:18:25 localhost NetworkManager[5966]: [1763882305.0849] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22907 uid=0 result="success" Nov 23 02:18:25 localhost ovs-vsctl[22910]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Nov 23 02:18:25 localhost kernel: device vlan20 entered promiscuous mode Nov 23 02:18:25 localhost NetworkManager[5966]: [1763882305.1213] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/10) Nov 23 02:18:25 localhost systemd-udevd[22912]: Network interface NamePolicy= disabled on kernel command line. Nov 23 02:18:25 localhost NetworkManager[5966]: [1763882305.1463] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22922 uid=0 result="success" Nov 23 02:18:25 localhost NetworkManager[5966]: [1763882305.1668] device (vlan20): carrier: link connected Nov 23 02:18:28 localhost NetworkManager[5966]: [1763882308.2141] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22952 uid=0 result="success" Nov 23 02:18:28 localhost NetworkManager[5966]: [1763882308.2571] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22967 uid=0 result="success" Nov 23 02:18:28 localhost NetworkManager[5966]: [1763882308.3131] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22988 uid=0 result="success" Nov 23 02:18:28 localhost ifup[22989]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:28 localhost ifup[22990]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:28 localhost ifup[22991]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Nov 23 02:18:28 localhost NetworkManager[5966]: [1763882308.3430] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22997 uid=0 result="success" Nov 23 02:18:28 localhost ovs-vsctl[23000]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Nov 23 02:18:28 localhost NetworkManager[5966]: [1763882308.4369] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23007 uid=0 result="success" Nov 23 02:18:29 localhost NetworkManager[5966]: [1763882309.4890] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23034 uid=0 result="success" Nov 23 02:18:29 localhost NetworkManager[5966]: [1763882309.5345] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23049 uid=0 result="success" Nov 23 02:18:29 localhost NetworkManager[5966]: [1763882309.5932] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23070 uid=0 result="success" Nov 23 02:18:29 localhost ifup[23071]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:29 localhost ifup[23072]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:29 localhost ifup[23073]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Nov 23 02:18:29 localhost NetworkManager[5966]: [1763882309.6252] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23079 uid=0 result="success" Nov 23 02:18:29 localhost ovs-vsctl[23082]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Nov 23 02:18:29 localhost NetworkManager[5966]: [1763882309.6827] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23089 uid=0 result="success" Nov 23 02:18:30 localhost NetworkManager[5966]: [1763882310.7395] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23117 uid=0 result="success" Nov 23 02:18:30 localhost NetworkManager[5966]: [1763882310.7825] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23132 uid=0 result="success" Nov 23 02:18:30 localhost NetworkManager[5966]: [1763882310.8382] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23153 uid=0 result="success" Nov 23 02:18:30 localhost ifup[23154]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:30 localhost ifup[23155]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:30 localhost ifup[23156]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Nov 23 02:18:30 localhost NetworkManager[5966]: [1763882310.8702] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23162 uid=0 result="success" Nov 23 02:18:30 localhost ovs-vsctl[23165]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Nov 23 02:18:30 localhost NetworkManager[5966]: [1763882310.9202] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23172 uid=0 result="success" Nov 23 02:18:31 localhost NetworkManager[5966]: [1763882311.9733] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23200 uid=0 result="success" Nov 23 02:18:32 localhost NetworkManager[5966]: [1763882312.0222] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23215 uid=0 result="success" Nov 23 02:18:32 localhost NetworkManager[5966]: [1763882312.0807] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23236 uid=0 result="success" Nov 23 02:18:32 localhost ifup[23237]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:32 localhost ifup[23238]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:32 localhost ifup[23239]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Nov 23 02:18:32 localhost NetworkManager[5966]: [1763882312.1132] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23245 uid=0 result="success" Nov 23 02:18:32 localhost ovs-vsctl[23248]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Nov 23 02:18:32 localhost NetworkManager[5966]: [1763882312.1697] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23255 uid=0 result="success" Nov 23 02:18:33 localhost NetworkManager[5966]: [1763882313.2298] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23283 uid=0 result="success" Nov 23 02:18:33 localhost NetworkManager[5966]: [1763882313.2752] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23298 uid=0 result="success" Nov 23 02:18:33 localhost NetworkManager[5966]: [1763882313.3208] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23319 uid=0 result="success" Nov 23 02:18:33 localhost ifup[23320]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Nov 23 02:18:33 localhost ifup[23321]: 'network-scripts' will be removed from distribution in near future. Nov 23 02:18:33 localhost ifup[23322]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Nov 23 02:18:33 localhost NetworkManager[5966]: [1763882313.3429] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23328 uid=0 result="success" Nov 23 02:18:33 localhost ovs-vsctl[23331]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Nov 23 02:18:33 localhost NetworkManager[5966]: [1763882313.4152] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23338 uid=0 result="success" Nov 23 02:18:34 localhost NetworkManager[5966]: [1763882314.4720] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23366 uid=0 result="success" Nov 23 02:18:34 localhost NetworkManager[5966]: [1763882314.5181] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23381 uid=0 result="success" Nov 23 02:19:01 localhost systemd[1]: Starting dnf makecache... Nov 23 02:19:02 localhost dnf[23399]: Updating Subscription Management repositories. Nov 23 02:19:03 localhost dnf[23399]: Failed determining last makecache time. Nov 23 02:19:03 localhost dnf[23399]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 31 kB/s | 4.5 kB 00:00 Nov 23 02:19:04 localhost dnf[23399]: Fast Datapath for RHEL 9 x86_64 (RPMs) 47 kB/s | 4.0 kB 00:00 Nov 23 02:19:04 localhost dnf[23399]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_ 46 kB/s | 4.0 kB 00:00 Nov 23 02:19:04 localhost dnf[23399]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 22 kB/s | 4.1 kB 00:00 Nov 23 02:19:04 localhost dnf[23399]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 44 kB/s | 4.1 kB 00:00 Nov 23 02:19:04 localhost dnf[23399]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 39 kB/s | 4.5 kB 00:00 Nov 23 02:19:04 localhost dnf[23399]: Red Hat Enterprise Linux 9 for x86_64 - High Av 32 kB/s | 4.0 kB 00:00 Nov 23 02:19:05 localhost dnf[23399]: Metadata cache created. Nov 23 02:19:05 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Nov 23 02:19:05 localhost systemd[1]: Finished dnf makecache. Nov 23 02:19:05 localhost systemd[1]: dnf-makecache.service: Consumed 2.712s CPU time. 
Nov 23 02:19:27 localhost python3[23422]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-00000000001b-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:19:33 localhost python3[23441]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 02:19:33 localhost python3[23457]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 02:19:35 localhost python3[23471]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 23 02:19:35 localhost python3[23487]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey manage_dir=True state=present exclusive=False 
validate_certs=True follow=False path=None key_options=None comment=None Nov 23 02:19:36 localhost python3[23501]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname Nov 23 02:19:37 localhost python3[23516]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005532584.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-000000000022-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:19:38 localhost python3[23536]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-000000000023-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:19:38 localhost systemd[1]: Starting Hostname Service... Nov 23 02:19:38 localhost systemd[1]: Started Hostname Service. Nov 23 02:19:38 localhost systemd-hostnamed[23540]: Hostname set to <np0005532584.localdomain> (static) Nov 23 02:19:38 localhost NetworkManager[5966]: [1763882378.5818] hostname: static hostname changed from "np0005532584.novalocal" to "np0005532584.localdomain" Nov 23 02:19:38 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Nov 23 02:19:38 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Nov 23 02:19:40 localhost systemd[1]: session-10.scope: Deactivated successfully. Nov 23 02:19:40 localhost systemd[1]: session-10.scope: Consumed 1min 44.312s CPU time. Nov 23 02:19:40 localhost systemd-logind[760]: Session 10 logged out. Waiting for processes to exit. Nov 23 02:19:40 localhost systemd-logind[760]: Removed session 10. Nov 23 02:19:42 localhost sshd[23551]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:19:42 localhost systemd-logind[760]: New session 11 of user zuul. Nov 23 02:19:42 localhost systemd[1]: Started Session 11 of User zuul. Nov 23 02:19:42 localhost python3[23568]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Nov 23 02:19:45 localhost systemd[1]: session-11.scope: Deactivated successfully. Nov 23 02:19:45 localhost systemd-logind[760]: Session 11 logged out. Waiting for processes to exit. Nov 23 02:19:45 localhost systemd-logind[760]: Removed session 11. Nov 23 02:19:48 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Nov 23 02:20:08 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 23 02:20:25 localhost sshd[23573]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:20:25 localhost systemd-logind[760]: New session 12 of user zuul. Nov 23 02:20:25 localhost systemd[1]: Started Session 12 of User zuul.
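The two command tasks above split the FQDN on dots in bash, cache the short name in /home/zuul/ansible_hostname, and re-set the hostname with a .localdomain suffix, which is why systemd-hostnamed and NetworkManager report the change from np0005532584.novalocal to np0005532584.localdomain. Reassembled from the logged _raw_params (#012 is journald's escape for a newline):

  hostname="np0005532584.novalocal"
  hostname_str_array=(${hostname//./ })            # split on '.' -> (np0005532584 novalocal)
  echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname
  hostname=$(cat /home/zuul/ansible_hostname)
  hostnamectl hostname "$hostname.localdomain"     # -> np0005532584.localdomain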
Nov 23 02:20:26 localhost python3[23592]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:20:29 localhost systemd[1]: Reloading. Nov 23 02:20:29 localhost systemd-sysv-generator[23634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:20:29 localhost systemd-rc-local-generator[23631]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:20:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:20:29 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs. Nov 23 02:20:29 localhost systemd[1]: Reloading. Nov 23 02:20:30 localhost systemd-rc-local-generator[23675]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:20:30 localhost systemd-sysv-generator[23679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:20:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:20:30 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Nov 23 02:20:30 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. Nov 23 02:20:30 localhost systemd[1]: Reloading. Nov 23 02:20:30 localhost systemd-sysv-generator[23719]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:20:30 localhost systemd-rc-local-generator[23716]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:20:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:20:30 localhost systemd[1]: Listening on LVM2 poll daemon socket. Nov 23 02:20:30 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 02:20:30 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 02:20:30 localhost systemd[1]: Reloading. Nov 23 02:20:30 localhost systemd-rc-local-generator[23776]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:20:30 localhost systemd-sysv-generator[23780]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:20:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:20:30 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 02:20:30 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 02:20:31 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 23 02:20:31 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 02:20:31 localhost systemd[1]: run-r64da818382bd4a8b867e546d82a226f4.service: Deactivated successfully. Nov 23 02:20:31 localhost systemd[1]: run-rfb05b74bff734ce7ac195635e2b1c991.service: Deactivated successfully. Nov 23 02:21:31 localhost systemd[1]: session-12.scope: Deactivated successfully. Nov 23 02:21:31 localhost systemd[1]: session-12.scope: Consumed 4.659s CPU time. Nov 23 02:21:31 localhost systemd-logind[760]: Session 12 logged out. Waiting for processes to exit. Nov 23 02:21:31 localhost systemd-logind[760]: Removed session 12. Nov 23 02:22:23 localhost sshd[24365]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:29:44 localhost sshd[24371]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:33:54 localhost sshd[24375]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:37:15 localhost sshd[24380]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:37:15 localhost systemd-logind[760]: New session 13 of user zuul. Nov 23 02:37:15 localhost systemd[1]: Started Session 13 of User zuul. Nov 23 02:37:15 localhost python3[24428]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 02:37:17 localhost python3[24515]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:37:20 localhost python3[24532]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:37:21 localhost python3[24548]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:37:21 localhost kernel: loop: module loaded Nov 23 02:37:21 localhost kernel: loop3: detected capacity change from 0 to 14680064 Nov 23 02:37:21 localhost python3[24573]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:37:21 localhost lvm[24576]: PV /dev/loop3 not used. 
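The two command tasks above carry the whole backing-device preparation for the first Ceph OSD; with the #012 escapes expanded they decode to the shell below. Note that dd with count=0 and seek=7G only creates a 7 GiB sparse file, which matches the loop3 capacity change reported by the kernel:

    # Create a sparse 7 GiB backing file, attach it to /dev/loop3, and list block devices.
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk

    # Turn the loop device into an LVM PV and give Ceph a volume group with one full-size LV.
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs

The same sequence repeats further down for /var/lib/ceph-osd-1.img on /dev/loop4 with ceph_vg1/ceph_lv1.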
Nov 23 02:37:21 localhost lvm[24578]: PV /dev/loop3 online, VG ceph_vg0 is complete. Nov 23 02:37:21 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0. Nov 23 02:37:21 localhost lvm[24585]: 1 logical volume(s) in volume group "ceph_vg0" now active Nov 23 02:37:21 localhost lvm[24588]: PV /dev/loop3 online, VG ceph_vg0 is complete. Nov 23 02:37:21 localhost lvm[24588]: VG ceph_vg0 finished Nov 23 02:37:21 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully. Nov 23 02:37:22 localhost python3[24637]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:37:23 localhost python3[24680]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763883442.2737746-55244-54060352373083/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:37:23 localhost python3[24710]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:37:23 localhost systemd[1]: Reloading. Nov 23 02:37:24 localhost systemd-sysv-generator[24746]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:37:24 localhost systemd-rc-local-generator[24742]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:37:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:37:24 localhost systemd[1]: Starting Ceph OSD losetup... Nov 23 02:37:24 localhost bash[24752]: /dev/loop3: [64516]:8399529 (/var/lib/ceph-osd-0.img) Nov 23 02:37:24 localhost systemd[1]: Finished Ceph OSD losetup. Nov 23 02:37:24 localhost lvm[24753]: PV /dev/loop3 online, VG ceph_vg0 is complete. 
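The unit file written just above is not captured in the log (the copy task records content=NOT_LOGGING_PARAMETER), but its purpose is clear from the "Starting Ceph OSD losetup..." entries and the losetup output that follows. A minimal sketch of what the templated ceph-osd-losetup-0.service could look like is given below; every directive is an assumption consistent with that behaviour, not the real rendered template:

    # Hypothetical reconstruction only; the actual ceph-osd-losetup.service.j2 content is not logged.
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    # Print the existing mapping, or (re)attach the backing file if the loop device is gone.
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service

Whatever the real template contains, the net effect recorded in the log is the same: the service starts, prints the /dev/loop3 -> /var/lib/ceph-osd-0.img mapping, and finishes.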
Nov 23 02:37:24 localhost lvm[24753]: VG ceph_vg0 finished Nov 23 02:37:24 localhost python3[24769]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:37:27 localhost python3[24786]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:37:28 localhost python3[24802]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:37:28 localhost kernel: loop4: detected capacity change from 0 to 14680064 Nov 23 02:37:28 localhost python3[24824]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:37:28 localhost lvm[24827]: PV /dev/loop4 not used. Nov 23 02:37:29 localhost lvm[24837]: PV /dev/loop4 online, VG ceph_vg1 is complete. Nov 23 02:37:29 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1. Nov 23 02:37:29 localhost lvm[24839]: 1 logical volume(s) in volume group "ceph_vg1" now active Nov 23 02:37:29 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully. Nov 23 02:37:29 localhost python3[24887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:37:29 localhost python3[24930]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763883449.306775-55448-168614390114716/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:37:30 localhost python3[24960]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:37:30 localhost systemd[1]: Reloading. Nov 23 02:37:30 localhost systemd-rc-local-generator[24984]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:37:30 localhost systemd-sysv-generator[24988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:37:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:37:30 localhost systemd[1]: Starting Ceph OSD losetup... Nov 23 02:37:30 localhost bash[25001]: /dev/loop4: [64516]:9173734 (/var/lib/ceph-osd-1.img) Nov 23 02:37:30 localhost systemd[1]: Finished Ceph OSD losetup. Nov 23 02:37:30 localhost lvm[25002]: PV /dev/loop4 online, VG ceph_vg1 is complete. Nov 23 02:37:30 localhost lvm[25002]: VG ceph_vg1 finished Nov 23 02:37:40 localhost python3[25047]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Nov 23 02:37:41 localhost python3[25067]: ansible-hostname Invoked with name=np0005532584.localdomain use=None Nov 23 02:37:41 localhost systemd[1]: Starting Hostname Service... Nov 23 02:37:42 localhost systemd[1]: Started Hostname Service. Nov 23 02:37:44 localhost python3[25090]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None Nov 23 02:37:44 localhost python3[25138]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.oqvifw00tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:37:45 localhost python3[25168]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.oqvifw00tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! 
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:37:45 localhost python3[25184]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.oqvifw00tmphosts insertbefore=BOF block=192.168.122.106 np0005532584.localdomain np0005532584#012192.168.122.106 np0005532584.ctlplane.localdomain np0005532584.ctlplane#012192.168.122.107 np0005532585.localdomain np0005532585#012192.168.122.107 np0005532585.ctlplane.localdomain np0005532585.ctlplane#012192.168.122.108 np0005532586.localdomain np0005532586#012192.168.122.108 np0005532586.ctlplane.localdomain np0005532586.ctlplane#012192.168.122.103 np0005532581.localdomain np0005532581#012192.168.122.103 np0005532581.ctlplane.localdomain np0005532581.ctlplane#012192.168.122.104 np0005532582.localdomain np0005532582#012192.168.122.104 np0005532582.ctlplane.localdomain np0005532582.ctlplane#012192.168.122.105 np0005532583.localdomain np0005532583#012192.168.122.105 np0005532583.ctlplane.localdomain np0005532583.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:37:46 localhost python3[25200]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.oqvifw00tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:37:46 localhost python3[25217]: ansible-file Invoked with path=/tmp/ansible.oqvifw00tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:37:48 localhost python3[25233]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:37:50 localhost python3[25251]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:37:54 localhost python3[25300]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:37:54 localhost python3[25345]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763883474.0371408-56295-131074630751902/source dest=/etc/chrony.conf owner=root 
group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:37:56 localhost python3[25375]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:37:56 localhost python3[25393]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 02:37:56 localhost systemd[1]: Stopping NTP client/server... Nov 23 02:37:56 localhost chronyd[766]: chronyd exiting Nov 23 02:37:56 localhost systemd[1]: chronyd.service: Deactivated successfully. Nov 23 02:37:56 localhost systemd[1]: Stopped NTP client/server. Nov 23 02:37:56 localhost systemd[1]: chronyd.service: Consumed 81ms CPU time, read 1.9M from disk, written 0B to disk. Nov 23 02:37:56 localhost systemd[1]: Starting NTP client/server... Nov 23 02:37:56 localhost chronyd[25400]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Nov 23 02:37:56 localhost chronyd[25400]: Frequency -30.710 +/- 0.114 ppm read from /var/lib/chrony/drift Nov 23 02:37:56 localhost chronyd[25400]: Loaded seccomp filter (level 2) Nov 23 02:37:56 localhost systemd[1]: Started NTP client/server. Nov 23 02:37:57 localhost python3[25449]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:37:58 localhost python3[25492]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763883477.4148452-56484-182012679608223/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:37:58 localhost python3[25522]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:37:58 localhost systemd[1]: Reloading. Nov 23 02:37:58 localhost systemd-rc-local-generator[25544]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:37:58 localhost systemd-sysv-generator[25547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:37:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:37:58 localhost systemd[1]: Reloading. Nov 23 02:37:58 localhost systemd-rc-local-generator[25586]: /etc/rc.d/rc.local is not marked executable, skipping. 
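For reference, the host-entries block passed to the second blockinfile task above (the one with insertbefore=BOF) expands, with #012 read as a newline and the "# {mark}" markers filled in, to the stanza below; it replaces the old HEAT_HOSTS block that the first blockinfile call removed:

    # START_HOST_ENTRIES_FOR_STACK: overcloud
    192.168.122.106 np0005532584.localdomain np0005532584
    192.168.122.106 np0005532584.ctlplane.localdomain np0005532584.ctlplane
    192.168.122.107 np0005532585.localdomain np0005532585
    192.168.122.107 np0005532585.ctlplane.localdomain np0005532585.ctlplane
    192.168.122.108 np0005532586.localdomain np0005532586
    192.168.122.108 np0005532586.ctlplane.localdomain np0005532586.ctlplane
    192.168.122.103 np0005532581.localdomain np0005532581
    192.168.122.103 np0005532581.ctlplane.localdomain np0005532581.ctlplane
    192.168.122.104 np0005532582.localdomain np0005532582
    192.168.122.104 np0005532582.ctlplane.localdomain np0005532582.ctlplane
    192.168.122.105 np0005532583.localdomain np0005532583
    192.168.122.105 np0005532583.ctlplane.localdomain np0005532583.ctlplane

    192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
    # END_HOST_ENTRIES_FOR_STACK: overcloud

The edit is made on a temporary copy (/tmp/ansible.oqvifw00tmphosts) and only then copied over /etc/hosts, which is why the cp and cleanup tasks follow.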
Nov 23 02:37:58 localhost systemd-sysv-generator[25592]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:37:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:37:59 localhost systemd[1]: Starting chronyd online sources service... Nov 23 02:37:59 localhost chronyc[25599]: 200 OK Nov 23 02:37:59 localhost systemd[1]: chrony-online.service: Deactivated successfully. Nov 23 02:37:59 localhost systemd[1]: Finished chronyd online sources service. Nov 23 02:37:59 localhost python3[25615]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:37:59 localhost chronyd[25400]: System clock was stepped by -0.000000 seconds Nov 23 02:38:00 localhost python3[25632]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:38:01 localhost chronyd[25400]: Selected source 198.50.174.203 (pool.ntp.org) Nov 23 02:38:10 localhost python3[25649]: ansible-timezone Invoked with name=UTC hwclock=None Nov 23 02:38:10 localhost systemd[1]: Starting Time & Date Service... Nov 23 02:38:10 localhost systemd[1]: Started Time & Date Service. Nov 23 02:38:12 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 23 02:38:12 localhost python3[25672]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 02:38:12 localhost chronyd[25400]: chronyd exiting Nov 23 02:38:12 localhost systemd[1]: Stopping NTP client/server... Nov 23 02:38:12 localhost systemd[1]: chronyd.service: Deactivated successfully. Nov 23 02:38:12 localhost systemd[1]: Stopped NTP client/server. Nov 23 02:38:12 localhost systemd[1]: Starting NTP client/server... Nov 23 02:38:12 localhost chronyd[25679]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Nov 23 02:38:12 localhost chronyd[25679]: Frequency -30.710 +/- 0.121 ppm read from /var/lib/chrony/drift Nov 23 02:38:12 localhost chronyd[25679]: Loaded seccomp filter (level 2) Nov 23 02:38:12 localhost systemd[1]: Started NTP client/server. Nov 23 02:38:17 localhost chronyd[25679]: Selected source 167.160.187.12 (pool.ntp.org) Nov 23 02:38:40 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Nov 23 02:40:09 localhost sshd[25877]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:09 localhost systemd[1]: Created slice User Slice of UID 1002. Nov 23 02:40:09 localhost systemd[1]: Starting User Runtime Directory /run/user/1002... Nov 23 02:40:09 localhost systemd-logind[760]: New session 14 of user ceph-admin. Nov 23 02:40:09 localhost systemd[1]: Finished User Runtime Directory /run/user/1002. Nov 23 02:40:09 localhost systemd[1]: Starting User Manager for UID 1002... 
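Stripped of the Ansible module plumbing, the chrony steps above reduce to roughly the commands below. The two chronyc calls are verbatim from the log; the systemctl and timedatectl lines are the usual CLI equivalents of the systemd and timezone module invocations and are given only as an approximation:

    systemctl enable --now chronyd   # ansible.legacy.systemd: enabled=True, state=started
    chronyc makestep                 # step the clock immediately if it is offset
    chronyc waitsync 30              # wait for synchronisation, giving up after 30 checks
    timedatectl set-timezone UTC     # ansible-timezone: name=UTC
    systemctl restart chronyd        # restart so chronyd picks up /etc/chrony.conf and the timezone

The chrony-online.service unit installed in between is likewise not logged in full; judging by the "chronyc[25599]: 200 OK" entry, it simply asks chronyd to bring its NTP sources online.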
Nov 23 02:40:09 localhost sshd[25894]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:09 localhost systemd[25881]: Queued start job for default target Main User Target. Nov 23 02:40:09 localhost systemd[25881]: Created slice User Application Slice. Nov 23 02:40:09 localhost systemd[25881]: Started Mark boot as successful after the user session has run 2 minutes. Nov 23 02:40:09 localhost systemd[25881]: Started Daily Cleanup of User's Temporary Directories. Nov 23 02:40:09 localhost systemd[25881]: Reached target Paths. Nov 23 02:40:09 localhost systemd[25881]: Reached target Timers. Nov 23 02:40:09 localhost systemd[25881]: Starting D-Bus User Message Bus Socket... Nov 23 02:40:09 localhost systemd[25881]: Starting Create User's Volatile Files and Directories... Nov 23 02:40:09 localhost systemd[25881]: Listening on D-Bus User Message Bus Socket. Nov 23 02:40:09 localhost systemd[25881]: Finished Create User's Volatile Files and Directories. Nov 23 02:40:09 localhost systemd[25881]: Reached target Sockets. Nov 23 02:40:09 localhost systemd[25881]: Reached target Basic System. Nov 23 02:40:09 localhost systemd[25881]: Reached target Main User Target. Nov 23 02:40:09 localhost systemd[25881]: Startup finished in 117ms. Nov 23 02:40:10 localhost systemd[1]: Started User Manager for UID 1002. Nov 23 02:40:10 localhost systemd[1]: Started Session 14 of User ceph-admin. Nov 23 02:40:10 localhost systemd-logind[760]: New session 16 of user ceph-admin. Nov 23 02:40:10 localhost systemd[1]: Started Session 16 of User ceph-admin. Nov 23 02:40:10 localhost sshd[25916]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:10 localhost systemd-logind[760]: New session 17 of user ceph-admin. Nov 23 02:40:10 localhost systemd[1]: Started Session 17 of User ceph-admin. Nov 23 02:40:10 localhost sshd[25935]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:10 localhost systemd-logind[760]: New session 18 of user ceph-admin. Nov 23 02:40:10 localhost systemd[1]: Started Session 18 of User ceph-admin. Nov 23 02:40:11 localhost sshd[25954]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:11 localhost systemd-logind[760]: New session 19 of user ceph-admin. Nov 23 02:40:11 localhost systemd[1]: Started Session 19 of User ceph-admin. Nov 23 02:40:11 localhost sshd[25973]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:11 localhost systemd-logind[760]: New session 20 of user ceph-admin. Nov 23 02:40:11 localhost systemd[1]: Started Session 20 of User ceph-admin. Nov 23 02:40:11 localhost sshd[25992]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:11 localhost systemd-logind[760]: New session 21 of user ceph-admin. Nov 23 02:40:11 localhost systemd[1]: Started Session 21 of User ceph-admin. Nov 23 02:40:12 localhost sshd[26011]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:12 localhost systemd-logind[760]: New session 22 of user ceph-admin. Nov 23 02:40:12 localhost systemd[1]: Started Session 22 of User ceph-admin. Nov 23 02:40:12 localhost sshd[26030]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:12 localhost systemd-logind[760]: New session 23 of user ceph-admin. Nov 23 02:40:12 localhost systemd[1]: Started Session 23 of User ceph-admin. Nov 23 02:40:12 localhost sshd[26049]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:12 localhost systemd-logind[760]: New session 24 of user ceph-admin. Nov 23 02:40:12 localhost systemd[1]: Started Session 24 of User ceph-admin. 
Nov 23 02:40:13 localhost sshd[26066]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:13 localhost systemd-logind[760]: New session 25 of user ceph-admin. Nov 23 02:40:13 localhost systemd[1]: Started Session 25 of User ceph-admin. Nov 23 02:40:13 localhost sshd[26085]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:13 localhost systemd-logind[760]: New session 26 of user ceph-admin. Nov 23 02:40:13 localhost systemd[1]: Started Session 26 of User ceph-admin. Nov 23 02:40:14 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:40:21 localhost sshd[26124]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:40 localhost sshd[26126]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:43 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:40:43 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:40:44 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:40:44 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:40:44 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26302 (sysctl) Nov 23 02:40:44 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System... Nov 23 02:40:44 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System. Nov 23 02:40:45 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:40:45 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:40:49 localhost kernel: VFS: idmapped mount is not enabled. Nov 23 02:40:49 localhost sshd[26568]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:40:59 localhost sshd[26569]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:41:09 localhost podman[26440]: Nov 23 02:41:09 localhost podman[26440]: 2025-11-23 07:41:09.962160657 +0000 UTC m=+23.850543326 container create 026eb9c2855f527c10c540ee60994884a62cb3be675138ad11012d24e6e78e2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_kirch, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, ceph=True, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, maintainer=Guillaume Abrioux ) Nov 23 02:41:09 localhost systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2508715363-merged.mount: Deactivated successfully. Nov 23 02:41:10 localhost systemd[1]: Created slice Slice /machine. 
Nov 23 02:41:10 localhost systemd[1]: Started libpod-conmon-026eb9c2855f527c10c540ee60994884a62cb3be675138ad11012d24e6e78e2c.scope. Nov 23 02:41:10 localhost podman[26440]: 2025-11-23 07:40:46.158136955 +0000 UTC m=+0.046519674 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:10 localhost systemd[1]: Started libcrun container. Nov 23 02:41:10 localhost podman[26440]: 2025-11-23 07:41:10.057828583 +0000 UTC m=+23.946211252 container init 026eb9c2855f527c10c540ee60994884a62cb3be675138ad11012d24e6e78e2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_kirch, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vcs-type=git, release=553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, name=rhceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, ceph=True) Nov 23 02:41:10 localhost podman[26440]: 2025-11-23 07:41:10.069239916 +0000 UTC m=+23.957622605 container start 026eb9c2855f527c10c540ee60994884a62cb3be675138ad11012d24e6e78e2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_kirch, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=) Nov 23 02:41:10 localhost podman[26440]: 2025-11-23 07:41:10.07096805 +0000 UTC m=+23.959350799 container attach 026eb9c2855f527c10c540ee60994884a62cb3be675138ad11012d24e6e78e2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_kirch, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured 
and supported base image., io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, release=553, architecture=x86_64, vcs-type=git, version=7, RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 02:41:10 localhost vibrant_kirch[26671]: 167 167 Nov 23 02:41:10 localhost systemd[1]: libpod-026eb9c2855f527c10c540ee60994884a62cb3be675138ad11012d24e6e78e2c.scope: Deactivated successfully. Nov 23 02:41:10 localhost podman[26440]: 2025-11-23 07:41:10.072869958 +0000 UTC m=+23.961252687 container died 026eb9c2855f527c10c540ee60994884a62cb3be675138ad11012d24e6e78e2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_kirch, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc.) Nov 23 02:41:10 localhost podman[26676]: 2025-11-23 07:41:10.158470002 +0000 UTC m=+0.074582923 container remove 026eb9c2855f527c10c540ee60994884a62cb3be675138ad11012d24e6e78e2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_kirch, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 02:41:10 localhost systemd[1]: libpod-conmon-026eb9c2855f527c10c540ee60994884a62cb3be675138ad11012d24e6e78e2c.scope: Deactivated successfully. 
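The short-lived vibrant_kirch container above prints only "167 167", which matches the numeric ceph user and group IDs shipped in the rhceph images. A plausible way to reproduce that probe by hand is sketched below; the exact command the orchestrator ran is not in the log, so this is an assumption:

    # Assumed probe: report the numeric owner and group of /var/lib/ceph inside the image.
    podman run --rm --entrypoint stat \
      registry.redhat.io/rhceph/rhceph-7-rhel9:latest \
      -c '%u %g' /var/lib/ceph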
Nov 23 02:41:10 localhost podman[26699]: Nov 23 02:41:10 localhost podman[26699]: 2025-11-23 07:41:10.357164852 +0000 UTC m=+0.045221573 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:10 localhost systemd[1]: var-lib-containers-storage-overlay-6247de7cb8d43abf8e599b8d2ab03661753d07f042b6f505d294d2c8029912a8-merged.mount: Deactivated successfully. Nov 23 02:41:13 localhost podman[26699]: 2025-11-23 07:41:13.441815751 +0000 UTC m=+3.129872432 container create 782963f9da146f875db53fc1250e8687a18b990506e46370462b224558ed8801 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_banzai, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, version=7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , architecture=x86_64) Nov 23 02:41:13 localhost systemd[1]: Started libpod-conmon-782963f9da146f875db53fc1250e8687a18b990506e46370462b224558ed8801.scope. Nov 23 02:41:13 localhost systemd[1]: Started libcrun container. 
Nov 23 02:41:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd71d02fbd58f9ed128854527411528554297d55d83f0cccc224920aef9b780e/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd71d02fbd58f9ed128854527411528554297d55d83f0cccc224920aef9b780e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:13 localhost podman[26699]: 2025-11-23 07:41:13.548082556 +0000 UTC m=+3.236139237 container init 782963f9da146f875db53fc1250e8687a18b990506e46370462b224558ed8801 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_banzai, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-09-24T08:57:55, release=553, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph) Nov 23 02:41:13 localhost podman[26699]: 2025-11-23 07:41:13.558143058 +0000 UTC m=+3.246199749 container start 782963f9da146f875db53fc1250e8687a18b990506e46370462b224558ed8801 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_banzai, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, vcs-type=git, ceph=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, release=553, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-09-24T08:57:55) Nov 23 02:41:13 localhost podman[26699]: 2025-11-23 07:41:13.558391195 +0000 UTC m=+3.246447876 container attach 782963f9da146f875db53fc1250e8687a18b990506e46370462b224558ed8801 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_banzai, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 
in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, ceph=True) Nov 23 02:41:14 localhost angry_banzai[26970]: [ Nov 23 02:41:14 localhost angry_banzai[26970]: { Nov 23 02:41:14 localhost angry_banzai[26970]: "available": false, Nov 23 02:41:14 localhost angry_banzai[26970]: "ceph_device": false, Nov 23 02:41:14 localhost angry_banzai[26970]: "device_id": "QEMU_DVD-ROM_QM00001", Nov 23 02:41:14 localhost angry_banzai[26970]: "lsm_data": {}, Nov 23 02:41:14 localhost angry_banzai[26970]: "lvs": [], Nov 23 02:41:14 localhost angry_banzai[26970]: "path": "/dev/sr0", Nov 23 02:41:14 localhost angry_banzai[26970]: "rejected_reasons": [ Nov 23 02:41:14 localhost angry_banzai[26970]: "Has a FileSystem", Nov 23 02:41:14 localhost angry_banzai[26970]: "Insufficient space (<5GB)" Nov 23 02:41:14 localhost angry_banzai[26970]: ], Nov 23 02:41:14 localhost angry_banzai[26970]: "sys_api": { Nov 23 02:41:14 localhost angry_banzai[26970]: "actuators": null, Nov 23 02:41:14 localhost angry_banzai[26970]: "device_nodes": "sr0", Nov 23 02:41:14 localhost angry_banzai[26970]: "human_readable_size": "482.00 KB", Nov 23 02:41:14 localhost angry_banzai[26970]: "id_bus": "ata", Nov 23 02:41:14 localhost angry_banzai[26970]: "model": "QEMU DVD-ROM", Nov 23 02:41:14 localhost angry_banzai[26970]: "nr_requests": "2", Nov 23 02:41:14 localhost angry_banzai[26970]: "partitions": {}, Nov 23 02:41:14 localhost angry_banzai[26970]: "path": "/dev/sr0", Nov 23 02:41:14 localhost angry_banzai[26970]: "removable": "1", Nov 23 02:41:14 localhost angry_banzai[26970]: "rev": "2.5+", Nov 23 02:41:14 localhost angry_banzai[26970]: "ro": "0", Nov 23 02:41:14 localhost angry_banzai[26970]: "rotational": "1", Nov 23 02:41:14 localhost angry_banzai[26970]: "sas_address": "", Nov 23 02:41:14 localhost angry_banzai[26970]: "sas_device_handle": "", Nov 23 02:41:14 localhost angry_banzai[26970]: "scheduler_mode": "mq-deadline", Nov 23 02:41:14 localhost angry_banzai[26970]: "sectors": 0, Nov 23 02:41:14 localhost angry_banzai[26970]: "sectorsize": "2048", Nov 23 02:41:14 localhost angry_banzai[26970]: "size": 493568.0, Nov 23 02:41:14 localhost angry_banzai[26970]: "support_discard": "0", Nov 23 02:41:14 localhost angry_banzai[26970]: "type": "disk", Nov 23 02:41:14 localhost angry_banzai[26970]: "vendor": "QEMU" Nov 23 02:41:14 localhost angry_banzai[26970]: } Nov 23 02:41:14 localhost angry_banzai[26970]: } Nov 23 02:41:14 localhost angry_banzai[26970]: ] Nov 23 02:41:14 localhost systemd[1]: libpod-782963f9da146f875db53fc1250e8687a18b990506e46370462b224558ed8801.scope: Deactivated successfully. 
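The JSON printed by the angry_banzai container is a storage inventory: the device it reports, /dev/sr0, is rejected because it already carries a filesystem and is smaller than 5 GB, so it is not a candidate OSD device. A similar report can be generated by hand with ceph-volume from the same image; cephadm's exact bind mounts are not visible in the log, so the ones below are assumptions sufficient for a read-only inventory:

    # Run ceph-volume's device inventory inside the rhceph image (needs /dev and udev data).
    podman run --rm --privileged \
      -v /dev:/dev -v /run/udev:/run/udev \
      --entrypoint ceph-volume \
      registry.redhat.io/rhceph/rhceph-7-rhel9:latest \
      inventory --format json-pretty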
Nov 23 02:41:14 localhost podman[28356]: 2025-11-23 07:41:14.563927206 +0000 UTC m=+0.037772133 container died 782963f9da146f875db53fc1250e8687a18b990506e46370462b224558ed8801 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_banzai, release=553, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 02:41:14 localhost systemd[1]: var-lib-containers-storage-overlay-cd71d02fbd58f9ed128854527411528554297d55d83f0cccc224920aef9b780e-merged.mount: Deactivated successfully. Nov 23 02:41:14 localhost podman[28356]: 2025-11-23 07:41:14.598098435 +0000 UTC m=+0.071943342 container remove 782963f9da146f875db53fc1250e8687a18b990506e46370462b224558ed8801 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_banzai, name=rhceph, architecture=x86_64, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.buildah.version=1.33.12, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container) Nov 23 02:41:14 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:41:14 localhost systemd[1]: libpod-conmon-782963f9da146f875db53fc1250e8687a18b990506e46370462b224558ed8801.scope: Deactivated successfully. Nov 23 02:41:15 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully. Nov 23 02:41:15 localhost systemd[1]: Closed Process Core Dump Socket. Nov 23 02:41:15 localhost systemd[1]: Stopping Process Core Dump Socket... Nov 23 02:41:15 localhost systemd[1]: Listening on Process Core Dump Socket. Nov 23 02:41:15 localhost systemd[1]: Reloading. Nov 23 02:41:15 localhost systemd-rc-local-generator[28440]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 23 02:41:15 localhost systemd-sysv-generator[28446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:41:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:41:15 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:41:15 localhost systemd[1]: Reloading. Nov 23 02:41:15 localhost systemd-sysv-generator[28482]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:41:15 localhost systemd-rc-local-generator[28479]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:41:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:41:35 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:41:35 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:41:35 localhost podman[28561]: Nov 23 02:41:35 localhost podman[28561]: 2025-11-23 07:41:35.626252026 +0000 UTC m=+0.070708348 container create af9b6357a199197df0715285a991b92154a89284dd406eba7337eecb62f5782c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_morse, version=7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, release=553, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main) Nov 23 02:41:35 localhost systemd[1]: Started libpod-conmon-af9b6357a199197df0715285a991b92154a89284dd406eba7337eecb62f5782c.scope. Nov 23 02:41:35 localhost systemd[1]: Started libcrun container. 
Nov 23 02:41:35 localhost podman[28561]: 2025-11-23 07:41:35.598276649 +0000 UTC m=+0.042732981 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:35 localhost podman[28561]: 2025-11-23 07:41:35.701881807 +0000 UTC m=+0.146338139 container init af9b6357a199197df0715285a991b92154a89284dd406eba7337eecb62f5782c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_morse, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, release=553, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 02:41:35 localhost podman[28561]: 2025-11-23 07:41:35.712035855 +0000 UTC m=+0.156492177 container start af9b6357a199197df0715285a991b92154a89284dd406eba7337eecb62f5782c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_morse, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.buildah.version=1.33.12, release=553, RELEASE=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 02:41:35 localhost podman[28561]: 2025-11-23 07:41:35.712361975 +0000 UTC m=+0.156818317 container attach af9b6357a199197df0715285a991b92154a89284dd406eba7337eecb62f5782c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_morse, RELEASE=main, vcs-type=git, io.openshift.expose-services=, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, name=rhceph, architecture=x86_64, GIT_BRANCH=main, version=7, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, 
GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 02:41:35 localhost beautiful_morse[28576]: 167 167 Nov 23 02:41:35 localhost systemd[1]: libpod-af9b6357a199197df0715285a991b92154a89284dd406eba7337eecb62f5782c.scope: Deactivated successfully. Nov 23 02:41:35 localhost podman[28561]: 2025-11-23 07:41:35.715853074 +0000 UTC m=+0.160309436 container died af9b6357a199197df0715285a991b92154a89284dd406eba7337eecb62f5782c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_morse, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-type=git, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux ) Nov 23 02:41:35 localhost podman[28581]: 2025-11-23 07:41:35.800217519 +0000 UTC m=+0.071019547 container remove af9b6357a199197df0715285a991b92154a89284dd406eba7337eecb62f5782c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_morse, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, version=7, architecture=x86_64, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main) Nov 23 02:41:35 localhost systemd[1]: libpod-conmon-af9b6357a199197df0715285a991b92154a89284dd406eba7337eecb62f5782c.scope: Deactivated successfully. Nov 23 02:41:35 localhost systemd[1]: Reloading. Nov 23 02:41:35 localhost systemd-rc-local-generator[28619]: /etc/rc.d/rc.local is not marked executable, skipping. 
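Note: the recurring "rc.local is not marked executable, skipping" entries are systemd-rc-local-generator observing that /etc/rc.d/rc.local exists but lacks the executable bit, so no compatibility unit is generated for it; the message is harmless if rc.local is unused. If the script is actually meant to run at boot, marking it executable is the documented condition (a sketch, assuming rc.local should really be enabled on this host):

    chmod +x /etc/rc.d/rc.local
    systemctl start rc-local.service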
Nov 23 02:41:35 localhost systemd-sysv-generator[28622]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:41:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:41:36 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:41:36 localhost systemd[1]: Reloading. Nov 23 02:41:36 localhost systemd-sysv-generator[28662]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:41:36 localhost systemd-rc-local-generator[28657]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:41:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:41:36 localhost systemd[1]: Reached target All Ceph clusters and services. Nov 23 02:41:36 localhost systemd[1]: Reloading. Nov 23 02:41:36 localhost systemd-sysv-generator[28699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:41:36 localhost systemd-rc-local-generator[28694]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:41:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:41:36 localhost systemd[1]: Reached target Ceph cluster 46550e70-79cb-5f55-bf6d-1204b97e083b. Nov 23 02:41:36 localhost systemd[1]: Reloading. Nov 23 02:41:36 localhost systemd-rc-local-generator[28734]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:41:36 localhost systemd-sysv-generator[28739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:41:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:41:36 localhost systemd[1]: Reloading. Nov 23 02:41:36 localhost systemd-sysv-generator[28781]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:41:36 localhost systemd-rc-local-generator[28775]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:41:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:41:37 localhost systemd[1]: Created slice Slice /system/ceph-46550e70-79cb-5f55-bf6d-1204b97e083b. 
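Note: each "Reloading." above corresponds to cephadm rewriting unit files and asking systemd to re-read them; the host then reaches ceph.target ("All Ceph clusters and services"), the per-cluster target for fsid 46550e70-79cb-5f55-bf6d-1204b97e083b, and a matching system slice. To see what ends up grouped under that cluster, something like the following can be used (the target name follows cephadm's usual fsid-based naming and is an assumption, not read from this log):

    systemctl list-dependencies ceph-46550e70-79cb-5f55-bf6d-1204b97e083b.target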
Nov 23 02:41:37 localhost systemd[1]: Reached target System Time Set. Nov 23 02:41:37 localhost systemd[1]: Reached target System Time Synchronized. Nov 23 02:41:37 localhost systemd[1]: Starting Ceph crash.np0005532584 for 46550e70-79cb-5f55-bf6d-1204b97e083b... Nov 23 02:41:37 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:41:37 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 23 02:41:37 localhost podman[28840]: Nov 23 02:41:37 localhost podman[28840]: 2025-11-23 07:41:37.392578388 +0000 UTC m=+0.071872924 container create 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.openshift.expose-services=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, release=553, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux ) Nov 23 02:41:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aa253bea82da65d80f6c1d820b7619163be2bd8aff7eaf8c5596626a70492b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aa253bea82da65d80f6c1d820b7619163be2bd8aff7eaf8c5596626a70492b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:37 localhost podman[28840]: 2025-11-23 07:41:37.36554292 +0000 UTC m=+0.044837506 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aa253bea82da65d80f6c1d820b7619163be2bd8aff7eaf8c5596626a70492b8/merged/etc/ceph/ceph.client.crash.np0005532584.keyring supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:37 localhost podman[28840]: 2025-11-23 07:41:37.482847638 +0000 UTC m=+0.162142174 container init 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 
9, distribution-scope=public, release=553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 02:41:37 localhost podman[28840]: 2025-11-23 07:41:37.493890794 +0000 UTC m=+0.173185320 container start 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, maintainer=Guillaume Abrioux , release=553, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 02:41:37 localhost bash[28840]: 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 Nov 23 02:41:37 localhost systemd[1]: Started Ceph crash.np0005532584 for 46550e70-79cb-5f55-bf6d-1204b97e083b. 
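Note: cephadm normally wraps each containerized daemon in a templated unit named ceph-<fsid>@<type>.<id>.service, which is what "Ceph crash.np0005532584 for 46550e70-79cb-5f55-bf6d-1204b97e083b" should correspond to here; the exact unit name below is inferred from that convention rather than read from the log:

    systemctl status 'ceph-46550e70-79cb-5f55-bf6d-1204b97e083b@crash.np0005532584.service'
    journalctl -u 'ceph-46550e70-79cb-5f55-bf6d-1204b97e083b@crash.np0005532584.service' -n 50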
Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: INFO:ceph-crash:pinging cluster to exercise our key Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: 2025-11-23T07:41:37.683+0000 7f2f80695640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: 2025-11-23T07:41:37.683+0000 7f2f80695640 -1 AuthRegistry(0x7f2f78067c70) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: 2025-11-23T07:41:37.687+0000 7f2f80695640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: 2025-11-23T07:41:37.687+0000 7f2f80695640 -1 AuthRegistry(0x7f2f80694000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: 2025-11-23T07:41:37.695+0000 7f2f7dc09640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: 2025-11-23T07:41:37.699+0000 7f2f7e40a640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: 2025-11-23T07:41:37.702+0000 7f2f7ec0b640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: 2025-11-23T07:41:37.702+0000 7f2f80695640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: [errno 13] RADOS permission denied (error connecting to the cluster) Nov 23 02:41:37 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584[28855]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s Nov 23 02:41:45 localhost podman[28940]: Nov 23 02:41:45 localhost podman[28940]: 2025-11-23 07:41:45.685964395 +0000 UTC m=+0.121768888 container create fd35285e4fd4f01546ef68dc2ff66b1fc67ca63d0de8cce9cd1d5f072805cad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_volhard, distribution-scope=public, version=7, name=rhceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vcs-type=git, RELEASE=main, architecture=x86_64, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph) Nov 23 02:41:45 localhost podman[28940]: 2025-11-23 07:41:45.59748612 +0000 UTC m=+0.033290543 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:45 localhost systemd[1]: Started libpod-conmon-fd35285e4fd4f01546ef68dc2ff66b1fc67ca63d0de8cce9cd1d5f072805cad7.scope. Nov 23 02:41:45 localhost systemd[1]: Started libcrun container. Nov 23 02:41:45 localhost podman[28940]: 2025-11-23 07:41:45.775247154 +0000 UTC m=+0.211051587 container init fd35285e4fd4f01546ef68dc2ff66b1fc67ca63d0de8cce9cd1d5f072805cad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_volhard, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph) Nov 23 02:41:45 localhost podman[28940]: 2025-11-23 07:41:45.78564512 +0000 UTC m=+0.221449563 container start fd35285e4fd4f01546ef68dc2ff66b1fc67ca63d0de8cce9cd1d5f072805cad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_volhard, build-date=2025-09-24T08:57:55, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, release=553, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 02:41:45 localhost podman[28940]: 2025-11-23 07:41:45.786160696 +0000 UTC m=+0.221965129 container attach fd35285e4fd4f01546ef68dc2ff66b1fc67ca63d0de8cce9cd1d5f072805cad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_volhard, release=553, vcs-type=git, 
CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55) Nov 23 02:41:45 localhost inspiring_volhard[28954]: 167 167 Nov 23 02:41:45 localhost systemd[1]: libpod-fd35285e4fd4f01546ef68dc2ff66b1fc67ca63d0de8cce9cd1d5f072805cad7.scope: Deactivated successfully. Nov 23 02:41:45 localhost podman[28940]: 2025-11-23 07:41:45.789765248 +0000 UTC m=+0.225569721 container died fd35285e4fd4f01546ef68dc2ff66b1fc67ca63d0de8cce9cd1d5f072805cad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_volhard, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True) Nov 23 02:41:45 localhost podman[28960]: 2025-11-23 07:41:45.88710784 +0000 UTC m=+0.083099386 container remove fd35285e4fd4f01546ef68dc2ff66b1fc67ca63d0de8cce9cd1d5f072805cad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_volhard, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, GIT_CLEAN=True, 
vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, CEPH_POINT_RELEASE=) Nov 23 02:41:45 localhost systemd[1]: libpod-conmon-fd35285e4fd4f01546ef68dc2ff66b1fc67ca63d0de8cce9cd1d5f072805cad7.scope: Deactivated successfully. Nov 23 02:41:46 localhost podman[28984]: Nov 23 02:41:46 localhost podman[28984]: 2025-11-23 07:41:46.097236938 +0000 UTC m=+0.071756301 container create 3e13420751ba29146135dddb36ef81e0e34af09317eba653c08ef3b81f8ea4ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_aryabhata, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=553, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 02:41:46 localhost systemd[1]: Started libpod-conmon-3e13420751ba29146135dddb36ef81e0e34af09317eba653c08ef3b81f8ea4ef.scope. Nov 23 02:41:46 localhost systemd[1]: Started libcrun container. 
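Note: the ceph-crash startup errors above (no keyring found on the default /etc/ceph paths, handle_auth_bad_method, "[errno 13] RADOS permission denied") come from the daemon's initial "pinging cluster to exercise our key" probe; it then drops into its normal loop monitoring /var/lib/ceph/crash with a 600s delay. Whether later crash posts actually reach the cluster depends on the client.crash key, so verifying it is a reasonable first check (a sketch; the client name is taken from the container name above):

    cephadm shell -- ceph auth get client.crash.np0005532584
    cephadm shell -- ceph -s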
Nov 23 02:41:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53be22846fb5051f4235fbc947bf2681fcccad289db88b2659ad385517ce2f35/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:46 localhost podman[28984]: 2025-11-23 07:41:46.069208179 +0000 UTC m=+0.043727542 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53be22846fb5051f4235fbc947bf2681fcccad289db88b2659ad385517ce2f35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53be22846fb5051f4235fbc947bf2681fcccad289db88b2659ad385517ce2f35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53be22846fb5051f4235fbc947bf2681fcccad289db88b2659ad385517ce2f35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53be22846fb5051f4235fbc947bf2681fcccad289db88b2659ad385517ce2f35/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:46 localhost podman[28984]: 2025-11-23 07:41:46.22012927 +0000 UTC m=+0.194648643 container init 3e13420751ba29146135dddb36ef81e0e34af09317eba653c08ef3b81f8ea4ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_aryabhata, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, ceph=True, io.openshift.expose-services=, release=553, distribution-scope=public, architecture=x86_64, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7) Nov 23 02:41:46 localhost podman[28984]: 2025-11-23 07:41:46.231953681 +0000 UTC m=+0.206473004 container start 3e13420751ba29146135dddb36ef81e0e34af09317eba653c08ef3b81f8ea4ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_aryabhata, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, release=553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, 
com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.tags=rhceph ceph) Nov 23 02:41:46 localhost podman[28984]: 2025-11-23 07:41:46.232148437 +0000 UTC m=+0.206667760 container attach 3e13420751ba29146135dddb36ef81e0e34af09317eba653c08ef3b81f8ea4ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_aryabhata, GIT_CLEAN=True, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, release=553, build-date=2025-09-24T08:57:55, name=rhceph, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 02:41:46 localhost distracted_aryabhata[28999]: --> passed data devices: 0 physical, 2 LVM Nov 23 02:41:46 localhost distracted_aryabhata[28999]: --> relative data size: 1.0 Nov 23 02:41:46 localhost systemd[1]: var-lib-containers-storage-overlay-0428adb1370ee3c2983e0e04c7caf4f619f3642ef4146b486fbcc3137c3d9a7c-merged.mount: Deactivated successfully. Nov 23 02:41:46 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph-authtool --gen-print-key Nov 23 02:41:46 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b8d6ed04-737e-449f-9be4-28f685f59869 Nov 23 02:41:47 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph-authtool --gen-print-key Nov 23 02:41:47 localhost lvm[29054]: PV /dev/loop3 online, VG ceph_vg0 is complete. 
Nov 23 02:41:47 localhost lvm[29054]: VG ceph_vg0 finished Nov 23 02:41:47 localhost distracted_aryabhata[28999]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2 Nov 23 02:41:47 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0 Nov 23 02:41:47 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Nov 23 02:41:47 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block Nov 23 02:41:47 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap Nov 23 02:41:47 localhost distracted_aryabhata[28999]: stderr: got monmap epoch 3 Nov 23 02:41:47 localhost distracted_aryabhata[28999]: --> Creating keyring file for osd.2 Nov 23 02:41:47 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring Nov 23 02:41:47 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/ Nov 23 02:41:47 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid b8d6ed04-737e-449f-9be4-28f685f59869 --setuser ceph --setgroup ceph Nov 23 02:41:50 localhost distracted_aryabhata[28999]: stderr: 2025-11-23T07:41:47.900+0000 7f37745f0a80 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Nov 23 02:41:50 localhost distracted_aryabhata[28999]: stderr: 2025-11-23T07:41:47.900+0000 7f37745f0a80 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid Nov 23 02:41:50 localhost distracted_aryabhata[28999]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0 Nov 23 02:41:50 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Nov 23 02:41:50 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config Nov 23 02:41:50 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block Nov 23 02:41:50 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block Nov 23 02:41:50 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Nov 23 02:41:50 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Nov 23 02:41:50 localhost distracted_aryabhata[28999]: --> ceph-volume lvm activate successful for osd ID: 2 Nov 23 02:41:50 localhost distracted_aryabhata[28999]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0 Nov 23 02:41:50 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph-authtool --gen-print-key Nov 23 02:41:50 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring 
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9cfc5456-16f1-4b7f-b9c4-30f67cbbd45e Nov 23 02:41:51 localhost lvm[29985]: PV /dev/loop4 online, VG ceph_vg1 is complete. Nov 23 02:41:51 localhost lvm[29985]: VG ceph_vg1 finished Nov 23 02:41:51 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph-authtool --gen-print-key Nov 23 02:41:51 localhost distracted_aryabhata[28999]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-5 Nov 23 02:41:51 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1 Nov 23 02:41:51 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Nov 23 02:41:51 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-5/block Nov 23 02:41:51 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-5/activate.monmap Nov 23 02:41:51 localhost distracted_aryabhata[28999]: stderr: got monmap epoch 3 Nov 23 02:41:51 localhost distracted_aryabhata[28999]: --> Creating keyring file for osd.5 Nov 23 02:41:51 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/keyring Nov 23 02:41:51 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/ Nov 23 02:41:51 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 5 --monmap /var/lib/ceph/osd/ceph-5/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-5/ --osd-uuid 9cfc5456-16f1-4b7f-b9c4-30f67cbbd45e --setuser ceph --setgroup ceph Nov 23 02:41:54 localhost distracted_aryabhata[28999]: stderr: 2025-11-23T07:41:51.693+0000 7fef97c96a80 -1 bluestore(/var/lib/ceph/osd/ceph-5//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Nov 23 02:41:54 localhost distracted_aryabhata[28999]: stderr: 2025-11-23T07:41:51.693+0000 7fef97c96a80 -1 bluestore(/var/lib/ceph/osd/ceph-5/) _read_fsid unparsable uuid Nov 23 02:41:54 localhost distracted_aryabhata[28999]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1 Nov 23 02:41:54 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Nov 23 02:41:54 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-5 --no-mon-config Nov 23 02:41:54 localhost distracted_aryabhata[28999]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-5/block Nov 23 02:41:54 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block Nov 23 02:41:54 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Nov 23 02:41:54 localhost distracted_aryabhata[28999]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Nov 23 02:41:54 localhost distracted_aryabhata[28999]: --> ceph-volume lvm activate successful for osd ID: 5 Nov 23 02:41:54 localhost distracted_aryabhata[28999]: --> ceph-volume lvm create 
successful for: ceph_vg1/ceph_lv1 Nov 23 02:41:54 localhost systemd[1]: libpod-3e13420751ba29146135dddb36ef81e0e34af09317eba653c08ef3b81f8ea4ef.scope: Deactivated successfully. Nov 23 02:41:54 localhost systemd[1]: libpod-3e13420751ba29146135dddb36ef81e0e34af09317eba653c08ef3b81f8ea4ef.scope: Consumed 3.733s CPU time. Nov 23 02:41:54 localhost podman[28984]: 2025-11-23 07:41:54.292456416 +0000 UTC m=+8.266975759 container died 3e13420751ba29146135dddb36ef81e0e34af09317eba653c08ef3b81f8ea4ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_aryabhata, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=) Nov 23 02:41:54 localhost systemd[1]: tmp-crun.b9jnTS.mount: Deactivated successfully. Nov 23 02:41:54 localhost systemd[1]: var-lib-containers-storage-overlay-53be22846fb5051f4235fbc947bf2681fcccad289db88b2659ad385517ce2f35-merged.mount: Deactivated successfully. Nov 23 02:41:54 localhost podman[30884]: 2025-11-23 07:41:54.396136756 +0000 UTC m=+0.091298883 container remove 3e13420751ba29146135dddb36ef81e0e34af09317eba653c08ef3b81f8ea4ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_aryabhata, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=Guillaume Abrioux ) Nov 23 02:41:54 localhost systemd[1]: libpod-conmon-3e13420751ba29146135dddb36ef81e0e34af09317eba653c08ef3b81f8ea4ef.scope: Deactivated successfully. 
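Note: the distracted_aryabhata run above is the standard ceph-volume bluestore flow executed twice: osd new with a fresh OSD fsid, a tmpfs mount at /var/lib/ceph/osd/ceph-N, the block symlink to the LV, a monmap fetch, ceph-osd --mkfs, then prime-osd-dir and activation, yielding osd.2 on ceph_vg0/ceph_lv0 and osd.5 on ceph_vg1/ceph_lv1. The _read_bdev_label / _read_fsid stderr lines appear while the LV is still unlabeled during mkfs and are immediately followed by "prepare successful", so they read as expected noise here rather than failures. Outside of cephadm, roughly the same result comes from ceph-volume directly (a sketch; the actual run was driven by the default_drive_group spec, as the --osdspec-affinity flag shows):

    ceph-volume lvm create --bluestore --data ceph_vg0/ceph_lv0
    ceph-volume lvm list ceph_vg0/ceph_lv0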
Nov 23 02:41:55 localhost podman[30966]: Nov 23 02:41:55 localhost podman[30966]: 2025-11-23 07:41:55.152977472 +0000 UTC m=+0.078215182 container create deca512c14eab837aa09e43cfd62f6561af16e80e1e82b4f7a749662fb44de41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_feynman, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, name=rhceph, release=553, vcs-type=git, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Nov 23 02:41:55 localhost systemd[1]: Started libpod-conmon-deca512c14eab837aa09e43cfd62f6561af16e80e1e82b4f7a749662fb44de41.scope. Nov 23 02:41:55 localhost podman[30966]: 2025-11-23 07:41:55.121159705 +0000 UTC m=+0.046397415 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:55 localhost systemd[1]: Started libcrun container. Nov 23 02:41:55 localhost podman[30966]: 2025-11-23 07:41:55.277034372 +0000 UTC m=+0.202272082 container init deca512c14eab837aa09e43cfd62f6561af16e80e1e82b4f7a749662fb44de41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_feynman, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7) Nov 23 02:41:55 localhost podman[30966]: 2025-11-23 07:41:55.286450426 +0000 UTC m=+0.211688136 container start deca512c14eab837aa09e43cfd62f6561af16e80e1e82b4f7a749662fb44de41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_feynman, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, 
GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, version=7, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 02:41:55 localhost podman[30966]: 2025-11-23 07:41:55.286740525 +0000 UTC m=+0.211978245 container attach deca512c14eab837aa09e43cfd62f6561af16e80e1e82b4f7a749662fb44de41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_feynman, vcs-type=git, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public) Nov 23 02:41:55 localhost sleepy_feynman[30981]: 167 167 Nov 23 02:41:55 localhost systemd[1]: libpod-deca512c14eab837aa09e43cfd62f6561af16e80e1e82b4f7a749662fb44de41.scope: Deactivated successfully. 
Nov 23 02:41:55 localhost podman[30966]: 2025-11-23 07:41:55.291556917 +0000 UTC m=+0.216794657 container died deca512c14eab837aa09e43cfd62f6561af16e80e1e82b4f7a749662fb44de41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_feynman, architecture=x86_64, release=553, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Nov 23 02:41:55 localhost systemd[1]: var-lib-containers-storage-overlay-011047d0b4d77d3d24d8e373b3252ca3cd04a801f8836c228b66fd78c8056840-merged.mount: Deactivated successfully. Nov 23 02:41:55 localhost podman[30986]: 2025-11-23 07:41:55.379113001 +0000 UTC m=+0.074655580 container remove deca512c14eab837aa09e43cfd62f6561af16e80e1e82b4f7a749662fb44de41 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_feynman, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, version=7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 02:41:55 localhost systemd[1]: libpod-conmon-deca512c14eab837aa09e43cfd62f6561af16e80e1e82b4f7a749662fb44de41.scope: Deactivated successfully. 
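Note: the throwaway containers that only print "167 167" (beautiful_morse, inspiring_volhard, sleepy_feynman) are consistent with cephadm probing the image for the ceph user and group IDs, which are 167:167 in Red Hat Ceph Storage images and are later used to chown daemon directories on the host. That reading is an inference from the output, not stated in the log; it can be reproduced by hand with something like:

    podman run --rm registry.redhat.io/rhceph/rhceph-7-rhel9:latest stat -c '%u %g' /var/lib/ceph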
Nov 23 02:41:55 localhost podman[31006]: Nov 23 02:41:55 localhost podman[31006]: 2025-11-23 07:41:55.584322034 +0000 UTC m=+0.071026467 container create ae9716b396d26ec3c3f3762350e69b0a3a8c2042163cd075fc2b190b1d46992f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_borg, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, io.buildah.version=1.33.12, release=553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7) Nov 23 02:41:55 localhost systemd[1]: Started libpod-conmon-ae9716b396d26ec3c3f3762350e69b0a3a8c2042163cd075fc2b190b1d46992f.scope. Nov 23 02:41:55 localhost systemd[1]: Started libcrun container. Nov 23 02:41:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ada225fd7633f2ab49849b04ee7d91f661022d51f10d8ebb803e61c692e72a/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:55 localhost podman[31006]: 2025-11-23 07:41:55.560051944 +0000 UTC m=+0.046756397 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ada225fd7633f2ab49849b04ee7d91f661022d51f10d8ebb803e61c692e72a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ada225fd7633f2ab49849b04ee7d91f661022d51f10d8ebb803e61c692e72a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:55 localhost podman[31006]: 2025-11-23 07:41:55.681426398 +0000 UTC m=+0.168130831 container init ae9716b396d26ec3c3f3762350e69b0a3a8c2042163cd075fc2b190b1d46992f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_borg, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 02:41:55 localhost podman[31006]: 2025-11-23 07:41:55.69199152 +0000 UTC m=+0.178695973 container start ae9716b396d26ec3c3f3762350e69b0a3a8c2042163cd075fc2b190b1d46992f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_borg, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True) Nov 23 02:41:55 localhost podman[31006]: 2025-11-23 07:41:55.692371352 +0000 UTC m=+0.179075795 container attach ae9716b396d26ec3c3f3762350e69b0a3a8c2042163cd075fc2b190b1d46992f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_borg, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, release=553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph) Nov 23 02:41:55 localhost hopeful_borg[31021]: { Nov 23 02:41:55 localhost hopeful_borg[31021]: "2": [ Nov 23 02:41:55 localhost hopeful_borg[31021]: { Nov 23 02:41:55 localhost hopeful_borg[31021]: "devices": [ Nov 23 02:41:55 localhost hopeful_borg[31021]: "/dev/loop3" Nov 23 02:41:55 localhost hopeful_borg[31021]: ], Nov 23 02:41:55 localhost hopeful_borg[31021]: "lv_name": "ceph_lv0", Nov 23 02:41:55 localhost hopeful_borg[31021]: "lv_path": "/dev/ceph_vg0/ceph_lv0", Nov 23 02:41:55 localhost hopeful_borg[31021]: "lv_size": "7511998464", Nov 23 02:41:55 localhost hopeful_borg[31021]: "lv_tags": 
"ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=t42sgB-Oyb7-LfLy-hcz8-w98Y-rNw3-TW3IEB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=46550e70-79cb-5f55-bf6d-1204b97e083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b8d6ed04-737e-449f-9be4-28f685f59869,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0", Nov 23 02:41:55 localhost hopeful_borg[31021]: "lv_uuid": "t42sgB-Oyb7-LfLy-hcz8-w98Y-rNw3-TW3IEB", Nov 23 02:41:55 localhost hopeful_borg[31021]: "name": "ceph_lv0", Nov 23 02:41:55 localhost hopeful_borg[31021]: "path": "/dev/ceph_vg0/ceph_lv0", Nov 23 02:41:55 localhost hopeful_borg[31021]: "tags": { Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.block_uuid": "t42sgB-Oyb7-LfLy-hcz8-w98Y-rNw3-TW3IEB", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.cephx_lockbox_secret": "", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.cluster_fsid": "46550e70-79cb-5f55-bf6d-1204b97e083b", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.cluster_name": "ceph", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.crush_device_class": "", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.encrypted": "0", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.osd_fsid": "b8d6ed04-737e-449f-9be4-28f685f59869", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.osd_id": "2", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.osdspec_affinity": "default_drive_group", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.type": "block", Nov 23 02:41:55 localhost hopeful_borg[31021]: "ceph.vdo": "0" Nov 23 02:41:55 localhost hopeful_borg[31021]: }, Nov 23 02:41:55 localhost hopeful_borg[31021]: "type": "block", Nov 23 02:41:55 localhost hopeful_borg[31021]: "vg_name": "ceph_vg0" Nov 23 02:41:55 localhost hopeful_borg[31021]: } Nov 23 02:41:55 localhost hopeful_borg[31021]: ], Nov 23 02:41:55 localhost hopeful_borg[31021]: "5": [ Nov 23 02:41:55 localhost hopeful_borg[31021]: { Nov 23 02:41:55 localhost hopeful_borg[31021]: "devices": [ Nov 23 02:41:55 localhost hopeful_borg[31021]: "/dev/loop4" Nov 23 02:41:55 localhost hopeful_borg[31021]: ], Nov 23 02:41:55 localhost hopeful_borg[31021]: "lv_name": "ceph_lv1", Nov 23 02:41:55 localhost hopeful_borg[31021]: "lv_path": "/dev/ceph_vg1/ceph_lv1", Nov 23 02:41:55 localhost hopeful_borg[31021]: "lv_size": "7511998464", Nov 23 02:41:56 localhost hopeful_borg[31021]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=xTwKJl-5OHl-IUHM-iAy5-RgcY-OQC2-bH25Bz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=46550e70-79cb-5f55-bf6d-1204b97e083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9cfc5456-16f1-4b7f-b9c4-30f67cbbd45e,ceph.osd_id=5,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0", Nov 23 02:41:56 localhost hopeful_borg[31021]: "lv_uuid": "xTwKJl-5OHl-IUHM-iAy5-RgcY-OQC2-bH25Bz", Nov 23 02:41:56 localhost hopeful_borg[31021]: "name": "ceph_lv1", Nov 23 02:41:56 localhost hopeful_borg[31021]: "path": "/dev/ceph_vg1/ceph_lv1", Nov 23 02:41:56 localhost hopeful_borg[31021]: "tags": { Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1", Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.block_uuid": "xTwKJl-5OHl-IUHM-iAy5-RgcY-OQC2-bH25Bz", Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.cephx_lockbox_secret": "", Nov 23 02:41:56 localhost hopeful_borg[31021]: 
"ceph.cluster_fsid": "46550e70-79cb-5f55-bf6d-1204b97e083b", Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.cluster_name": "ceph", Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.crush_device_class": "", Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.encrypted": "0", Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.osd_fsid": "9cfc5456-16f1-4b7f-b9c4-30f67cbbd45e", Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.osd_id": "5", Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.osdspec_affinity": "default_drive_group", Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.type": "block", Nov 23 02:41:56 localhost hopeful_borg[31021]: "ceph.vdo": "0" Nov 23 02:41:56 localhost hopeful_borg[31021]: }, Nov 23 02:41:56 localhost hopeful_borg[31021]: "type": "block", Nov 23 02:41:56 localhost hopeful_borg[31021]: "vg_name": "ceph_vg1" Nov 23 02:41:56 localhost hopeful_borg[31021]: } Nov 23 02:41:56 localhost hopeful_borg[31021]: ] Nov 23 02:41:56 localhost hopeful_borg[31021]: } Nov 23 02:41:56 localhost systemd[1]: libpod-ae9716b396d26ec3c3f3762350e69b0a3a8c2042163cd075fc2b190b1d46992f.scope: Deactivated successfully. Nov 23 02:41:56 localhost podman[31006]: 2025-11-23 07:41:56.033494356 +0000 UTC m=+0.520198839 container died ae9716b396d26ec3c3f3762350e69b0a3a8c2042163cd075fc2b190b1d46992f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_borg, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-09-24T08:57:55, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, RELEASE=main, io.openshift.expose-services=, name=rhceph) Nov 23 02:41:56 localhost podman[31030]: 2025-11-23 07:41:56.127177962 +0000 UTC m=+0.079053249 container remove ae9716b396d26ec3c3f3762350e69b0a3a8c2042163cd075fc2b190b1d46992f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_borg, GIT_BRANCH=main, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph) Nov 23 02:41:56 localhost systemd[1]: libpod-conmon-ae9716b396d26ec3c3f3762350e69b0a3a8c2042163cd075fc2b190b1d46992f.scope: Deactivated successfully. Nov 23 02:41:56 localhost systemd[1]: var-lib-containers-storage-overlay-c0ada225fd7633f2ab49849b04ee7d91f661022d51f10d8ebb803e61c692e72a-merged.mount: Deactivated successfully. Nov 23 02:41:56 localhost podman[31118]: Nov 23 02:41:56 localhost podman[31118]: 2025-11-23 07:41:56.894310901 +0000 UTC m=+0.075530179 container create 4c267e7f1f1abad843db10d94b70359ef1d4dec3ed0439d785841ac63bf2e354 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mcclintock, RELEASE=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_CLEAN=True, distribution-scope=public, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container) Nov 23 02:41:56 localhost systemd[1]: Started libpod-conmon-4c267e7f1f1abad843db10d94b70359ef1d4dec3ed0439d785841ac63bf2e354.scope. Nov 23 02:41:56 localhost systemd[1]: Started libcrun container. 
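
The ceph-volume listing emitted by the hopeful_borg container above reports each OSD logical volume twice: once as a flat comma-separated "lv_tags" string and once as the parsed "tags" object. A minimal sketch of that parsing step, assuming the same key=value, comma-separated layout shown in the log (the sample string is abridged from the osd.2 entry above):

    # Minimal sketch: split a ceph-volume "lv_tags" string into a dict,
    # assuming the comma-separated key=value layout seen in the log above.
    lv_tags = (
        "ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
        "ceph.cluster_fsid=46550e70-79cb-5f55-bf6d-1204b97e083b,"
        "ceph.cluster_name=ceph,"
        "ceph.crush_device_class=,"
        "ceph.osd_fsid=b8d6ed04-737e-449f-9be4-28f685f59869,"
        "ceph.osd_id=2,"
        "ceph.type=block"
    )

    def parse_lv_tags(raw: str) -> dict:
        tags = {}
        for item in raw.split(","):
            if not item:
                continue
            key, _, value = item.partition("=")
            tags[key] = value  # empty values (e.g. ceph.crush_device_class=) stay empty strings
        return tags

    if __name__ == "__main__":
        parsed = parse_lv_tags(lv_tags)
        print(parsed["ceph.osd_id"], parsed["ceph.block_device"])  # -> 2 /dev/ceph_vg0/ceph_lv0
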
Nov 23 02:41:56 localhost podman[31118]: 2025-11-23 07:41:56.862993379 +0000 UTC m=+0.044212657 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:56 localhost podman[31118]: 2025-11-23 07:41:56.960037871 +0000 UTC m=+0.141257159 container init 4c267e7f1f1abad843db10d94b70359ef1d4dec3ed0439d785841ac63bf2e354 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mcclintock, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main) Nov 23 02:41:56 localhost podman[31118]: 2025-11-23 07:41:56.973509713 +0000 UTC m=+0.154728991 container start 4c267e7f1f1abad843db10d94b70359ef1d4dec3ed0439d785841ac63bf2e354 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mcclintock, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, ceph=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 02:41:56 localhost podman[31118]: 2025-11-23 07:41:56.973806442 +0000 UTC m=+0.155025760 container attach 4c267e7f1f1abad843db10d94b70359ef1d4dec3ed0439d785841ac63bf2e354 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mcclintock, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, 
GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, distribution-scope=public, release=553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Nov 23 02:41:56 localhost practical_mcclintock[31133]: 167 167 Nov 23 02:41:56 localhost systemd[1]: libpod-4c267e7f1f1abad843db10d94b70359ef1d4dec3ed0439d785841ac63bf2e354.scope: Deactivated successfully. Nov 23 02:41:56 localhost podman[31118]: 2025-11-23 07:41:56.978628884 +0000 UTC m=+0.159848182 container died 4c267e7f1f1abad843db10d94b70359ef1d4dec3ed0439d785841ac63bf2e354 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mcclintock, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64) Nov 23 02:41:57 localhost podman[31138]: 2025-11-23 07:41:57.07067724 +0000 UTC m=+0.082238899 container remove 4c267e7f1f1abad843db10d94b70359ef1d4dec3ed0439d785841ac63bf2e354 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mcclintock, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_CLEAN=True, version=7, architecture=x86_64, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git) Nov 23 02:41:57 localhost systemd[1]: libpod-conmon-4c267e7f1f1abad843db10d94b70359ef1d4dec3ed0439d785841ac63bf2e354.scope: Deactivated successfully. 
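
The short-lived probe container above (practical_mcclintock, and exciting_aryabhata later in this log) prints "167 167", the numeric uid and gid of the ceph account inside the image; the same mapping is confirmed further down by ceph-osd's "set uid:gid to 167:167 (ceph:ceph)". A hedged sketch of an equivalent check on a host that has a local ceph user; the expected value 167 is taken from this log, not from any guarantee about other builds:

    # Hedged sketch: confirm the local ceph account maps to the uid/gid the
    # probe containers reported ("167 167"). Assumes a ceph user/group exists
    # on the host; pwd/grp raise KeyError otherwise.
    import grp
    import pwd

    EXPECTED = 167  # value reported by the probe containers in this log

    uid = pwd.getpwnam("ceph").pw_uid
    gid = grp.getgrnam("ceph").gr_gid
    print(uid, gid)
    if (uid, gid) != (EXPECTED, EXPECTED):
        print("warning: host ceph uid/gid differs from the container's 167:167")
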
Nov 23 02:41:57 localhost podman[31168]: Nov 23 02:41:57 localhost podman[31168]: 2025-11-23 07:41:57.359796673 +0000 UTC m=+0.072225336 container create 6f5b5217d05cb5d00e9d038476bb56ea9a6bfa650173b6119105933d3d29380f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate-test, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, ceph=True, release=553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux ) Nov 23 02:41:57 localhost systemd[1]: var-lib-containers-storage-overlay-f85e8dbd459e61b1523713f9c291a38b57f78addb1e6ad55eb398d7e05ac415e-merged.mount: Deactivated successfully. Nov 23 02:41:57 localhost systemd[1]: Started libpod-conmon-6f5b5217d05cb5d00e9d038476bb56ea9a6bfa650173b6119105933d3d29380f.scope. Nov 23 02:41:57 localhost systemd[1]: Started libcrun container. Nov 23 02:41:57 localhost podman[31168]: 2025-11-23 07:41:57.333477348 +0000 UTC m=+0.045906011 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee5b9f93c8ed500f3681f318f1b657a0a8452c717e125f7feecddbae298e520/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee5b9f93c8ed500f3681f318f1b657a0a8452c717e125f7feecddbae298e520/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee5b9f93c8ed500f3681f318f1b657a0a8452c717e125f7feecddbae298e520/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee5b9f93c8ed500f3681f318f1b657a0a8452c717e125f7feecddbae298e520/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee5b9f93c8ed500f3681f318f1b657a0a8452c717e125f7feecddbae298e520/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:57 localhost podman[31168]: 2025-11-23 07:41:57.495349523 +0000 UTC m=+0.207778186 container init 6f5b5217d05cb5d00e9d038476bb56ea9a6bfa650173b6119105933d3d29380f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate-test, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, version=7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 02:41:57 localhost podman[31168]: 2025-11-23 07:41:57.506238343 +0000 UTC m=+0.218667016 container start 6f5b5217d05cb5d00e9d038476bb56ea9a6bfa650173b6119105933d3d29380f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate-test, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, distribution-scope=public, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 02:41:57 localhost podman[31168]: 2025-11-23 07:41:57.507099771 +0000 UTC m=+0.219528434 container attach 6f5b5217d05cb5d00e9d038476bb56ea9a6bfa650173b6119105933d3d29380f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate-test, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, version=7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True) Nov 23 02:41:57 localhost 
ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate-test[31184]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Nov 23 02:41:57 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate-test[31184]: [--no-systemd] [--no-tmpfs] Nov 23 02:41:57 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate-test[31184]: ceph-volume activate: error: unrecognized arguments: --bad-option Nov 23 02:41:57 localhost systemd[1]: libpod-6f5b5217d05cb5d00e9d038476bb56ea9a6bfa650173b6119105933d3d29380f.scope: Deactivated successfully. Nov 23 02:41:57 localhost podman[31168]: 2025-11-23 07:41:57.736311587 +0000 UTC m=+0.448740310 container died 6f5b5217d05cb5d00e9d038476bb56ea9a6bfa650173b6119105933d3d29380f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate-test, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph) Nov 23 02:41:57 localhost podman[31189]: 2025-11-23 07:41:57.813331631 +0000 UTC m=+0.068540580 container remove 6f5b5217d05cb5d00e9d038476bb56ea9a6bfa650173b6119105933d3d29380f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate-test, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, release=553, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph) Nov 23 02:41:57 localhost systemd-journald[617]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. 
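
The activate-test container fails on purpose: it passes --bad-option to ceph-volume activate, and the argparse-based CLI answers with a usage line plus "unrecognized arguments" before exiting. The observable behaviour of such a probe (usage on stderr, exit status 2) can be reproduced with a tiny stand-in parser; the parser below is purely illustrative and is not ceph-volume's real one:

    # Illustrative stand-in for the probe above: argparse prints usage to
    # stderr and exits with status 2 when it sees an unrecognized argument.
    import argparse
    import sys

    parser = argparse.ArgumentParser(prog="ceph-volume activate")
    parser.add_argument("--osd-id")
    parser.add_argument("--osd-uuid")
    parser.add_argument("--no-systemd", action="store_true")
    parser.add_argument("--no-tmpfs", action="store_true")

    try:
        parser.parse_args(["--bad-option"])
    except SystemExit as exc:
        # A caller probing the subcommand would key off this exit status.
        print(f"probe exited with status {exc.code}", file=sys.stderr)  # status 2
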
Nov 23 02:41:57 localhost systemd-journald[617]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 23 02:41:57 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 02:41:57 localhost systemd[1]: libpod-conmon-6f5b5217d05cb5d00e9d038476bb56ea9a6bfa650173b6119105933d3d29380f.scope: Deactivated successfully. Nov 23 02:41:57 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 02:41:58 localhost systemd[1]: Reloading. Nov 23 02:41:58 localhost systemd-rc-local-generator[31241]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:41:58 localhost systemd-sysv-generator[31246]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:41:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:41:58 localhost systemd[1]: tmp-crun.SKbOU9.mount: Deactivated successfully. Nov 23 02:41:58 localhost systemd[1]: var-lib-containers-storage-overlay-aee5b9f93c8ed500f3681f318f1b657a0a8452c717e125f7feecddbae298e520-merged.mount: Deactivated successfully. Nov 23 02:41:58 localhost systemd[1]: Reloading. Nov 23 02:41:58 localhost systemd-rc-local-generator[31285]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:41:58 localhost systemd-sysv-generator[31288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:41:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:41:58 localhost systemd[1]: Starting Ceph osd.2 for 46550e70-79cb-5f55-bf6d-1204b97e083b... 
Nov 23 02:41:58 localhost podman[31349]: Nov 23 02:41:58 localhost podman[31349]: 2025-11-23 07:41:58.979644663 +0000 UTC m=+0.071744960 container create 758b881617cac62f274ab29b39fb3e614f9c7546bb98215b34b2a00d1be20480 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7) Nov 23 02:41:59 localhost systemd[1]: tmp-crun.QUsyRs.mount: Deactivated successfully. Nov 23 02:41:59 localhost systemd[1]: Started libcrun container. Nov 23 02:41:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d671822d872eebd42d75f30915575f140a0eebc355a7c2325d01ce9d674827/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:59 localhost podman[31349]: 2025-11-23 07:41:58.950749057 +0000 UTC m=+0.042849344 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:41:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d671822d872eebd42d75f30915575f140a0eebc355a7c2325d01ce9d674827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d671822d872eebd42d75f30915575f140a0eebc355a7c2325d01ce9d674827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d671822d872eebd42d75f30915575f140a0eebc355a7c2325d01ce9d674827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d671822d872eebd42d75f30915575f140a0eebc355a7c2325d01ce9d674827/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff) Nov 23 02:41:59 localhost podman[31349]: 2025-11-23 07:41:59.105719395 +0000 UTC m=+0.197819702 container init 758b881617cac62f274ab29b39fb3e614f9c7546bb98215b34b2a00d1be20480 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a 
fully featured and supported base image., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7) Nov 23 02:41:59 localhost podman[31349]: 2025-11-23 07:41:59.119195578 +0000 UTC m=+0.211295885 container start 758b881617cac62f274ab29b39fb3e614f9c7546bb98215b34b2a00d1be20480 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.33.12, GIT_BRANCH=main, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7) Nov 23 02:41:59 localhost podman[31349]: 2025-11-23 07:41:59.119538948 +0000 UTC m=+0.211639245 container attach 758b881617cac62f274ab29b39fb3e614f9c7546bb98215b34b2a00d1be20480 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vcs-type=git, maintainer=Guillaume Abrioux , architecture=x86_64, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.expose-services=, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 02:41:59 localhost systemd[1]: tmp-crun.eXSZO3.mount: Deactivated successfully. 
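
The repeated kernel notices "supports timestamps until 2038 (0x7fffffff)" refer to the 32-bit signed time_t limit on the remounted xfs filesystems. The cutoff encoded by 0x7fffffff can be checked directly:

    # 0x7fffffff seconds after the Unix epoch is the classic 32-bit time_t
    # limit the kernel messages above are warning about.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF
    print(hex(limit), "->", datetime.fromtimestamp(limit, tz=timezone.utc))
    # 0x7fffffff -> 2038-01-19 03:14:07+00:00
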
Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate[31363]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Nov 23 02:41:59 localhost bash[31349]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate[31363]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0 Nov 23 02:41:59 localhost bash[31349]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0 Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate[31363]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0 Nov 23 02:41:59 localhost bash[31349]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0 Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate[31363]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Nov 23 02:41:59 localhost bash[31349]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate[31363]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block Nov 23 02:41:59 localhost bash[31349]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate[31363]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Nov 23 02:41:59 localhost bash[31349]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate[31363]: --> ceph-volume raw activate successful for osd ID: 2 Nov 23 02:41:59 localhost bash[31349]: --> ceph-volume raw activate successful for osd ID: 2 Nov 23 02:41:59 localhost systemd[1]: libpod-758b881617cac62f274ab29b39fb3e614f9c7546bb98215b34b2a00d1be20480.scope: Deactivated successfully. 
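
The osd-2-activate container's output above spells out what "ceph-volume raw activate" did for osd.2: re-own the OSD directory, prime it from the BlueStore device with ceph-bluestore-tool, re-own the device nodes, and symlink the block device into place. A sketch that replays those same logged commands from Python; the paths and device names are the ones from this log, and running it outside this exact environment (or without root) would fail:

    # Sketch only: replay the commands that "ceph-volume raw activate" logged
    # for osd.2 above. All paths/devices are copied from the log and assumed
    # to exist; needs root.
    import subprocess

    OSD_DIR = "/var/lib/ceph/osd/ceph-2"
    DEVICE = "/dev/mapper/ceph_vg0-ceph_lv0"

    steps = [
        ["/usr/bin/chown", "-R", "ceph:ceph", OSD_DIR],
        ["/usr/bin/ceph-bluestore-tool", "prime-osd-dir",
         "--path", OSD_DIR, "--no-mon-config", "--dev", DEVICE],
        ["/usr/bin/chown", "-h", "ceph:ceph", DEVICE],
        ["/usr/bin/chown", "-R", "ceph:ceph", "/dev/dm-0"],
        ["/usr/bin/ln", "-s", DEVICE, f"{OSD_DIR}/block"],
        ["/usr/bin/chown", "-R", "ceph:ceph", OSD_DIR],
    ]

    for cmd in steps:
        print("Running command:", " ".join(cmd))
        subprocess.run(cmd, check=True)
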
Nov 23 02:41:59 localhost podman[31349]: 2025-11-23 07:41:59.824607071 +0000 UTC m=+0.916707398 container died 758b881617cac62f274ab29b39fb3e614f9c7546bb98215b34b2a00d1be20480 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, distribution-scope=public, GIT_BRANCH=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, version=7, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-type=git) Nov 23 02:41:59 localhost systemd[1]: tmp-crun.NQEzfL.mount: Deactivated successfully. Nov 23 02:41:59 localhost systemd[1]: var-lib-containers-storage-overlay-91d671822d872eebd42d75f30915575f140a0eebc355a7c2325d01ce9d674827-merged.mount: Deactivated successfully. Nov 23 02:41:59 localhost podman[31488]: 2025-11-23 07:41:59.90943065 +0000 UTC m=+0.074106023 container remove 758b881617cac62f274ab29b39fb3e614f9c7546bb98215b34b2a00d1be20480 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2-activate, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.expose-services=, version=7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc.) 
Nov 23 02:42:00 localhost podman[31550]: Nov 23 02:42:00 localhost podman[31550]: 2025-11-23 07:42:00.229784604 +0000 UTC m=+0.074455996 container create 80016b2e6addf9c12d26443233463b56bc5686e1a1be352e6c075ef55d61c608 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, distribution-scope=public, ceph=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-09-24T08:57:55) Nov 23 02:42:00 localhost podman[31550]: 2025-11-23 07:42:00.20002351 +0000 UTC m=+0.044694932 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028e31ed4f5e419ab433c6df72e131eb8db66f4f95f2985be4c6e22e0c44d905/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028e31ed4f5e419ab433c6df72e131eb8db66f4f95f2985be4c6e22e0c44d905/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028e31ed4f5e419ab433c6df72e131eb8db66f4f95f2985be4c6e22e0c44d905/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028e31ed4f5e419ab433c6df72e131eb8db66f4f95f2985be4c6e22e0c44d905/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028e31ed4f5e419ab433c6df72e131eb8db66f4f95f2985be4c6e22e0c44d905/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:00 localhost podman[31550]: 2025-11-23 07:42:00.370124403 +0000 UTC m=+0.214795805 container init 80016b2e6addf9c12d26443233463b56bc5686e1a1be352e6c075ef55d61c608 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , 
CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 02:42:00 localhost podman[31550]: 2025-11-23 07:42:00.378247278 +0000 UTC m=+0.222918680 container start 80016b2e6addf9c12d26443233463b56bc5686e1a1be352e6c075ef55d61c608 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., release=553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 02:42:00 localhost bash[31550]: 80016b2e6addf9c12d26443233463b56bc5686e1a1be352e6c075ef55d61c608 Nov 23 02:42:00 localhost systemd[1]: Started Ceph osd.2 for 46550e70-79cb-5f55-bf6d-1204b97e083b. 
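
At this point systemd reports "Started Ceph osd.2 for 46550e70-79cb-5f55-bf6d-1204b97e083b" and bash echoes the ID of the long-running container (80016b2e...). A hedged way to confirm from the host that this container is still up, assuming podman is on PATH and the cephadm-style name seen in this log (ceph-<fsid>-osd-2):

    # Hedged check: ask podman whether the long-running osd.2 container from
    # this log is up. Assumes the container name seen above and podman on PATH.
    import subprocess

    NAME = "ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2"

    out = subprocess.run(
        ["podman", "ps", "--filter", f"name={NAME}",
         "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out or f"{NAME} is not running")
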
Nov 23 02:42:00 localhost ceph-osd[31569]: set uid:gid to 167:167 (ceph:ceph) Nov 23 02:42:00 localhost ceph-osd[31569]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Nov 23 02:42:00 localhost ceph-osd[31569]: pidfile_write: ignore empty --pid-file Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:00 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:00 localhost ceph-osd[31569]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) close Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) close Nov 23 02:42:00 localhost ceph-osd[31569]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal Nov 23 02:42:00 localhost ceph-osd[31569]: load: jerasure load: lrc Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:00 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Nov 23 02:42:00 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) close Nov 23 02:42:01 localhost podman[31664]: Nov 23 02:42:01 localhost podman[31664]: 2025-11-23 07:42:01.181800328 +0000 UTC m=+0.073134634 container create 9f62561370b642067c6aee50da56358ba8483c74fbb4c0054884aa074cbd3950 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_aryabhata, GIT_CLEAN=True, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container) Nov 23 02:42:01 localhost systemd[1]: Started libpod-conmon-9f62561370b642067c6aee50da56358ba8483c74fbb4c0054884aa074cbd3950.scope. Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) close Nov 23 02:42:01 localhost systemd[1]: Started libcrun container. Nov 23 02:42:01 localhost podman[31664]: 2025-11-23 07:42:01.152454078 +0000 UTC m=+0.043788414 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:42:01 localhost podman[31664]: 2025-11-23 07:42:01.260873857 +0000 UTC m=+0.152208173 container init 9f62561370b642067c6aee50da56358ba8483c74fbb4c0054884aa074cbd3950 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_aryabhata, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 02:42:01 localhost podman[31664]: 2025-11-23 07:42:01.269412204 +0000 UTC m=+0.160746510 container start 9f62561370b642067c6aee50da56358ba8483c74fbb4c0054884aa074cbd3950 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_aryabhata, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, architecture=x86_64, 
io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph) Nov 23 02:42:01 localhost podman[31664]: 2025-11-23 07:42:01.269615701 +0000 UTC m=+0.160950067 container attach 9f62561370b642067c6aee50da56358ba8483c74fbb4c0054884aa074cbd3950 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_aryabhata, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc.) Nov 23 02:42:01 localhost exciting_aryabhata[31677]: 167 167 Nov 23 02:42:01 localhost systemd[1]: libpod-9f62561370b642067c6aee50da56358ba8483c74fbb4c0054884aa074cbd3950.scope: Deactivated successfully. 
Nov 23 02:42:01 localhost podman[31664]: 2025-11-23 07:42:01.273507962 +0000 UTC m=+0.164842268 container died 9f62561370b642067c6aee50da56358ba8483c74fbb4c0054884aa074cbd3950 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_aryabhata, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, name=rhceph, GIT_BRANCH=main) Nov 23 02:42:01 localhost podman[31686]: 2025-11-23 07:42:01.361349027 +0000 UTC m=+0.079778373 container remove 9f62561370b642067c6aee50da56358ba8483c74fbb4c0054884aa074cbd3950 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_aryabhata, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_BRANCH=main, release=553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, distribution-scope=public, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55) Nov 23 02:42:01 localhost systemd[1]: var-lib-containers-storage-overlay-fdad559fdac0fc717830becb4ffb4a16e934ecc9203dfe3df3ddf97fc851b5f4-merged.mount: Deactivated successfully. Nov 23 02:42:01 localhost systemd[1]: libpod-conmon-9f62561370b642067c6aee50da56358ba8483c74fbb4c0054884aa074cbd3950.scope: Deactivated successfully. 
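
The ceph-osd lines keep repeating "_set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06", i.e. a 1 GiB BlueStore cache with the logged meta/kv/data ratios. The byte budgets those ratios imply can be worked out directly; the total and ratios below are the ones from this log, and the real allocator is more dynamic than this simple split:

    # Rough arithmetic on the logged BlueStore cache settings:
    # cache_size 1073741824 with ratios meta 0.45, kv 0.45, data 0.06.
    cache_size = 1073741824  # 1 GiB, from the log
    ratios = {"meta": 0.45, "kv": 0.45, "data": 0.06}

    for name, ratio in ratios.items():
        share = cache_size * ratio
        print(f"{name}: {share / 2**20:.0f} MiB")
    # meta: 461 MiB, kv: 461 MiB, data: 61 MiB (the logged ratios sum to 0.96)
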
Nov 23 02:42:01 localhost ceph-osd[31569]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Nov 23 02:42:01 localhost ceph-osd[31569]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675ae00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:01 localhost ceph-osd[31569]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB Nov 23 02:42:01 localhost ceph-osd[31569]: bluefs mount Nov 23 02:42:01 localhost ceph-osd[31569]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Nov 23 02:42:01 localhost ceph-osd[31569]: bluefs mount shared_bdev_used = 0 Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: RocksDB version: 7.9.2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Git sha 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Compile date 2025-09-23 00:00:00 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: DB SUMMARY Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: DB Session ID: VLRRVW6JOW0TYKEO4B8J Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: CURRENT file: CURRENT Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: IDENTITY file: IDENTITY Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.error_if_exists: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.create_if_missing: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.flush_verify_memtable_count: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.env: 
0x563f469eecb0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.fs: LegacyFileSystem Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.info_log: 0x563f476dc340 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_file_opening_threads: 16 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.statistics: (nil) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.use_fsync: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_log_file_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_manifest_file_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.log_file_time_to_roll: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.keep_log_file_num: 1000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.recycle_log_file_num: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_fallocate: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_mmap_reads: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_mmap_writes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.use_direct_reads: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.create_missing_column_families: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.db_log_dir: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.wal_dir: db.wal Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_cache_numshardbits: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.WAL_ttl_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.WAL_size_limit_MB: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.manifest_preallocation_size: 4194304 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.is_fd_close_on_exec: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.advise_random_on_open: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.db_write_buffer_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_manager: 0x563f46744140 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.access_hint_on_compaction_start: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.random_access_max_buffer_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.use_adaptive_mutex: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.rate_limiter: (nil) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.wal_recovery_mode: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_thread_tracking: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_pipelined_write: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.unordered_write: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_concurrent_memtable_write: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_thread_max_yield_usec: 100 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_thread_slow_yield_usec: 3 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.row_cache: None Nov 
23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.wal_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.avoid_flush_during_recovery: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_ingest_behind: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.two_write_queues: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.manual_wal_flush: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.wal_compression: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.atomic_flush: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.persist_stats_to_disk: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_dbid_to_manifest: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.log_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.file_checksum_gen_factory: Unknown Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.best_efforts_recovery: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_data_in_errors: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.db_host_id: __hostname__ Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enforce_single_del_contracts: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_background_jobs: 4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_background_compactions: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_subcompactions: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.avoid_flush_during_shutdown: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.writable_file_max_buffer_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.delayed_write_rate : 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_total_wal_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.stats_dump_period_sec: 600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.stats_persist_period_sec: 600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.stats_history_buffer_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_open_files: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bytes_per_sync: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.wal_bytes_per_sync: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.strict_bytes_per_sync: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_readahead_size: 2097152 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_background_flushes: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Compression algorithms supported: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kZSTD supported: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kXpressCompression supported: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kBZip2Compression supported: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
#011kLZ4Compression supported: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kZlibCompression supported: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kLZ4HCCompression supported: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kSnappyCompression supported: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Fast CRC32 supported: Supported on x86 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: DMutex implementation: pthread_mutex_t Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f476dc500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46732850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 
7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: 
rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost 
ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f476dc500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46732850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 
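Two of the sizing figures in the options dump above can be tied back to the bluestore cache line earlier in this startup: the BinnedLRUCache capacity of 483183820 bytes is the kv share (0.45) of the 1 GiB cache_size, and with Options.max_bytes_for_level_base = 1 GiB, Options.max_bytes_for_level_multiplier = 8, and level_compaction_dynamic_level_bytes disabled, the usual RocksDB level targets work out to 1 GiB for L1, 8 GiB for L2, and so on. The short Python sketch below re-does that arithmetic as a reading aid; the constants are copied from the logged options and the level formula is standard RocksDB sizing, not anything taken from the Ceph source.

    # Reading aid only; constants copied from the logged options above.
    cache_size = 1073741824              # bluestore _set_cache_sizes cache_size
    kv_ratio = 0.45                      # the "kv 0.45" share handed to RocksDB
    print(int(cache_size * kv_ratio))    # 483183820 -> the logged block_cache capacity

    base = 1073741824                    # Options.max_bytes_for_level_base
    multiplier = 8                       # Options.max_bytes_for_level_multiplier
    # With level_compaction_dynamic_level_bytes: 0, level N target = base * multiplier**(N-1)
    for level in range(1, 7):            # num_levels is 7; L0 is governed by file-count triggers
        print(level, base * multiplier ** (level - 1))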
Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f476dc500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46732850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost 
ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 
localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f476dc500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46732850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost 
ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 
0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f476dc500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46732850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: 
rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 
localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f476dc500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46732850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 
localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: 
false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f476dc500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46732850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 
localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: 
rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f476dc720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f467322d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost 
ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f476dc720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f467322d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: 
rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost 
ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: 
rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f476dc720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f467322d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost 
ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 
02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost 
ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a2fd661e-a2f2-46cc-8536-b8a226589f80 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721539610, "job": 1, "event": "recovery_started", "wal_files": [31]} 
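The column-family dumps above all repeat the same BlueStore tuning: 16 MiB write buffers (write_buffer_size 16777216), up to 64 memtables with 6 merged per flush, LZ4 compression, level-style compaction with an 8-file L0 trigger, a 1 GiB level-1 budget (max_bytes_for_level_base 1073741824) and an 8x per-level multiplier, 64 MiB target SST files. As a rough, non-authoritative sketch of what those logged values correspond to in the stock RocksDB C++ API (BlueStore itself drives RocksDB through its own BlueFS-backed environment, and the "/tmp/example-db" path below is purely hypothetical), opening a database with the same column families listed in the manifest-recovery lines would look something like:

    #include <rocksdb/db.h>
    #include <rocksdb/options.h>
    #include <string>
    #include <vector>

    int main() {
      // Per-column-family tuning mirroring the values logged by ceph-osd above.
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 * 1024 * 1024;              // Options.write_buffer_size: 16777216
      cf.max_write_buffer_number = 64;                      // Options.max_write_buffer_number: 64
      cf.min_write_buffer_number_to_merge = 6;              // Options.min_write_buffer_number_to_merge: 6
      cf.compression = rocksdb::kLZ4Compression;            // Options.compression: LZ4
      cf.compaction_style = rocksdb::kCompactionStyleLevel; // Options.compaction_style
      cf.level0_file_num_compaction_trigger = 8;            // Options.level0_file_num_compaction_trigger: 8
      cf.max_bytes_for_level_base = 1ull << 30;             // Options.max_bytes_for_level_base: 1073741824
      cf.max_bytes_for_level_multiplier = 8;                // Options.max_bytes_for_level_multiplier: 8
      cf.target_file_size_base = 64ull << 20;               // Options.target_file_size_base: 67108864

      // Column families recovered from MANIFEST-000032 in the log (IDs 0-11).
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
      for (const std::string name : {"default", "m-0", "m-1", "m-2", "p-0", "p-1",
                                     "p-2", "O-0", "O-1", "O-2", "L", "P"}) {
        cfs.emplace_back(name, cf);
      }

      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = false;  // open an existing database only
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      // Hypothetical plain-directory path; an OSD never opens its DB this way.
      rocksdb::Status s = rocksdb::DB::Open(db_opts, "/tmp/example-db", cfs, &handles, &db);
      if (!s.ok()) return 1;
      for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
      delete db;
      return 0;
    }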
Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721539879, "job": 1, "event": "recovery_finished"} Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025 Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240 Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000 Nov 23 02:42:01 localhost ceph-osd[31569]: freelist init Nov 23 02:42:01 localhost ceph-osd[31569]: freelist _read_cfg Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Nov 23 02:42:01 localhost ceph-osd[31569]: bluefs umount Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) close Nov 23 02:42:01 localhost podman[31911]: Nov 23 02:42:01 localhost podman[31911]: 2025-11-23 07:42:01.6852436 +0000 UTC m=+0.072787193 container create defeccaf46b379b2a385ba4c1a2f95cc597f0c30b83cb8b05ac19db8350aaad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate-test, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, release=553, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55) Nov 23 02:42:01 localhost systemd[1]: Started libpod-conmon-defeccaf46b379b2a385ba4c1a2f95cc597f0c30b83cb8b05ac19db8350aaad7.scope. Nov 23 02:42:01 localhost systemd[1]: Started libcrun container. 
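For reference, the hexadecimal sizes in the _open_super_meta and _init_alloc lines above decode as follows: min_alloc_size 0x1000 is 4096 bytes (4 KiB), the allocator block size 0x1000 is likewise 4 KiB, the capacity 0x1bfc00000 is 7,511,998,464 bytes (about 7.0 GiB, the same figure the bdev open line reports further down), and free 0x1bfbfd000 sits 12 KiB below that capacity. A trivial stand-alone check of the arithmetic, with the constants copied from the log:

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Values copied from the _open_super_meta / _init_alloc log lines.
      const uint64_t capacity   = 0x1bfc00000ULL;  // bluestore capacity
      const uint64_t free_bytes = 0x1bfbfd000ULL;  // free space at allocator init
      const uint64_t min_alloc  = 0x1000;          // min_alloc_size
      std::printf("capacity  : %llu bytes (%.1f GiB)\n",
                  (unsigned long long)capacity,
                  capacity / double(1ULL << 30));
      std::printf("free      : %llu bytes (capacity minus %llu KiB)\n",
                  (unsigned long long)free_bytes,
                  (unsigned long long)((capacity - free_bytes) >> 10));
      std::printf("min_alloc : %llu bytes (%llu KiB)\n",
                  (unsigned long long)min_alloc,
                  (unsigned long long)(min_alloc >> 10));
      // Prints 7511998464 bytes (7.0 GiB) for capacity and 4096 bytes (4 KiB)
      // for min_alloc, matching the figures in the surrounding log entries.
      return 0;
    }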
Nov 23 02:42:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdaf0b82bddaa087b70f4c3e86c0fbb3da30359fd3f783b4f5ac221ea8cc57c7/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:01 localhost podman[31911]: 2025-11-23 07:42:01.656244461 +0000 UTC m=+0.043788074 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:42:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdaf0b82bddaa087b70f4c3e86c0fbb3da30359fd3f783b4f5ac221ea8cc57c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdaf0b82bddaa087b70f4c3e86c0fbb3da30359fd3f783b4f5ac221ea8cc57c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdaf0b82bddaa087b70f4c3e86c0fbb3da30359fd3f783b4f5ac221ea8cc57c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdaf0b82bddaa087b70f4c3e86c0fbb3da30359fd3f783b4f5ac221ea8cc57c7/merged/var/lib/ceph/osd/ceph-5 supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:01 localhost podman[31911]: 2025-11-23 07:42:01.808161213 +0000 UTC m=+0.195704796 container init defeccaf46b379b2a385ba4c1a2f95cc597f0c30b83cb8b05ac19db8350aaad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate-test, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, io.buildah.version=1.33.12, GIT_CLEAN=True, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument Nov 23 02:42:01 localhost ceph-osd[31569]: bdev(0x563f4675b180 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:01 localhost ceph-osd[31569]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB Nov 23 02:42:01 localhost ceph-osd[31569]: bluefs mount Nov 23 02:42:01 localhost ceph-osd[31569]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Nov 23 02:42:01 localhost ceph-osd[31569]: bluefs mount shared_bdev_used = 4718592 Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Nov 23 02:42:01 localhost podman[31911]: 2025-11-23 07:42:01.820537112 +0000 UTC m=+0.208080695 container start defeccaf46b379b2a385ba4c1a2f95cc597f0c30b83cb8b05ac19db8350aaad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate-test, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, ceph=True, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, RELEASE=main, io.openshift.expose-services=) Nov 23 02:42:01 localhost podman[31911]: 2025-11-23 07:42:01.821962716 +0000 UTC m=+0.209506339 container attach defeccaf46b379b2a385ba4c1a2f95cc597f0c30b83cb8b05ac19db8350aaad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate-test, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, version=7, name=rhceph, maintainer=Guillaume Abrioux , 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: RocksDB version: 7.9.2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Git sha 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Compile date 2025-09-23 00:00:00 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: DB SUMMARY Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: DB Session ID: VLRRVW6JOW0TYKEO4B8I Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: CURRENT file: CURRENT Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: IDENTITY file: IDENTITY Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.error_if_exists: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.create_if_missing: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.flush_verify_memtable_count: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.env: 0x563f469efea0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.fs: LegacyFileSystem Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.info_log: 0x563f47778640 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_file_opening_threads: 16 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.statistics: (nil) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.use_fsync: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_log_file_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_manifest_file_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.log_file_time_to_roll: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.keep_log_file_num: 1000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.recycle_log_file_num: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_fallocate: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_mmap_reads: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_mmap_writes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.use_direct_reads: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.create_missing_column_families: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.db_log_dir: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.wal_dir: db.wal Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_cache_numshardbits: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.WAL_ttl_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.WAL_size_limit_MB: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_batch_group_size_bytes: 
1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.manifest_preallocation_size: 4194304 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.is_fd_close_on_exec: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.advise_random_on_open: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.db_write_buffer_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_manager: 0x563f46745540 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.access_hint_on_compaction_start: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.random_access_max_buffer_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.use_adaptive_mutex: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.rate_limiter: (nil) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.wal_recovery_mode: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_thread_tracking: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_pipelined_write: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.unordered_write: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_concurrent_memtable_write: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_thread_max_yield_usec: 100 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_thread_slow_yield_usec: 3 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.row_cache: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.wal_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.avoid_flush_during_recovery: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_ingest_behind: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.two_write_queues: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.manual_wal_flush: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.wal_compression: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.atomic_flush: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.persist_stats_to_disk: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_dbid_to_manifest: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.log_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.file_checksum_gen_factory: Unknown Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.best_efforts_recovery: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.allow_data_in_errors: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.db_host_id: __hostname__ Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enforce_single_del_contracts: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_background_jobs: 4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_background_compactions: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_subcompactions: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.avoid_flush_during_shutdown: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.writable_file_max_buffer_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.delayed_write_rate : 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_total_wal_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.stats_dump_period_sec: 600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.stats_persist_period_sec: 600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.stats_history_buffer_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_open_files: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bytes_per_sync: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.wal_bytes_per_sync: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.strict_bytes_per_sync: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_readahead_size: 2097152 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_background_flushes: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Compression algorithms supported: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kZSTD supported: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kXpressCompression supported: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kBZip2Compression supported: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kLZ4Compression supported: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kZlibCompression supported: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kLZ4HCCompression supported: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: #011kSnappyCompression supported: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Fast CRC32 supported: Supported on x86 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: DMutex implementation: pthread_mutex_t Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f477793e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 
index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46733610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: Nov 23 02:42:01 localhost 
ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f477793e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 
data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46733610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost 
ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio 
= 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f477793e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 
index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46733610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 
localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 
Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f477793e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 
data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46733610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector 
(Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f477793e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 
pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46733610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 
localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f477793e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 
pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46733610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 
localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f477793e0)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f46733610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 
localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory 
(0x563f47778b60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f467322d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 
localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: flush_block_policy_factory: 
FlushBlockBySizePolicyFactory (0x563f47778b60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f467322d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost 
ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 
1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.merge_operator: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: table_factory options: 
flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f47778b60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563f467322d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression: LZ4 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.num_levels: 7 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 
02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Nov 
23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a2fd661e-a2f2-46cc-8536-b8a226589f80 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721839675, "job": 1, "event": "recovery_started", "wal_files": [31]} Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721845786, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883721, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2fd661e-a2f2-46cc-8536-b8a226589f80", "db_session_id": "VLRRVW6JOW0TYKEO4B8I", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721850013, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, 
"table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883721, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2fd661e-a2f2-46cc-8536-b8a226589f80", "db_session_id": "VLRRVW6JOW0TYKEO4B8I", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721853619, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883721, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2fd661e-a2f2-46cc-8536-b8a226589f80", "db_session_id": "VLRRVW6JOW0TYKEO4B8I", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721857092, "job": 1, "event": "recovery_finished"} Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563f47780380 Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: DB pointer 0x563f47635a00 Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options 
compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4 Nov 23 02:42:01 localhost ceph-osd[31569]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 02:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f46733610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 
1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f46733610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space 
amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f46733610#2 capacity: 460.80 MB usag Nov 23 02:42:01 localhost ceph-osd[31569]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Nov 23 02:42:01 localhost ceph-osd[31569]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Nov 23 02:42:01 localhost ceph-osd[31569]: _get_class not permitted to load lua Nov 23 02:42:01 localhost ceph-osd[31569]: _get_class not permitted to load sdk Nov 23 02:42:01 localhost ceph-osd[31569]: _get_class not permitted to load test_remote_reads Nov 23 02:42:01 localhost ceph-osd[31569]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients Nov 23 02:42:01 localhost ceph-osd[31569]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Nov 23 02:42:01 localhost ceph-osd[31569]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds Nov 23 02:42:01 localhost ceph-osd[31569]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Nov 23 02:42:01 localhost ceph-osd[31569]: osd.2 0 load_pgs Nov 23 02:42:01 localhost ceph-osd[31569]: osd.2 0 load_pgs opened 0 pgs Nov 23 02:42:01 localhost ceph-osd[31569]: osd.2 0 log_to_monitors true Nov 23 02:42:01 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2[31565]: 2025-11-23T07:42:01.893+0000 7fc4bdea8a80 -1 osd.2 0 log_to_monitors true Nov 23 02:42:02 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate-test[31927]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Nov 23 02:42:02 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate-test[31927]: [--no-systemd] [--no-tmpfs] Nov 23 02:42:02 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate-test[31927]: ceph-volume activate: error: unrecognized arguments: --bad-option Nov 23 02:42:02 localhost systemd[1]: libpod-defeccaf46b379b2a385ba4c1a2f95cc597f0c30b83cb8b05ac19db8350aaad7.scope: Deactivated successfully. 
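The dump above records the effective RocksDB configuration for osd.2's BlueStore DB, and the `_open_db opened rocksdb path db options compression=kLZ4Compression,...` line shows the option string BlueStore passed when opening it. A minimal sketch, assuming a cephadm-managed cluster with a working admin keyring, of how that string can be inspected or overridden through the bluestore_rocksdb_options setting; the value in the `config set` line is only an illustrative fragment copied from the logged string, and a change is only picked up when the OSD next reopens its DB, i.e. after a restart.

# Show the option string the cluster configuration holds for this OSD (osd.2 as in the log):
ceph config get osd.2 bluestore_rocksdb_options
# Or ask the running daemon itself via its admin socket on the OSD host:
ceph daemon osd.2 config show | grep bluestore_rocksdb_options
# Override cluster-wide for all OSDs (illustrative fragment of the logged string only):
ceph config set osd bluestore_rocksdb_options \
    "compression=kLZ4Compression,write_buffer_size=16777216,max_write_buffer_number=64,min_write_buffer_number_to_merge=6"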
Nov 23 02:42:02 localhost podman[31911]: 2025-11-23 07:42:02.048053684 +0000 UTC m=+0.435597267 container died defeccaf46b379b2a385ba4c1a2f95cc597f0c30b83cb8b05ac19db8350aaad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate-test, CEPH_POINT_RELEASE=, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553) Nov 23 02:42:02 localhost podman[32147]: 2025-11-23 07:42:02.108615962 +0000 UTC m=+0.054418807 container remove defeccaf46b379b2a385ba4c1a2f95cc597f0c30b83cb8b05ac19db8350aaad7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate-test, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, build-date=2025-09-24T08:57:55, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container) Nov 23 02:42:02 localhost systemd[1]: libpod-conmon-defeccaf46b379b2a385ba4c1a2f95cc597f0c30b83cb8b05ac19db8350aaad7.scope: Deactivated successfully. Nov 23 02:42:02 localhost systemd[1]: Reloading. Nov 23 02:42:02 localhost systemd-rc-local-generator[32202]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:42:02 localhost systemd-sysv-generator[32206]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:42:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
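The systemd reload above repeats three fixable warnings: /etc/rc.d/rc.local is skipped because it is not executable, the SysV 'network' script has no native unit, and line 24 of insights-client-boot.service still uses the deprecated MemoryLimit= directive. A small sketch, assuming root on the host, of how to inspect exactly what the warnings refer to; the chmod is only appropriate if rc.local is actually meant to run.

# See the packaged unit the MemoryLimit= deprecation points at (the real fix is the package switching to MemoryMax=):
systemctl cat insights-client-boot.service
# Re-run the same sanity checks without a full reload:
systemd-analyze verify /usr/lib/systemd/system/insights-client-boot.service
# rc.local is only executed when it is marked executable:
chmod +x /etc/rc.d/rc.local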
Nov 23 02:42:02 localhost systemd[1]: var-lib-containers-storage-overlay-bdaf0b82bddaa087b70f4c3e86c0fbb3da30359fd3f783b4f5ac221ea8cc57c7-merged.mount: Deactivated successfully. Nov 23 02:42:02 localhost systemd[1]: Reloading. Nov 23 02:42:02 localhost systemd-rc-local-generator[32241]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:42:02 localhost systemd-sysv-generator[32244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:42:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:42:02 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Nov 23 02:42:02 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Nov 23 02:42:02 localhost systemd[1]: Starting Ceph osd.5 for 46550e70-79cb-5f55-bf6d-1204b97e083b... Nov 23 02:42:03 localhost podman[32309]: Nov 23 02:42:03 localhost podman[32309]: 2025-11-23 07:42:03.26107292 +0000 UTC m=+0.078535373 container create bcd4e15bfca2374ab7960fe0bc52c3eb31ebcd52a9439ea834606dba4a96f366 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.33.12, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-type=git, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7) Nov 23 02:42:03 localhost systemd[1]: tmp-crun.1qW3K3.mount: Deactivated successfully. Nov 23 02:42:03 localhost systemd[1]: Started libcrun container. 
Nov 23 02:42:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f00a8442782aceaf0d4c554c28e9a3d230855b48b24c92a33159f46a89af46bc/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:03 localhost podman[32309]: 2025-11-23 07:42:03.231114251 +0000 UTC m=+0.048576714 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:42:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f00a8442782aceaf0d4c554c28e9a3d230855b48b24c92a33159f46a89af46bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f00a8442782aceaf0d4c554c28e9a3d230855b48b24c92a33159f46a89af46bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f00a8442782aceaf0d4c554c28e9a3d230855b48b24c92a33159f46a89af46bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f00a8442782aceaf0d4c554c28e9a3d230855b48b24c92a33159f46a89af46bc/merged/var/lib/ceph/osd/ceph-5 supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:03 localhost podman[32309]: 2025-11-23 07:42:03.382470416 +0000 UTC m=+0.199932879 container init bcd4e15bfca2374ab7960fe0bc52c3eb31ebcd52a9439ea834606dba4a96f366 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=) Nov 23 02:42:03 localhost podman[32309]: 2025-11-23 07:42:03.392344396 +0000 UTC m=+0.209806849 container start bcd4e15bfca2374ab7960fe0bc52c3eb31ebcd52a9439ea834606dba4a96f366 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-09-24T08:57:55, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, 
ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7) Nov 23 02:42:03 localhost podman[32309]: 2025-11-23 07:42:03.392588043 +0000 UTC m=+0.210050496 container attach bcd4e15bfca2374ab7960fe0bc52c3eb31ebcd52a9439ea834606dba4a96f366 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, name=rhceph, version=7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 02:42:03 localhost ceph-osd[31569]: osd.2 0 done with init, starting boot process Nov 23 02:42:03 localhost ceph-osd[31569]: osd.2 0 start_boot Nov 23 02:42:03 localhost ceph-osd[31569]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1 Nov 23 02:42:03 localhost ceph-osd[31569]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Nov 23 02:42:03 localhost ceph-osd[31569]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Nov 23 02:42:03 localhost ceph-osd[31569]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Nov 23 02:42:03 localhost ceph-osd[31569]: osd.2 0 bench count 12288000 bsize 4 KiB Nov 23 02:42:04 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate[32324]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Nov 23 02:42:04 localhost bash[32309]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Nov 23 02:42:04 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate[32324]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-5 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Nov 23 02:42:04 localhost bash[32309]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-5 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Nov 23 02:42:04 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate[32324]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Nov 23 02:42:04 localhost bash[32309]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Nov 23 02:42:04 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate[32324]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Nov 23 02:42:04 
localhost bash[32309]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Nov 23 02:42:04 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate[32324]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-5/block Nov 23 02:42:04 localhost bash[32309]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-5/block Nov 23 02:42:04 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate[32324]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Nov 23 02:42:04 localhost bash[32309]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Nov 23 02:42:04 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate[32324]: --> ceph-volume raw activate successful for osd ID: 5 Nov 23 02:42:04 localhost bash[32309]: --> ceph-volume raw activate successful for osd ID: 5 Nov 23 02:42:04 localhost systemd[1]: libpod-bcd4e15bfca2374ab7960fe0bc52c3eb31ebcd52a9439ea834606dba4a96f366.scope: Deactivated successfully. Nov 23 02:42:04 localhost podman[32309]: 2025-11-23 07:42:04.148323714 +0000 UTC m=+0.965786187 container died bcd4e15bfca2374ab7960fe0bc52c3eb31ebcd52a9439ea834606dba4a96f366 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, version=7, io.buildah.version=1.33.12, GIT_CLEAN=True, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc.) Nov 23 02:42:04 localhost systemd[1]: var-lib-containers-storage-overlay-f00a8442782aceaf0d4c554c28e9a3d230855b48b24c92a33159f46a89af46bc-merged.mount: Deactivated successfully. 
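The activate container above performs a fixed sequence that the log records verbatim for osd.5: fix ownership of the OSD directory, prime it from the BlueStore label on the LV, re-own the device nodes, and symlink the LV in as the OSD's block device. Condensed below purely for readability; these are the logged commands, run by `ceph-volume raw activate` inside the rhceph container, not something to repeat by hand on a cephadm-managed host.

chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-5 \
    --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
chown -R ceph:ceph /dev/dm-1
ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-5/block
chown -R ceph:ceph /var/lib/ceph/osd/ceph-5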
Nov 23 02:42:04 localhost podman[32455]: 2025-11-23 07:42:04.261659987 +0000 UTC m=+0.098919462 container remove bcd4e15bfca2374ab7960fe0bc52c3eb31ebcd52a9439ea834606dba4a96f366 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5-activate, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, release=553, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, version=7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , distribution-scope=public, ceph=True) Nov 23 02:42:04 localhost podman[32516]: Nov 23 02:42:04 localhost podman[32516]: 2025-11-23 07:42:04.60817266 +0000 UTC m=+0.086095360 container create c5e518d273ae3f1b47d152dd12219eb38ff9bde62626a10e1148b91685bcdadc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 02:42:04 localhost podman[32516]: 2025-11-23 07:42:04.571116678 +0000 UTC m=+0.049039428 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:42:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474ef2966422002da0687eaae11e3690699ca2c7b7121e2d0bcb559af971ea2e/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474ef2966422002da0687eaae11e3690699ca2c7b7121e2d0bcb559af971ea2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474ef2966422002da0687eaae11e3690699ca2c7b7121e2d0bcb559af971ea2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:04 localhost kernel: xfs 
filesystem being remounted at /var/lib/containers/storage/overlay/474ef2966422002da0687eaae11e3690699ca2c7b7121e2d0bcb559af971ea2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474ef2966422002da0687eaae11e3690699ca2c7b7121e2d0bcb559af971ea2e/merged/var/lib/ceph/osd/ceph-5 supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:04 localhost podman[32516]: 2025-11-23 07:42:04.738233757 +0000 UTC m=+0.216156457 container init c5e518d273ae3f1b47d152dd12219eb38ff9bde62626a10e1148b91685bcdadc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.33.12, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True) Nov 23 02:42:04 localhost podman[32516]: 2025-11-23 07:42:04.756701136 +0000 UTC m=+0.234623846 container start c5e518d273ae3f1b47d152dd12219eb38ff9bde62626a10e1148b91685bcdadc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, release=553, version=7, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vendor=Red Hat, Inc.) Nov 23 02:42:04 localhost bash[32516]: c5e518d273ae3f1b47d152dd12219eb38ff9bde62626a10e1148b91685bcdadc Nov 23 02:42:04 localhost systemd[1]: Started Ceph osd.5 for 46550e70-79cb-5f55-bf6d-1204b97e083b. 
Nov 23 02:42:04 localhost ceph-osd[32534]: set uid:gid to 167:167 (ceph:ceph) Nov 23 02:42:04 localhost ceph-osd[32534]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Nov 23 02:42:04 localhost ceph-osd[32534]: pidfile_write: ignore empty --pid-file Nov 23 02:42:04 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Nov 23 02:42:04 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Nov 23 02:42:04 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:04 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Nov 23 02:42:04 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Nov 23 02:42:04 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Nov 23 02:42:04 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:04 localhost ceph-osd[32534]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-5/block size 7.0 GiB Nov 23 02:42:04 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) close Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) close Nov 23 02:42:05 localhost systemd[1]: tmp-crun.Ja5yqr.mount: Deactivated successfully. 
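The `_set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06` lines above describe how osd.5 splits its 1 GiB BlueStore cache on this rotational device. A minimal sketch for cross-checking those numbers against the daemon's configuration; the option names are the standard BlueStore cache settings (an assumption, since the log does not name them), and the 0.06 data share is presumably just what remains after the meta, kv and kv-onode ratios are reserved.

ceph config show osd.5 bluestore_cache_size_hdd      # logged total: 1073741824 bytes (1 GiB)
ceph config show osd.5 bluestore_cache_meta_ratio    # logged meta share: 0.45
ceph config show osd.5 bluestore_cache_kv_ratio      # logged kv share: 0.45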
Nov 23 02:42:05 localhost ceph-osd[32534]: starting osd.5 osd_data /var/lib/ceph/osd/ceph-5 /var/lib/ceph/osd/ceph-5/journal Nov 23 02:42:05 localhost ceph-osd[32534]: load: jerasure load: lrc Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:05 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) close Nov 23 02:42:05 localhost podman[32624]: Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:05 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) close Nov 23 02:42:05 localhost podman[32624]: 2025-11-23 07:42:05.631018125 +0000 UTC m=+0.084836211 container create 0229516ea4f4040666c82cf70d126d90a7b9bebc5b9e4b9020210d6019df1c65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_goldwasser, maintainer=Guillaume Abrioux , distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 02:42:05 localhost systemd[1]: Started libpod-conmon-0229516ea4f4040666c82cf70d126d90a7b9bebc5b9e4b9020210d6019df1c65.scope. Nov 23 02:42:05 localhost podman[32624]: 2025-11-23 07:42:05.600278691 +0000 UTC m=+0.054096837 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:42:05 localhost systemd[1]: Started libcrun container. 
Nov 23 02:42:05 localhost podman[32624]: 2025-11-23 07:42:05.714896434 +0000 UTC m=+0.168714520 container init 0229516ea4f4040666c82cf70d126d90a7b9bebc5b9e4b9020210d6019df1c65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_goldwasser, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, vendor=Red Hat, Inc., version=7, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 02:42:05 localhost podman[32624]: 2025-11-23 07:42:05.722304946 +0000 UTC m=+0.176123002 container start 0229516ea4f4040666c82cf70d126d90a7b9bebc5b9e4b9020210d6019df1c65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_goldwasser, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, version=7, ceph=True, description=Red Hat Ceph Storage 7) Nov 23 02:42:05 localhost podman[32624]: 2025-11-23 07:42:05.722637436 +0000 UTC m=+0.176455572 container attach 0229516ea4f4040666c82cf70d126d90a7b9bebc5b9e4b9020210d6019df1c65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_goldwasser, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, 
ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=553) Nov 23 02:42:05 localhost stoic_goldwasser[32641]: 167 167 Nov 23 02:42:05 localhost systemd[1]: libpod-0229516ea4f4040666c82cf70d126d90a7b9bebc5b9e4b9020210d6019df1c65.scope: Deactivated successfully. Nov 23 02:42:05 localhost podman[32624]: 2025-11-23 07:42:05.727500079 +0000 UTC m=+0.181318155 container died 0229516ea4f4040666c82cf70d126d90a7b9bebc5b9e4b9020210d6019df1c65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_goldwasser, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main) Nov 23 02:42:05 localhost ceph-osd[31569]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.506 iops: 7809.661 elapsed_sec: 0.384 Nov 23 02:42:05 localhost ceph-osd[31569]: log_channel(cluster) log [WRN] : OSD bench result of 7809.660822 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
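The warning above spells out the remedy: the 7809.66 IOPS bench result falls outside the 50-500 IOPS sanity window, so osd.2 keeps the default 315 IOPS capacity, and the operator is expected to measure the device with an external benchmark and set the override. A sketch along those lines; the fio job, the placeholder device path and the final IOPS figure are illustrative assumptions, not values from this log, and a raw write benchmark must only be run against a device whose data may be destroyed.

# Measure the device backing osd.2 (placeholder path; substitute the real block device):
fio --name=osd2-bench --filename=/dev/<osd.2-block-device> --direct=1 --ioengine=libaio \
    --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based
# Pin the mclock capacity for this rotational OSD to the measured value (placeholder number):
ceph config set osd.2 osd_mclock_max_capacity_iops_hdd 320
ceph config show osd.2 osd_mclock_max_capacity_iops_hdd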
Nov 23 02:42:05 localhost ceph-osd[31569]: osd.2 0 waiting for initial osdmap Nov 23 02:42:05 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2[31565]: 2025-11-23T07:42:05.817+0000 7fc4ba63c640 -1 osd.2 0 waiting for initial osdmap Nov 23 02:42:05 localhost podman[32646]: 2025-11-23 07:42:05.825750709 +0000 UTC m=+0.087971538 container remove 0229516ea4f4040666c82cf70d126d90a7b9bebc5b9e4b9020210d6019df1c65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_goldwasser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, version=7, ceph=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 02:42:05 localhost ceph-osd[31569]: osd.2 12 crush map has features 288514050185494528, adjusting msgr requires for clients Nov 23 02:42:05 localhost ceph-osd[31569]: osd.2 12 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons Nov 23 02:42:05 localhost ceph-osd[31569]: osd.2 12 crush map has features 3314932999778484224, adjusting msgr requires for osds Nov 23 02:42:05 localhost ceph-osd[31569]: osd.2 12 check_osdmap_features require_osd_release unknown -> reef Nov 23 02:42:05 localhost systemd[1]: libpod-conmon-0229516ea4f4040666c82cf70d126d90a7b9bebc5b9e4b9020210d6019df1c65.scope: Deactivated successfully. 
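Once osd.2 receives its first real osdmap (epoch 12) it re-adjusts the messenger feature requirements and notes `require_osd_release unknown -> reef`. A short sketch, assuming admin access, for confirming the same release floor and feature bits from the cluster side:

# Release floor recorded in the osdmap:
ceph osd dump | grep require_osd_release
# Feature bits reported by monitors, OSDs and connected clients:
ceph features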
Nov 23 02:42:05 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-2[31565]: 2025-11-23T07:42:05.837+0000 7fc4b5451640 -1 osd.2 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 02:42:05 localhost ceph-osd[31569]: osd.2 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 02:42:05 localhost ceph-osd[31569]: osd.2 12 set_numa_affinity not setting numa affinity
Nov 23 02:42:05 localhost ceph-osd[31569]: osd.2 12 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial
Nov 23 02:42:05 localhost ceph-osd[32534]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 23 02:42:05 localhost ceph-osd[32534]: osd.5:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block
Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument
Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bee00 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:05 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block
Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument
Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:05 localhost ceph-osd[32534]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-5/block size 7.0 GiB
Nov 23 02:42:05 localhost ceph-osd[32534]: bluefs mount
Nov 23 02:42:05 localhost ceph-osd[32534]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 23 02:42:05 localhost ceph-osd[32534]: bluefs mount shared_bdev_used = 0
Nov 23 02:42:05 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: RocksDB version: 7.9.2
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Git sha 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Compile date 2025-09-23 00:00:00
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: DB SUMMARY
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: DB Session ID: HMCUN0I6UA0JQA6JBY2T
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: CURRENT file: CURRENT
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: IDENTITY file: IDENTITY
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.error_if_exists: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.create_if_missing: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.paranoid_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.flush_verify_memtable_count: 1
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.env: 0x56035ba52cb0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.fs: LegacyFileSystem
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.info_log: 0x56035c742340
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_file_opening_threads: 16
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.statistics: (nil)
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.use_fsync: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_log_file_size: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_manifest_file_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.log_file_time_to_roll: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.keep_log_file_num: 1000
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.recycle_log_file_num: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.allow_fallocate: 1
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.allow_mmap_reads: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.allow_mmap_writes: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.use_direct_reads: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.create_missing_column_families: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.db_log_dir:
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.wal_dir: db.wal
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_cache_numshardbits: 6
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.WAL_ttl_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.WAL_size_limit_MB: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.manifest_preallocation_size: 4194304
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.is_fd_close_on_exec: 1
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.advise_random_on_open: 1
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.db_write_buffer_size: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_manager: 0x56035b7a8140
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.access_hint_on_compaction_start: 1
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.random_access_max_buffer_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.use_adaptive_mutex: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.rate_limiter: (nil)
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.wal_recovery_mode: 2
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_thread_tracking: 0
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_pipelined_write: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.unordered_write: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.allow_concurrent_memtable_write: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_thread_max_yield_usec: 100 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_thread_slow_yield_usec: 3 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.row_cache: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.wal_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.avoid_flush_during_recovery: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.allow_ingest_behind: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.two_write_queues: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.manual_wal_flush: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.wal_compression: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.atomic_flush: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.persist_stats_to_disk: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_dbid_to_manifest: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.log_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.file_checksum_gen_factory: Unknown Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.best_efforts_recovery: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.allow_data_in_errors: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.db_host_id: __hostname__ Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enforce_single_del_contracts: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_background_jobs: 4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_background_compactions: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_subcompactions: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.avoid_flush_during_shutdown: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.writable_file_max_buffer_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.delayed_write_rate : 16777216 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_total_wal_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.stats_dump_period_sec: 600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.stats_persist_period_sec: 600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.stats_history_buffer_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_open_files: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bytes_per_sync: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.wal_bytes_per_sync: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.strict_bytes_per_sync: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: 
Options.compaction_readahead_size: 2097152 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_background_flushes: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Compression algorithms supported: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: #011kZSTD supported: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: #011kXpressCompression supported: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: #011kBZip2Compression supported: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: #011kLZ4Compression supported: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: #011kZlibCompression supported: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: #011kLZ4HCCompression supported: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: #011kSnappyCompression supported: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Fast CRC32 supported: Supported on x86 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: DMutex implementation: pthread_mutex_t Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c742500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b796850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 
23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:05 
localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c742500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b796850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 
23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:05 localhost 
ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 
0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c742500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b796850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: 
Options.write_buffer_size: 16777216 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 
Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: 
Options.report_bg_io_stats: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c742500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b796850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: 
rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 
23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c742500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b796850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 
2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: 
rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: 
Options.force_consistency_checks: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c742500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b796850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 
8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 
23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: 
rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c742500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b796850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: 
Options.paranoid_file_checks: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c742720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b7962d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 
max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 
02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:05 localhost 
ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c742720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b7962d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 
block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: 
Options.optimize_filters_for_hits: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c742720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b7962d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 
format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:05 localhost 
ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:05 localhost 
ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: 
rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1c4a0c0f-45e9-4f3b-9cc3-62b142fe6c6c Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883725926055, "job": 1, "event": "recovery_started", "wal_files": [31]} Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883725926353, "job": 1, "event": "recovery_finished"} Nov 23 02:42:05 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Nov 23 02:42:05 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta old nid_max 1025 Nov 23 02:42:05 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta old blobid_max 10240 Nov 23 02:42:05 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Nov 23 02:42:05 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _open_super_meta min_alloc_size 0x1000 Nov 23 02:42:05 localhost ceph-osd[32534]: freelist init Nov 23 02:42:05 localhost ceph-osd[32534]: freelist _read_cfg Nov 23 02:42:05 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Nov 23 02:42:05 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Nov 23 02:42:05 localhost ceph-osd[32534]: bluefs umount Nov 23 02:42:05 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) close Nov 23 02:42:06 localhost podman[32863]: Nov 23 02:42:06 localhost podman[32863]: 2025-11-23 07:42:06.041775591 +0000 UTC m=+0.074576328 container create 5c6074e2ff4dbaf9a62a612d81165247c8c1cffee2fa71028e5984fe6930b1fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_rubin, version=7, architecture=x86_64, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , release=553, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git) Nov 23 02:42:06 localhost systemd[1]: Started libpod-conmon-5c6074e2ff4dbaf9a62a612d81165247c8c1cffee2fa71028e5984fe6930b1fe.scope. Nov 23 02:42:06 localhost systemd[1]: Started libcrun container. Nov 23 02:42:06 localhost podman[32863]: 2025-11-23 07:42:06.013166264 +0000 UTC m=+0.045966981 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:42:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e49e4a06a5fc51d39a2b4a1111d6bd1284cb4024113e4a09159612f274fcb8/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e49e4a06a5fc51d39a2b4a1111d6bd1284cb4024113e4a09159612f274fcb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e49e4a06a5fc51d39a2b4a1111d6bd1284cb4024113e4a09159612f274fcb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:06 localhost podman[32863]: 2025-11-23 07:42:06.147433063 +0000 UTC m=+0.180233730 container init 5c6074e2ff4dbaf9a62a612d81165247c8c1cffee2fa71028e5984fe6930b1fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_rubin, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vendor=Red Hat, Inc., io.buildah.version=1.33.12, RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, architecture=x86_64) Nov 23 02:42:06 localhost podman[32863]: 2025-11-23 07:42:06.157869271 +0000 UTC m=+0.190669928 container start 5c6074e2ff4dbaf9a62a612d81165247c8c1cffee2fa71028e5984fe6930b1fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_rubin, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, 
maintainer=Guillaume Abrioux , ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, architecture=x86_64, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 02:42:06 localhost podman[32863]: 2025-11-23 07:42:06.158109808 +0000 UTC m=+0.190910515 container attach 5c6074e2ff4dbaf9a62a612d81165247c8c1cffee2fa71028e5984fe6930b1fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_rubin, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, version=7, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, name=rhceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container) Nov 23 02:42:06 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) open path /var/lib/ceph/osd/ceph-5/block Nov 23 02:42:06 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-5/block failed: (22) Invalid argument Nov 23 02:42:06 localhost ceph-osd[32534]: bdev(0x56035b7bf180 /var/lib/ceph/osd/ceph-5/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 23 02:42:06 localhost ceph-osd[32534]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-5/block size 7.0 GiB Nov 23 02:42:06 localhost ceph-osd[32534]: bluefs mount Nov 23 02:42:06 localhost ceph-osd[32534]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Nov 23 02:42:06 localhost ceph-osd[32534]: bluefs mount shared_bdev_used = 4718592 Nov 23 02:42:06 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: RocksDB version: 7.9.2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Git sha 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Compile date 2025-09-23 00:00:00 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: DB SUMMARY Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: DB Session ID: HMCUN0I6UA0JQA6JBY2S Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: CURRENT file: CURRENT Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: IDENTITY file: IDENTITY Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.error_if_exists: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.create_if_missing: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.paranoid_checks: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.flush_verify_memtable_count: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.env: 0x56035ba53dc0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.fs: LegacyFileSystem Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.info_log: 0x56035c81ee80 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_file_opening_threads: 16 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.statistics: (nil) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.use_fsync: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_log_file_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_manifest_file_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.log_file_time_to_roll: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.keep_log_file_num: 1000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.recycle_log_file_num: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.allow_fallocate: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.allow_mmap_reads: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.allow_mmap_writes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.use_direct_reads: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.create_missing_column_families: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.db_log_dir: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.wal_dir: db.wal Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_cache_numshardbits: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.WAL_ttl_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.WAL_size_limit_MB: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.manifest_preallocation_size: 4194304 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.is_fd_close_on_exec: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.advise_random_on_open: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.db_write_buffer_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_manager: 0x56035b7a8140 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.access_hint_on_compaction_start: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.random_access_max_buffer_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.use_adaptive_mutex: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.rate_limiter: (nil) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.wal_recovery_mode: 2 Nov 23 
02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_thread_tracking: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_pipelined_write: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.unordered_write: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.allow_concurrent_memtable_write: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_thread_max_yield_usec: 100 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_thread_slow_yield_usec: 3 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.row_cache: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.wal_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.avoid_flush_during_recovery: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.allow_ingest_behind: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.two_write_queues: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.manual_wal_flush: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.wal_compression: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.atomic_flush: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.persist_stats_to_disk: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_dbid_to_manifest: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.log_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.file_checksum_gen_factory: Unknown Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.best_efforts_recovery: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.allow_data_in_errors: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.db_host_id: __hostname__ Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enforce_single_del_contracts: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_background_jobs: 4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_background_compactions: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_subcompactions: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.avoid_flush_during_shutdown: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.writable_file_max_buffer_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.delayed_write_rate : 16777216 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_total_wal_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.stats_dump_period_sec: 600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.stats_persist_period_sec: 600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.stats_history_buffer_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_open_files: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bytes_per_sync: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.wal_bytes_per_sync: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: 
Options.strict_bytes_per_sync: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_readahead_size: 2097152 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_background_flushes: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Compression algorithms supported: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: #011kZSTD supported: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: #011kXpressCompression supported: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: #011kBZip2Compression supported: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: #011kLZ4Compression supported: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: #011kZlibCompression supported: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: #011kLZ4HCCompression supported: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: #011kSnappyCompression supported: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Fast CRC32 supported: Supported on x86 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: DMutex implementation: pthread_mutex_t Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c743e60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b7962d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:06 localhost 
ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:06 
localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c743e60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b7962d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 
23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:06 localhost 
ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 
0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c743e60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b7962d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: 
Options.write_buffer_size: 16777216 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 
Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: 
Options.report_bg_io_stats: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c743e60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b7962d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: 
rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 
23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c743e60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b7962d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 
2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: 
rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: 
Options.force_consistency_checks: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c743e60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b7962d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 
8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 
23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: 
rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c743e60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b7962d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: 
Options.paranoid_file_checks: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c81e0a0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b797610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 
max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 
02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:06 localhost 
ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c81e0a0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b797610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 
block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:06 localhost systemd[1]: tmp-crun.qN6PAu.mount: Deactivated successfully. Nov 23 02:42:06 localhost systemd[1]: var-lib-containers-storage-overlay-92f739c6d34de2c6783024535be88ab7df0123583c9b5a76280d713d0a98eaa1-merged.mount: Deactivated successfully. Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 
02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: 
Options.memtable_huge_page_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.merge_operator: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_filter_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.sst_partitioner_factory: None Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56035c81e0a0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x56035b797610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 
block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.write_buffer_size: 16777216 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number: 64 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression: LZ4 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression: Disabled Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.num_levels: 7 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.level: 32767 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.enabled: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: 
Options.target_file_size_base: 67108864 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.arena_block_size: 1048576 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_support: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 02:42:06 localhost 
ceph-osd[32534]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.bloom_locality: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.max_successive_merges: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.force_consistency_checks: 1 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.ttl: 2592000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_files: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.min_blob_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_size: 268435456 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Nov 23 02:42:06 localhost 
ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1c4a0c0f-45e9-4f3b-9cc3-62b142fe6c6c Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883726192205, "job": 1, "event": "recovery_started", "wal_files": [31]} Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883726198798, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883726, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c4a0c0f-45e9-4f3b-9cc3-62b142fe6c6c", "db_session_id": "HMCUN0I6UA0JQA6JBY2S", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883726202766, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": 
"window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883726, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c4a0c0f-45e9-4f3b-9cc3-62b142fe6c6c", "db_session_id": "HMCUN0I6UA0JQA6JBY2S", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883726207235, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883726, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c4a0c0f-45e9-4f3b-9cc3-62b142fe6c6c", "db_session_id": "HMCUN0I6UA0JQA6JBY2S", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883726211125, "job": 1, "event": "recovery_finished"} Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56035b844700 Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: DB pointer 0x56035c699a00 Nov 23 02:42:06 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Nov 23 02:42:06 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _upgrade_super from 4, latest 4 Nov 23 02:42:06 localhost ceph-osd[32534]: bluestore(/var/lib/ceph/osd/ceph-5) _upgrade_super done Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 02:42:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB 
Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56035b7962d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 
0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56035b7962d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache 
BinnedLRUCache@0x56035b7962d0#2 capacity: 460.80 MB usag Nov 23 02:42:06 localhost ceph-osd[32534]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Nov 23 02:42:06 localhost ceph-osd[32534]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Nov 23 02:42:06 localhost ceph-osd[32534]: _get_class not permitted to load lua Nov 23 02:42:06 localhost ceph-osd[32534]: _get_class not permitted to load sdk Nov 23 02:42:06 localhost ceph-osd[32534]: _get_class not permitted to load test_remote_reads Nov 23 02:42:06 localhost ceph-osd[32534]: osd.5 0 crush map has features 288232575208783872, adjusting msgr requires for clients Nov 23 02:42:06 localhost ceph-osd[32534]: osd.5 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Nov 23 02:42:06 localhost ceph-osd[32534]: osd.5 0 crush map has features 288232575208783872, adjusting msgr requires for osds Nov 23 02:42:06 localhost ceph-osd[32534]: osd.5 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Nov 23 02:42:06 localhost ceph-osd[32534]: osd.5 0 load_pgs Nov 23 02:42:06 localhost ceph-osd[32534]: osd.5 0 load_pgs opened 0 pgs Nov 23 02:42:06 localhost ceph-osd[32534]: osd.5 0 log_to_monitors true Nov 23 02:42:06 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5[32530]: 2025-11-23T07:42:06.253+0000 7fc094e0ea80 -1 osd.5 0 log_to_monitors true Nov 23 02:42:06 localhost ceph-osd[31569]: osd.2 13 state: booting -> active Nov 23 02:42:06 localhost romantic_rubin[32879]: { Nov 23 02:42:06 localhost romantic_rubin[32879]: "9cfc5456-16f1-4b7f-b9c4-30f67cbbd45e": { Nov 23 02:42:06 localhost romantic_rubin[32879]: "ceph_fsid": "46550e70-79cb-5f55-bf6d-1204b97e083b", Nov 23 02:42:06 localhost romantic_rubin[32879]: "device": "/dev/mapper/ceph_vg1-ceph_lv1", Nov 23 02:42:06 localhost romantic_rubin[32879]: "osd_id": 5, Nov 23 02:42:06 localhost romantic_rubin[32879]: "osd_uuid": "9cfc5456-16f1-4b7f-b9c4-30f67cbbd45e", Nov 23 02:42:06 localhost romantic_rubin[32879]: "type": "bluestore" Nov 23 02:42:06 localhost romantic_rubin[32879]: }, Nov 23 02:42:06 localhost romantic_rubin[32879]: "b8d6ed04-737e-449f-9be4-28f685f59869": { Nov 23 02:42:06 localhost romantic_rubin[32879]: "ceph_fsid": "46550e70-79cb-5f55-bf6d-1204b97e083b", Nov 23 02:42:06 localhost romantic_rubin[32879]: "device": "/dev/mapper/ceph_vg0-ceph_lv0", Nov 23 02:42:06 localhost romantic_rubin[32879]: "osd_id": 2, Nov 23 02:42:06 localhost romantic_rubin[32879]: "osd_uuid": "b8d6ed04-737e-449f-9be4-28f685f59869", Nov 23 02:42:06 localhost romantic_rubin[32879]: "type": "bluestore" Nov 23 02:42:06 localhost romantic_rubin[32879]: } Nov 23 02:42:06 localhost romantic_rubin[32879]: } Nov 23 02:42:06 localhost systemd[1]: libpod-5c6074e2ff4dbaf9a62a612d81165247c8c1cffee2fa71028e5984fe6930b1fe.scope: Deactivated successfully. 
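Note on reading the dumps above: the "#012" and "#011" sequences inside the ceph-osd messages are rsyslog's octal escapes for newline and tab; RocksDB emits its option listing and the "DUMPING STATS" block as single multi-line records, and they get flattened onto one log line when written out. A minimal sketch, assuming the relevant lines are piped in on stdin, that restores the original multi-line layout:

#!/usr/bin/env python3
# Sketch: expand rsyslog's octal control-character escapes so the multi-line
# RocksDB option and stats dumps above print one item per line.
# "#011" is an escaped tab (octal 011), "#012" an escaped newline (octal 012).
import sys

for raw in sys.stdin:
    sys.stdout.write(raw.replace("#011", "\t").replace("#012", "\n"))

For example, filtering the journal for "DUMPING STATS" and piping it through this script turns the compaction-stats tables back into readable rows.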
Nov 23 02:42:06 localhost podman[32863]: 2025-11-23 07:42:06.723008297 +0000 UTC m=+0.755808964 container died 5c6074e2ff4dbaf9a62a612d81165247c8c1cffee2fa71028e5984fe6930b1fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_rubin, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, version=7, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux , release=553, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 02:42:06 localhost systemd[1]: tmp-crun.wbQ4mg.mount: Deactivated successfully. Nov 23 02:42:06 localhost systemd[1]: var-lib-containers-storage-overlay-d6e49e4a06a5fc51d39a2b4a1111d6bd1284cb4024113e4a09159612f274fcb8-merged.mount: Deactivated successfully. Nov 23 02:42:06 localhost podman[33130]: 2025-11-23 07:42:06.830720604 +0000 UTC m=+0.093402229 container remove 5c6074e2ff4dbaf9a62a612d81165247c8c1cffee2fa71028e5984fe6930b1fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_rubin, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, name=rhceph, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, release=553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 02:42:06 localhost systemd[1]: libpod-conmon-5c6074e2ff4dbaf9a62a612d81165247c8c1cffee2fa71028e5984fe6930b1fe.scope: Deactivated successfully. 
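The JSON that the short-lived romantic_rubin container printed above maps each OSD uuid to its ceph_fsid, logical-volume device and osd_id (osd.5 on ceph_vg1-ceph_lv1, osd.2 on ceph_vg0-ceph_lv0). A minimal sketch, assuming that output has been captured to a file named osd_list.json (an assumed name, not a path from the log), that turns it into a per-OSD summary:

#!/usr/bin/env python3
# Sketch: summarise the osd_uuid-keyed JSON printed by the helper container.
import json

with open("osd_list.json") as fh:          # assumed capture of the output above
    osds = json.load(fh)

for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{info['osd_id']:<3} {info['device']:<35} "
          f"type={info['type']} fsid={info['ceph_fsid']}")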
Nov 23 02:42:07 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Nov 23 02:42:07 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Nov 23 02:42:08 localhost ceph-osd[32534]: osd.5 0 done with init, starting boot process Nov 23 02:42:08 localhost ceph-osd[32534]: osd.5 0 start_boot Nov 23 02:42:08 localhost ceph-osd[32534]: osd.5 0 maybe_override_options_for_qos osd_max_backfills set to 1 Nov 23 02:42:08 localhost ceph-osd[32534]: osd.5 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Nov 23 02:42:08 localhost ceph-osd[32534]: osd.5 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Nov 23 02:42:08 localhost ceph-osd[32534]: osd.5 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Nov 23 02:42:08 localhost ceph-osd[32534]: osd.5 0 bench count 12288000 bsize 4 KiB Nov 23 02:42:08 localhost ceph-osd[31569]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients Nov 23 02:42:08 localhost ceph-osd[31569]: osd.2 14 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons Nov 23 02:42:08 localhost ceph-osd[31569]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds Nov 23 02:42:08 localhost ceph-osd[31569]: osd.2 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [2] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 02:42:08 localhost podman[33257]: 2025-11-23 07:42:08.427033306 +0000 UTC m=+0.096872098 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., version=7, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 23 02:42:08 localhost podman[33257]: 2025-11-23 07:42:08.558362673 +0000 UTC m=+0.228201495 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , release=553, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.buildah.version=1.33.12, GIT_BRANCH=main) Nov 23 02:42:08 localhost ceph-osd[31569]: osd.2 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [2] r=0 lpr=14 crt=0'0 mlcod 0'0 undersized+peered mbc={}] state: react AllReplicasActivated Activating complete Nov 23 02:42:09 localhost ceph-osd[31569]: osd.2 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=16 pruub=14.989061356s) [2,3] r=0 lpr=16 pi=[14,16)/0 crt=0'0 mlcod 0'0 peered pruub 22.856912613s@ mbc={}] start_peering_interval up [2] -> [2,3], acting [2] -> [2,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 02:42:09 localhost ceph-osd[31569]: osd.2 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=16 pruub=14.989061356s) [2,3] r=0 lpr=16 pi=[14,16)/0 crt=0'0 mlcod 0'0 unknown pruub 22.856912613s@ mbc={}] state: transitioning to Primary Nov 23 02:42:10 localhost podman[33456]: Nov 23 02:42:10 localhost podman[33456]: 2025-11-23 07:42:10.541092558 +0000 UTC m=+0.061928772 container create a76a8674f01ed08637feee2ee01096aa178fd94c8a6996ab2a9da397e82fe433 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_moser, version=7, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=553, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 02:42:10 localhost systemd[25881]: Starting Mark boot as successful... Nov 23 02:42:10 localhost ceph-osd[32534]: osd.5 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.436 iops: 6511.685 elapsed_sec: 0.461 Nov 23 02:42:10 localhost ceph-osd[32534]: log_channel(cluster) log [WRN] : OSD bench result of 6511.684622 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. Nov 23 02:42:10 localhost ceph-osd[32534]: osd.5 0 waiting for initial osdmap Nov 23 02:42:10 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5[32530]: 2025-11-23T07:42:10.550+0000 7fc0915a2640 -1 osd.5 0 waiting for initial osdmap Nov 23 02:42:10 localhost systemd[25881]: Finished Mark boot as successful. Nov 23 02:42:10 localhost ceph-osd[32534]: osd.5 16 crush map has features 288514051259236352, adjusting msgr requires for clients Nov 23 02:42:10 localhost ceph-osd[32534]: osd.5 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons Nov 23 02:42:10 localhost ceph-osd[32534]: osd.5 16 crush map has features 3314933000852226048, adjusting msgr requires for osds Nov 23 02:42:10 localhost ceph-osd[32534]: osd.5 16 check_osdmap_features require_osd_release unknown -> reef Nov 23 02:42:10 localhost ceph-osd[32534]: osd.5 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Nov 23 02:42:10 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-5[32530]: 2025-11-23T07:42:10.569+0000 7fc08c3b7640 -1 osd.5 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Nov 23 02:42:10 localhost ceph-osd[32534]: osd.5 16 set_numa_affinity not setting numa affinity Nov 23 02:42:10 localhost ceph-osd[32534]: osd.5 16 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial Nov 23 02:42:10 localhost systemd[1]: Started libpod-conmon-a76a8674f01ed08637feee2ee01096aa178fd94c8a6996ab2a9da397e82fe433.scope. Nov 23 02:42:10 localhost systemd[1]: Started libcrun container. 
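The cluster-log warning above means the startup bench measured about 6511.68 IOPS on osd.5, which falls outside the accepted 50–500 IOPS window, so the mClock IOPS capacity was left at the default 315. A minimal sketch (the regex simply mirrors the wording of that warning and is an assumption, not a stable interface) that scans journal lines for these warnings and reports the affected OSDs:

#!/usr/bin/env python3
# Sketch: flag OSDs whose startup bench result was rejected by the
# mClock threshold check, based on the warning wording shown above.
import re
import sys

PAT = re.compile(
    r"OSD bench result of (?P<iops>[\d.]+) IOPS is not within the threshold "
    r"limit range of (?P<lo>[\d.]+) IOPS and (?P<hi>[\d.]+) IOPS for (?P<osd>osd\.\d+)"
)

for line in sys.stdin:
    m = PAT.search(line)
    if m:
        print(f"{m['osd']}: measured {float(m['iops']):.1f} IOPS, "
              f"accepted window {m['lo']}-{m['hi']} IOPS -> capacity left unchanged")

As the warning itself recommends, the durable fix is to benchmark the device with an external tool such as fio and then override osd_mclock_max_capacity_iops_[hdd|ssd] for the OSD.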
Nov 23 02:42:10 localhost podman[33456]: 2025-11-23 07:42:10.6109995 +0000 UTC m=+0.131835724 container init a76a8674f01ed08637feee2ee01096aa178fd94c8a6996ab2a9da397e82fe433 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_moser, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., version=7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 02:42:10 localhost podman[33456]: 2025-11-23 07:42:10.518289814 +0000 UTC m=+0.039126018 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:42:10 localhost podman[33456]: 2025-11-23 07:42:10.623011307 +0000 UTC m=+0.143847501 container start a76a8674f01ed08637feee2ee01096aa178fd94c8a6996ab2a9da397e82fe433 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_moser, vcs-type=git, CEPH_POINT_RELEASE=, ceph=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, release=553, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 02:42:10 localhost podman[33456]: 2025-11-23 07:42:10.623130331 +0000 UTC m=+0.143966525 container attach a76a8674f01ed08637feee2ee01096aa178fd94c8a6996ab2a9da397e82fe433 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_moser, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, architecture=x86_64, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 02:42:10 localhost nostalgic_moser[33474]: 167 167 Nov 23 02:42:10 localhost systemd[1]: libpod-a76a8674f01ed08637feee2ee01096aa178fd94c8a6996ab2a9da397e82fe433.scope: Deactivated successfully. Nov 23 02:42:10 localhost podman[33479]: 2025-11-23 07:42:10.701536159 +0000 UTC m=+0.058551707 container died a76a8674f01ed08637feee2ee01096aa178fd94c8a6996ab2a9da397e82fe433 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_moser, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, distribution-scope=public, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=) Nov 23 02:42:10 localhost podman[33479]: 2025-11-23 07:42:10.738806657 +0000 UTC m=+0.095822155 container remove a76a8674f01ed08637feee2ee01096aa178fd94c8a6996ab2a9da397e82fe433 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_moser, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, release=553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph) Nov 23 02:42:10 localhost systemd[1]: libpod-conmon-a76a8674f01ed08637feee2ee01096aa178fd94c8a6996ab2a9da397e82fe433.scope: Deactivated successfully. 
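nostalgic_moser above follows the same pattern as romantic_rubin: the container is created, started, prints its output ("167 167", which matches the ceph user and group IDs used in the Red Hat packages), dies and is removed within roughly a second, consistent with cephadm running short-lived helper commands in the rhceph image. A minimal sketch (the regex is an assumption based on the podman lines above) that groups these podman journal entries by container ID and prints each container's event sequence:

#!/usr/bin/env python3
# Sketch: group "container <event> <64-hex id> (image=..., name=<name>, ...)"
# podman journal lines by container and print the lifecycle each one went through.
import re
import sys

PAT = re.compile(r"container (\w+) ([0-9a-f]{64}) \(.*?name=([^,)]+)")

lifecycles = {}
for line in sys.stdin:
    m = PAT.search(line)
    if m:
        event, cid, name = m.groups()
        lifecycles.setdefault((cid, name), []).append(event)

for (cid, name), seq in lifecycles.items():
    print(f"{name} ({cid[:12]}): {' -> '.join(seq)}")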
Nov 23 02:42:10 localhost ceph-osd[31569]: osd.2 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=17) [2,4,3] r=0 lpr=17 pi=[14,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [2,3] -> [2,4,3], acting [2,3] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 02:42:10 localhost ceph-osd[31569]: osd.2 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=17) [2,4,3] r=0 lpr=17 pi=[14,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 02:42:10 localhost ceph-osd[32534]: osd.5 17 state: booting -> active Nov 23 02:42:10 localhost podman[33499]: Nov 23 02:42:10 localhost podman[33499]: 2025-11-23 07:42:10.952767324 +0000 UTC m=+0.075977433 container create 4dc8012d82b12584bf02460b746a94611dc5d1cf21838b8ae1cb74c7ec5f822b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_carver, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public) Nov 23 02:42:10 localhost systemd[1]: Started libpod-conmon-4dc8012d82b12584bf02460b746a94611dc5d1cf21838b8ae1cb74c7ec5f822b.scope. Nov 23 02:42:11 localhost systemd[1]: Started libcrun container. 
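The pg[1.0] lines above (epochs 14 through 17) show the placement group's up/acting set growing from [2] to [2,3] and then to [2,4,3] as more OSDs come online, with osd.2 remaining primary throughout. A minimal sketch (regex assumed from the format of these peering lines) that extracts those transitions from the journal:

#!/usr/bin/env python3
# Sketch: report "start_peering_interval" up/acting set changes per PG,
# based on the ceph-osd peering lines shown above.
import re
import sys

PAT = re.compile(
    r"pg\[(?P<pg>[^(]+)\(.*?start_peering_interval "
    r"up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
    r"acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\]"
)

for line in sys.stdin:
    m = PAT.search(line)
    if m:
        print(f"pg {m['pg']}: up [{m['up_old']}] -> [{m['up_new']}], "
              f"acting [{m['act_old']}] -> [{m['act_new']}]")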
Nov 23 02:42:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb75db7dc38dd1333218db975c951576411d297d3b8f6d333ae28589cedea8/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:11 localhost podman[33499]: 2025-11-23 07:42:10.924518368 +0000 UTC m=+0.047728497 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 02:42:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb75db7dc38dd1333218db975c951576411d297d3b8f6d333ae28589cedea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb75db7dc38dd1333218db975c951576411d297d3b8f6d333ae28589cedea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 02:42:11 localhost podman[33499]: 2025-11-23 07:42:11.053597045 +0000 UTC m=+0.176807144 container init 4dc8012d82b12584bf02460b746a94611dc5d1cf21838b8ae1cb74c7ec5f822b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_carver, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, name=rhceph, ceph=True, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 02:42:11 localhost podman[33499]: 2025-11-23 07:42:11.063529016 +0000 UTC m=+0.186739115 container start 4dc8012d82b12584bf02460b746a94611dc5d1cf21838b8ae1cb74c7ec5f822b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_carver, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 02:42:11 localhost podman[33499]: 2025-11-23 07:42:11.064008871 +0000 UTC m=+0.187218970 container attach 
4dc8012d82b12584bf02460b746a94611dc5d1cf21838b8ae1cb74c7ec5f822b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_carver, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, io.buildah.version=1.33.12, release=553, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph) Nov 23 02:42:11 localhost systemd[1]: tmp-crun.YPKdlx.mount: Deactivated successfully. Nov 23 02:42:11 localhost systemd[1]: var-lib-containers-storage-overlay-ca8ea0098684ee0087336e95bf2c20e7d15ba88a00f44cead6e322af3d3a9e99-merged.mount: Deactivated successfully. Nov 23 02:42:11 localhost ceph-osd[31569]: osd.2 pg_epoch: 18 pg[1.0( empty local-lis/les=17/18 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=17) [2,4,3] r=0 lpr=17 pi=[14,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 02:42:11 localhost naughty_carver[33514]: [ Nov 23 02:42:11 localhost naughty_carver[33514]: { Nov 23 02:42:11 localhost naughty_carver[33514]: "available": false, Nov 23 02:42:11 localhost naughty_carver[33514]: "ceph_device": false, Nov 23 02:42:11 localhost naughty_carver[33514]: "device_id": "QEMU_DVD-ROM_QM00001", Nov 23 02:42:11 localhost naughty_carver[33514]: "lsm_data": {}, Nov 23 02:42:11 localhost naughty_carver[33514]: "lvs": [], Nov 23 02:42:11 localhost naughty_carver[33514]: "path": "/dev/sr0", Nov 23 02:42:11 localhost naughty_carver[33514]: "rejected_reasons": [ Nov 23 02:42:11 localhost naughty_carver[33514]: "Insufficient space (<5GB)", Nov 23 02:42:11 localhost naughty_carver[33514]: "Has a FileSystem" Nov 23 02:42:11 localhost naughty_carver[33514]: ], Nov 23 02:42:11 localhost naughty_carver[33514]: "sys_api": { Nov 23 02:42:11 localhost naughty_carver[33514]: "actuators": null, Nov 23 02:42:11 localhost naughty_carver[33514]: "device_nodes": "sr0", Nov 23 02:42:11 localhost naughty_carver[33514]: "human_readable_size": "482.00 KB", Nov 23 02:42:11 localhost naughty_carver[33514]: "id_bus": "ata", Nov 23 02:42:11 localhost naughty_carver[33514]: "model": "QEMU DVD-ROM", Nov 23 02:42:11 localhost naughty_carver[33514]: "nr_requests": "2", Nov 23 02:42:11 localhost naughty_carver[33514]: "partitions": {}, Nov 23 02:42:11 localhost naughty_carver[33514]: "path": "/dev/sr0", Nov 23 02:42:11 localhost naughty_carver[33514]: "removable": "1", Nov 23 02:42:11 localhost naughty_carver[33514]: "rev": "2.5+", Nov 23 02:42:11 localhost naughty_carver[33514]: "ro": "0", Nov 23 02:42:11 localhost naughty_carver[33514]: "rotational": "1", Nov 23 02:42:11 localhost naughty_carver[33514]: "sas_address": "", Nov 23 02:42:11 localhost naughty_carver[33514]: "sas_device_handle": "", Nov 23 02:42:11 localhost 
naughty_carver[33514]: "scheduler_mode": "mq-deadline", Nov 23 02:42:11 localhost naughty_carver[33514]: "sectors": 0, Nov 23 02:42:11 localhost naughty_carver[33514]: "sectorsize": "2048", Nov 23 02:42:11 localhost naughty_carver[33514]: "size": 493568.0, Nov 23 02:42:11 localhost naughty_carver[33514]: "support_discard": "0", Nov 23 02:42:11 localhost naughty_carver[33514]: "type": "disk", Nov 23 02:42:11 localhost naughty_carver[33514]: "vendor": "QEMU" Nov 23 02:42:11 localhost naughty_carver[33514]: } Nov 23 02:42:11 localhost naughty_carver[33514]: } Nov 23 02:42:11 localhost naughty_carver[33514]: ] Nov 23 02:42:11 localhost systemd[1]: libpod-4dc8012d82b12584bf02460b746a94611dc5d1cf21838b8ae1cb74c7ec5f822b.scope: Deactivated successfully. Nov 23 02:42:11 localhost podman[33499]: 2025-11-23 07:42:11.928889054 +0000 UTC m=+1.052099173 container died 4dc8012d82b12584bf02460b746a94611dc5d1cf21838b8ae1cb74c7ec5f822b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_carver, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, name=rhceph, build-date=2025-09-24T08:57:55, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, version=7, distribution-scope=public) Nov 23 02:42:12 localhost systemd[1]: var-lib-containers-storage-overlay-d8fb75db7dc38dd1333218db975c951576411d297d3b8f6d333ae28589cedea8-merged.mount: Deactivated successfully. 
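The array printed by naughty_carver above is a device inventory: the only device reported, /dev/sr0 (the QEMU DVD-ROM), is marked unavailable with the reasons "Insufficient space (<5GB)" and "Has a FileSystem". A minimal sketch, assuming that array has been captured to inventory.json (an assumed name), that lists each device with its size, availability and rejection reasons:

#!/usr/bin/env python3
# Sketch: summarise the device-inventory JSON shown above.
import json

with open("inventory.json") as fh:         # assumed capture of the output above
    devices = json.load(fh)

for dev in devices:
    size = dev.get("sys_api", {}).get("human_readable_size", "?")
    status = "available" if dev.get("available") else "rejected"
    reasons = ", ".join(dev.get("rejected_reasons", [])) or "-"
    print(f"{dev['path']:<12} {size:>10}  {status:<9} {reasons}")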
Nov 23 02:42:12 localhost podman[34987]: 2025-11-23 07:42:12.024950226 +0000 UTC m=+0.084752708 container remove 4dc8012d82b12584bf02460b746a94611dc5d1cf21838b8ae1cb74c7ec5f822b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_carver, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, ceph=True, distribution-scope=public, name=rhceph, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 02:42:12 localhost systemd[1]: libpod-conmon-4dc8012d82b12584bf02460b746a94611dc5d1cf21838b8ae1cb74c7ec5f822b.scope: Deactivated successfully. Nov 23 02:42:20 localhost systemd[1]: tmp-crun.F1vVLG.mount: Deactivated successfully. Nov 23 02:42:20 localhost podman[35113]: 2025-11-23 07:42:20.947348171 +0000 UTC m=+0.090701484 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container) Nov 23 02:42:21 localhost podman[35113]: 2025-11-23 07:42:21.055277884 +0000 UTC m=+0.198631157 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, 
GIT_CLEAN=True, architecture=x86_64, RELEASE=main, name=rhceph, GIT_BRANCH=main, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 02:43:22 localhost systemd[1]: tmp-crun.wLErFH.mount: Deactivated successfully. Nov 23 02:43:22 localhost podman[35286]: 2025-11-23 07:43:22.860015199 +0000 UTC m=+0.073223457 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, release=553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 02:43:22 localhost podman[35286]: 2025-11-23 07:43:22.955220216 +0000 UTC m=+0.168428484 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, maintainer=Guillaume Abrioux , version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, release=553, vendor=Red Hat, Inc., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=) Nov 23 02:43:31 localhost systemd[1]: session-13.scope: Deactivated successfully. Nov 23 02:43:31 localhost systemd[1]: session-13.scope: Consumed 20.933s CPU time. Nov 23 02:43:31 localhost systemd-logind[760]: Session 13 logged out. 
Waiting for processes to exit. Nov 23 02:43:31 localhost systemd-logind[760]: Removed session 13. Nov 23 02:44:53 localhost sshd[35507]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:46:01 localhost systemd[25881]: Created slice User Background Tasks Slice. Nov 23 02:46:01 localhost systemd[25881]: Starting Cleanup of User's Temporary Files and Directories... Nov 23 02:46:01 localhost systemd[25881]: Finished Cleanup of User's Temporary Files and Directories. Nov 23 02:46:56 localhost sshd[35665]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:46:56 localhost systemd-logind[760]: New session 27 of user zuul. Nov 23 02:46:56 localhost systemd[1]: Started Session 27 of User zuul. Nov 23 02:46:56 localhost python3[35713]: ansible-ansible.legacy.ping Invoked with data=pong Nov 23 02:46:57 localhost python3[35758]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 02:46:58 localhost python3[35778]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532584.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Nov 23 02:46:58 localhost python3[35834]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:46:58 localhost python3[35877]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763884018.3311014-66722-116843131784497/source _original_basename=tmp8g3zcyaw follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:46:59 localhost python3[35907]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:46:59 localhost python3[35923]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:47:00 localhost python3[35939]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:47:00 localhost python3[35955]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:47:01 localhost python3[35969]: ansible-ping Invoked with data=pong Nov 23 02:47:12 localhost sshd[35970]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:47:12 localhost systemd-logind[760]: New session 28 of user tripleo-admin. Nov 23 02:47:12 localhost systemd[1]: Created slice User Slice of UID 1003. Nov 23 02:47:12 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Nov 23 02:47:12 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Nov 23 02:47:12 localhost systemd[1]: Starting User Manager for UID 1003... Nov 23 02:47:12 localhost systemd[35974]: Queued start job for default target Main User Target. Nov 23 02:47:12 localhost systemd[35974]: Created slice User Application Slice. Nov 23 02:47:12 localhost systemd[35974]: Started Mark boot as successful after the user session has run 2 minutes. Nov 23 02:47:12 localhost systemd[35974]: Started Daily Cleanup of User's Temporary Directories. Nov 23 02:47:12 localhost systemd[35974]: Reached target Paths. Nov 23 02:47:12 localhost systemd[35974]: Reached target Timers. Nov 23 02:47:12 localhost systemd[35974]: Starting D-Bus User Message Bus Socket... Nov 23 02:47:12 localhost systemd[35974]: Starting Create User's Volatile Files and Directories... Nov 23 02:47:12 localhost systemd[35974]: Listening on D-Bus User Message Bus Socket. Nov 23 02:47:12 localhost systemd[35974]: Reached target Sockets. Nov 23 02:47:12 localhost systemd[35974]: Finished Create User's Volatile Files and Directories. Nov 23 02:47:12 localhost systemd[35974]: Reached target Basic System. Nov 23 02:47:12 localhost systemd[35974]: Reached target Main User Target. Nov 23 02:47:12 localhost systemd[35974]: Startup finished in 119ms. Nov 23 02:47:12 localhost systemd[1]: Started User Manager for UID 1003. Nov 23 02:47:12 localhost systemd[1]: Started Session 28 of User tripleo-admin. Nov 23 02:47:13 localhost python3[36035]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Nov 23 02:47:18 localhost python3[36055]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config Nov 23 02:47:19 localhost python3[36071]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. 
path=None Nov 23 02:47:19 localhost python3[36119]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.qn_asp6ctmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:47:20 localhost python3[36149]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.qn_asp6ctmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:47:21 localhost python3[36165]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.qn_asp6ctmphosts insertbefore=BOF block=172.17.0.106 np0005532584.localdomain np0005532584#012172.18.0.106 np0005532584.storage.localdomain np0005532584.storage#012172.20.0.106 np0005532584.storagemgmt.localdomain np0005532584.storagemgmt#012172.17.0.106 np0005532584.internalapi.localdomain np0005532584.internalapi#012172.19.0.106 np0005532584.tenant.localdomain np0005532584.tenant#012192.168.122.106 np0005532584.ctlplane.localdomain np0005532584.ctlplane#012172.17.0.107 np0005532585.localdomain np0005532585#012172.18.0.107 np0005532585.storage.localdomain np0005532585.storage#012172.20.0.107 np0005532585.storagemgmt.localdomain np0005532585.storagemgmt#012172.17.0.107 np0005532585.internalapi.localdomain np0005532585.internalapi#012172.19.0.107 np0005532585.tenant.localdomain np0005532585.tenant#012192.168.122.107 np0005532585.ctlplane.localdomain np0005532585.ctlplane#012172.17.0.108 np0005532586.localdomain np0005532586#012172.18.0.108 np0005532586.storage.localdomain np0005532586.storage#012172.20.0.108 np0005532586.storagemgmt.localdomain np0005532586.storagemgmt#012172.17.0.108 np0005532586.internalapi.localdomain np0005532586.internalapi#012172.19.0.108 np0005532586.tenant.localdomain np0005532586.tenant#012192.168.122.108 np0005532586.ctlplane.localdomain np0005532586.ctlplane#012172.17.0.103 np0005532581.localdomain np0005532581#012172.18.0.103 np0005532581.storage.localdomain np0005532581.storage#012172.20.0.103 np0005532581.storagemgmt.localdomain np0005532581.storagemgmt#012172.17.0.103 np0005532581.internalapi.localdomain np0005532581.internalapi#012172.19.0.103 np0005532581.tenant.localdomain np0005532581.tenant#012192.168.122.103 np0005532581.ctlplane.localdomain np0005532581.ctlplane#012172.17.0.104 np0005532582.localdomain np0005532582#012172.18.0.104 np0005532582.storage.localdomain np0005532582.storage#012172.20.0.104 np0005532582.storagemgmt.localdomain np0005532582.storagemgmt#012172.17.0.104 np0005532582.internalapi.localdomain np0005532582.internalapi#012172.19.0.104 np0005532582.tenant.localdomain np0005532582.tenant#012192.168.122.104 np0005532582.ctlplane.localdomain np0005532582.ctlplane#012172.17.0.105 np0005532583.localdomain np0005532583#012172.18.0.105 np0005532583.storage.localdomain np0005532583.storage#012172.20.0.105 np0005532583.storagemgmt.localdomain np0005532583.storagemgmt#012172.17.0.105 np0005532583.internalapi.localdomain np0005532583.internalapi#012172.19.0.105 np0005532583.tenant.localdomain np0005532583.tenant#012192.168.122.105 np0005532583.ctlplane.localdomain 
np0005532583.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.204 overcloud.storage.localdomain#012172.20.0.141 overcloud.storagemgmt.localdomain#012172.17.0.224 overcloud.internalapi.localdomain#012172.21.0.154 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:47:22 localhost python3[36181]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.qn_asp6ctmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:47:22 localhost python3[36198]: ansible-file Invoked with path=/tmp/ansible.qn_asp6ctmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:47:23 localhost python3[36214]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:47:24 localhost python3[36231]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:47:28 localhost python3[36250]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:47:29 localhost python3[36283]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:48:19 localhost sshd[37008]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:48:40 
localhost kernel: SELinux: Converting 2700 SID table entries... Nov 23 02:48:40 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 02:48:40 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 02:48:40 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 02:48:40 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 02:48:40 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 02:48:40 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 02:48:40 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 02:48:40 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=6 res=1 Nov 23 02:48:40 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 02:48:40 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 02:48:40 localhost systemd[1]: Reloading. Nov 23 02:48:40 localhost systemd-rc-local-generator[37249]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:48:40 localhost systemd-sysv-generator[37252]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:48:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:48:40 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 02:48:41 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 23 02:48:41 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 02:48:41 localhost systemd[1]: run-r565ad52b5523414abe034fb297745bc6.service: Deactivated successfully. Nov 23 02:48:42 localhost python3[37692]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:48:44 localhost python3[37831]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:48:44 localhost systemd[1]: Reloading. Nov 23 02:48:44 localhost systemd-sysv-generator[37864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:48:44 localhost systemd-rc-local-generator[37861]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:48:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
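The ansible-ansible.legacy.dnf and ansible-ansible.legacy.systemd entries above correspond to the host-prep step that installs the base package set and brings up Open vSwitch. A minimal playbook sketch built only from the arguments visible in those log entries (the play header, become, and task names are assumptions, not taken from the log):

- hosts: overcloud
  become: true
  tasks:
    - name: Install the host-prep packages (package list copied from the logged ansible-dnf call)
      ansible.builtin.dnf:
        name:
          - driverctl
          - lvm2
          - jq
          - nftables
          - openvswitch
          - openstack-heat-agents
          - openstack-selinux
          - os-net-config
          - python3-libselinux
          - python3-pyyaml
          - puppet-tripleo
          - rsync
          - tmpwatch
          - sysstat
          - iproute-tc
        state: present

    - name: Enable and start Open vSwitch (name=openvswitch enabled=True state=started in the log)
      ansible.builtin.systemd:
        name: openvswitch
        enabled: true
        state: started

The rpm -V entry that follows the install is a verification pass over the same package list.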
Nov 23 02:48:45 localhost python3[37885]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:48:46 localhost python3[37901]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:48:47 localhost python3[37918]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Nov 23 02:48:47 localhost python3[37936]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:48:48 localhost python3[37954]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:48:48 localhost python3[37972]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 02:48:49 localhost systemd[1]: Reloading Network Manager... Nov 23 02:48:49 localhost NetworkManager[5966]: [1763884129.6922] audit: op="reload" arg="0" pid=37975 uid=0 result="success" Nov 23 02:48:49 localhost NetworkManager[5966]: [1763884129.6936] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf)) Nov 23 02:48:49 localhost NetworkManager[5966]: [1763884129.6938] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged Nov 23 02:48:49 localhost systemd[1]: Reloaded Network Manager. 
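The ansible-ini_file and ansible-ansible.legacy.systemd entries just above show NetworkManager being told to leave DNS and resolv.conf alone and then being reloaded. A sketch of equivalent tasks using the logged option/value pairs (play header and task names are assumptions; on current Ansible the ini_file module ships in the community.general collection):

- hosts: overcloud
  become: true
  tasks:
    - name: Keep NetworkManager away from DNS (option=dns value=none in the log)
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: dns
        value: none
        no_extra_spaces: true
        backup: true

    - name: Stop NetworkManager from rewriting resolv.conf (option=rc-manager value=unmanaged)
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: rc-manager
        value: unmanaged
        no_extra_spaces: true
        backup: true

    - name: Reload NetworkManager so the new settings take effect
      ansible.builtin.systemd:
        name: NetworkManager
        state: reloaded

The NetworkManager dns-mgr line above (dns=none, rc-manager=unmanaged) shows the reload picking up both values.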
Nov 23 02:48:50 localhost python3[37991]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:48:50 localhost python3[38008]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:48:50 localhost python3[38026]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:48:51 localhost python3[38042]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:48:52 localhost python3[38058]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None Nov 23 02:48:52 localhost python3[38074]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:48:53 localhost python3[38090]: ansible-blockinfile Invoked with path=/tmp/ansible.scypydvk block=[192.168.122.106]*,[np0005532584.ctlplane.localdomain]*,[172.17.0.106]*,[np0005532584.internalapi.localdomain]*,[172.18.0.106]*,[np0005532584.storage.localdomain]*,[172.20.0.106]*,[np0005532584.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005532584.tenant.localdomain]*,[np0005532584.localdomain]*,[np0005532584]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3OrbPXlomvlluk5pGQwXwJu+cR1IMLHg5EnGcI5epB1SB6q/EzlEo5+bOYmmvILsoesUzBIBq21mRhn1Wi2yjlys0pArFDqiLkUBvTW9ro6MKci9Smc12m7AkLus6UO6h3pzqcOdRZQ3KOQDL/83yYJVBCJyqlISXWzzHJpGRVnZHeT4CgKZ1nG5UEvOrtPXRAVWkz3v5TghJrYXvWaPQPmWcEy1rfhCjkCfQY++JB/Dlgammmd1+ZldadeXQi1b2X02a6GFyW0pUMFLjAP7Wr+KcRa5FIPmGwsPuc1NhveAH6zyLrabrh7jPR5O0tBjz9KcNYXbQmJetGt9ZWzFsl0qzXrvI38q5RlGptbqg0iSez61VBAUtnfs33hnYc3dvzJKXReR76PoU3yu/tLrhdK6szqIVsMdw2LGEro7l3KKMKXHSpi8n77fH8ICiU3F5Oif+nvS/e7xr4LccSEnFEHA9PdNxOWxJYLcxTQCt3BkNFrWw4oB1LiDsn98HlS8=#012[192.168.122.107]*,[np0005532585.ctlplane.localdomain]*,[172.17.0.107]*,[np0005532585.internalapi.localdomain]*,[172.18.0.107]*,[np0005532585.storage.localdomain]*,[172.20.0.107]*,[np0005532585.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005532585.tenant.localdomain]*,[np0005532585.localdomain]*,[np0005532585]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCU6ocW8HWtJJyWPSFUqcN5z70XYnNrE5KeWh/VJ4bDkpVePpxxcdD8r8cKL121q0MKPRgia3jLqnKz+o4MH3AqTAWCZamBc1+ePq9OvZDenK69byea8TM176uYzfePjNlud4LSZ6lfkgneO5jeNE6/RcHgBc8Me+2mlzpavioA814r6Ci6hFaEIOS1Zd2b/yKzI4QRl6xg/aJKvlIe9w3G3BvKOG5pixPx2ng4wYc0OMtJb9ItJgZLY92GGuvVRwn9e0D4lab84+x/Nn3XatQdqU69ev7da/bQCUeBivyEZo03olh56YxCKvNfG3ZYwwhMTn9Hg/EdnwrGHYHj0ZgfSR1+Dzvnk0WW/MRs0276Ojj5O0hhnlaAh5n97W6fgHldGKvdEafYeD602C1Zkd+ISqF13W56MWhtUhiUsdUHShnpM/EBOITg6mTDFP1i/qMS0PjRaCzBpdqpJIoKzQpsi4Z3QTHTZ7uK/lqOEaE/wqXHuYlMKcTuOuX33gIp28k=#012[192.168.122.108]*,[np0005532586.ctlplane.localdomain]*,[172.17.0.108]*,[np0005532586.internalapi.localdomain]*,[172.18.0.108]*,[np0005532586.storage.localdomain]*,[172.20.0.108]*,[np0005532586.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005532586.tenant.localdomain]*,[np0005532586.localdomain]*,[np0005532586]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD6U4JggC29IKqxQ7GjhK23AehQb1S2zLryOxLwLEs9rP0qOZpJ9wR1VsBNLXDCmoRVTsH2+3V00hmkvlanKUuzgmLO61hdur+5NQD0xHnY7lOLpOoyR7hJiMuHj/nRgBLWY2OB8Gim121dgfuc2zRF92igDYe65Uf0et83vWlgRmc7KlziaJ91iVcBUmhGYf3Ij7QxfhQH5TTnGoQizdiBpuP+yVuU2AepbvQ8ZFvzioCwzWAVu/xfdRFp9QyLT4JP1jM6dadTjD5RUAjRL6qR1tLXVq/rvqtXSL8ruBSYm3NCOys9RtdrNolZ7frd+zmvF+VzMNLtlRxiuy1ReR+ZO3felB+4TwfEfLZ+DqE1s3+ksCQH/sVCrxzFsRz5lamWG3p78ZBWTiQ/7WdJS1dQOHz+pKNSSW/NYMIqitxsCsEWPJLq/EWoHVxvjREucCb5YvWHPKOv5RLlbm5lSHFLuFVV8O3AAzD/3JsjTbKGOjJhmtxPCgEy7RPqtIUX90s=#012[192.168.122.103]*,[np0005532581.ctlplane.localdomain]*,[172.17.0.103]*,[np0005532581.internalapi.localdomain]*,[172.18.0.103]*,[np0005532581.storage.localdomain]*,[172.20.0.103]*,[np0005532581.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005532581.tenant.localdomain]*,[np0005532581.localdomain]*,[np0005532581]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRibSMIP5+E9lJWuaKDEuCaJoGhGPTqff+o8SP2Twk+NhPOa5FC7WQhHPLXVhKAtlCX60ckYE53Q/H/RVRZ55JdWQLSdY/1tQCD6c0Ry6N+UD+mxo9iN9cHk6vd6J5kJu+v/gBEmFY1A9pjzsD1CTR8gZJHZFqbUTzXrKkoUjK3Kqa8UtvzyhgYQtYIaUwaf1z7CMNQ3A4EaGVKyRsVwb11jlaT9fjB43E3tp9p5EG6PPJEGux/Xea6iHnhSwZHpkD/ylneDOkBbGvYKhL33bpXMcbuHy32jAFr+2Q07sKvgy/b5/f/nTgNCyxEIpoXUbEhX+Vlh+gycU7KJw6FRyR3dQFjooV97NQ/oov2VP9DnTObziZA8lhaJ20ChTfDVUyvFCFi3dKgBUPCeNWCGI69eNHu3dQcwCNJ3kANqhHdkYpBd00PVBritJfxfzH1DCLo0I9CSi1buWYhein9VHZWtzePv/+ucWERRIo+J04QPkV+6P6vgOTRl5U75RctJU=#012[192.168.122.104]*,[np0005532582.ctlplane.localdomain]*,[172.17.0.104]*,[np0005532582.internalapi.localdomain]*,[172.18.0.104]*,[np0005532582.storage.localdomain]*,[172.20.0.104]*,[np0005532582.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005532582.tenant.localdomain]*,[np0005532582.localdomain]*,[np0005532582]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0v47OVdr7YS/5xSUmMc7u26O7OwPomkdDR6s8rrcencbx7seRSeU00QGeRQcJJ023bD3xk26W8iiJTRUDkYSy//cSfHODdDy+CNEfDUTkGzIjiApoLi2b+S4J6wcAldMsj02MZmx67vUHyM5Qwok+22XqopryL8BiGPJbnoUcZy773f5OKPPMNuj3Fyb7jd5mrC7awK4NniZHyHPYBQeBa234HL42fRjcOqCcxuauy5cbz9PeBv5/kg+nYc8cY5qCyLqNhzMVRUa/PcepMBcfThk17LtPGzCYS7IR2cGdUDP6Pe0QD34Hu6+mpwKwYx73v5uHcmy9CeZ8fK83/F84Lr6jxsiwoU2e+hUfzVRq8gnkjk6kuL86eSM2POSGgBYYgCb+Ma6lOkF1MA+rLAh0gAsUhBgVlz6HtaMoDvLOi/NrQeoQyNE1Pv4vPAndmGGc8A7JCtmCMk9VvMy0Ht4IOvtDJFfx1lg7NuMIKqePYTEk56p8wTUNM+BmdJEhFPU=#012[192.168.122.105]*,[np0005532583.ctlplane.localdomain]*,[172.17.0.105]*,[np0005532583.internalapi.localdomain]*,[172.18.0.105]*,[np0005532583.storage.localdomain]*,[172.20.0.105]*,[np0005532583.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005532583.tenant.localdomain]*,[np0005532583.localdomain]*,[np0005532583]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCkB1Cq8AQaEBYTlv5Hzs024jg//D6wieNnvsI5WcYj7wckm9vKTJQfUD6yZBMmyPw6+vVzsM16bj2hagkDR5wkO7uSIaMqWrcoQ1h9HkJQLK8QB0iuzUvQzdr22kUgkLII8thNHK4VxF4VhAKNmzqCofZ4ZSaLUMwauFCFUjx1VJISEZdgYRZ4+++wAN5bdK+WrwSOAHJYJWQX2pRRsPiunSdY1BOUKB3sp7IBcQ3MDJgnKlkR7tiGSYB2W8JsLvIsIb0I2EaqmPUTIzKUuxSJnWEls/WyDT9MNkjhobVeAyFZ5TEik4OvobUhVGJ8CsU7O101KQNQ3IywPM+V0UpjA1yK49z5Qs0LjApmqORsTcjOojYaKGr9n64dVjXdFOMwajB9UmMEFtlIngm6kx7mJQGXqYxVAscW34JY832iKOEzQWrUSdo6mVJ7TXhYYcbdFp+G/128SfhNrbHwKinHeE9Nqu48BR7bmRZXO7ef+UMY1dG3AIvFt4JwFvLihZc=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:48:53 localhost python3[38106]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.scypydvk' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:48:54 localhost python3[38124]: ansible-file Invoked with path=/tmp/ansible.scypydvk state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:48:55 localhost python3[38140]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 02:48:55 localhost python3[38156]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:48:56 localhost python3[38174]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:48:56 localhost python3[38193]: ansible-community.general.cloud_init_data_facts Invoked with filter=status Nov 23 02:48:58 localhost python3[38330]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:48:59 localhost python3[38347]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None 
disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:49:02 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Nov 23 02:49:02 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Nov 23 02:49:02 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 02:49:02 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 02:49:02 localhost systemd[1]: Reloading. Nov 23 02:49:02 localhost systemd-rc-local-generator[38417]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:49:02 localhost systemd-sysv-generator[38422]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:49:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:49:03 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 02:49:03 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Nov 23 02:49:03 localhost systemd[1]: tuned.service: Deactivated successfully. Nov 23 02:49:03 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Nov 23 02:49:03 localhost systemd[1]: tuned.service: Consumed 1.596s CPU time. Nov 23 02:49:03 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Nov 23 02:49:03 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 23 02:49:03 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 02:49:03 localhost systemd[1]: run-rc1cf4a7955954385aff4fc21e3f79638.service: Deactivated successfully. Nov 23 02:49:04 localhost systemd[1]: Started Dynamic System Tuning Daemon. Nov 23 02:49:04 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 02:49:04 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 02:49:04 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 23 02:49:04 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 02:49:04 localhost systemd[1]: run-r96229ba60d8f4028bff502a2fa0bf08d.service: Deactivated successfully. Nov 23 02:49:05 localhost python3[38784]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:49:05 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Nov 23 02:49:05 localhost systemd[1]: tuned.service: Deactivated successfully. Nov 23 02:49:05 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Nov 23 02:49:05 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Nov 23 02:49:07 localhost systemd[1]: Started Dynamic System Tuning Daemon. 
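The entries above record tuned and tuned-profiles-cpu-partitioning being installed and the tuned service being restarted and enabled, ahead of the profile switch that follows. A sketch of those two steps with the logged arguments (play header and task names assumed):

- hosts: overcloud
  become: true
  tasks:
    - name: Install tuned and the cpu-partitioning profile package
      ansible.builtin.dnf:
        name:
          - tuned
          - tuned-profiles-cpu-partitioning
        state: present

    - name: Restart and enable the tuned daemon (name=tuned state=restarted enabled=True in the log)
      ansible.builtin.systemd:
        name: tuned
        state: restarted
        enabled: true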
Nov 23 02:49:07 localhost python3[38979]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:49:08 localhost python3[38996]: ansible-slurp Invoked with src=/etc/tuned/active_profile Nov 23 02:49:08 localhost python3[39012]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:49:09 localhost python3[39028]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:49:11 localhost python3[39048]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:49:11 localhost python3[39065]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:49:14 localhost python3[39081]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:18 localhost python3[39097]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:19 localhost python3[39145]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:19 localhost python3[39190]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884159.0798342-71266-24462484446961/source _original_basename=tmpcyvgv7ra follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:20 localhost python3[39220]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:21 localhost python3[39268]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True 
get_attributes=True Nov 23 02:49:21 localhost python3[39311]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884160.6830516-71411-269801402274909/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=72c5ef7909b5cdbbb2310fa1b5c8d166a17f7155 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:21 localhost python3[39373]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:22 localhost python3[39416]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884161.6130075-71474-24651253692106/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=6552073e0e4bb04b7faeda3f8c2098edf889171a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:22 localhost python3[39478]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:23 localhost python3[39521]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884162.5283444-71474-135326625687377/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=1bc51567bc68ec6d87ea2fcfee756b886ebb9f92 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:23 localhost python3[39583]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:24 localhost python3[39626]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884163.4697943-71474-115229596303240/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=68b5a56a66cb10764ef3288009ad5e9b7e8faf12 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:24 localhost python3[39688]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:25 localhost python3[39731]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884164.4732473-71474-43107926286130/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:25 localhost python3[39793]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:26 localhost python3[39836]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884165.3449028-71474-96699971417017/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=4b5c93f5e19c772da8a7cefdaad08d891965d37c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:26 localhost python3[39898]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:26 localhost python3[39941]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884166.270804-71474-169082157549728/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:27 localhost python3[40003]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:27 localhost python3[40046]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884167.109685-71474-226532529058078/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=66f0a2c6a0832caadadc4d66bd975147c152464b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:28 localhost python3[40108]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:28 localhost python3[40151]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884167.9121165-71474-220327005027106/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:29 localhost python3[40213]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True 
get_attributes=True Nov 23 02:49:29 localhost python3[40256]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884168.7936153-71474-30224468026287/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:29 localhost python3[40318]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:30 localhost python3[40361]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884169.6322863-71474-265019811633992/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=d194268468ecff87f91548ef9a00855e8c650f8d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:31 localhost python3[40391]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:49:31 localhost python3[40439]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:49:32 localhost python3[40511]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884171.4910398-72252-16069086385798/source _original_basename=tmp2ayd6tnb follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:49:36 localhost python3[40589]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 02:49:37 localhost python3[40650]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:49:42 localhost python3[40667]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:49:47 localhost python3[40684]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 
02:49:48 localhost python3[40707]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:49:48 localhost python3[40730]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:49:49 localhost python3[40753]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:49:49 localhost python3[40776]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:50:01 localhost systemd[35974]: Starting Mark boot as successful... Nov 23 02:50:01 localhost systemd[35974]: Finished Mark boot as successful. Nov 23 02:50:31 localhost python3[40800]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:32 localhost python3[40848]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:32 localhost python3[40866]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmp_73ouoyq recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:32 localhost python3[40896]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:33 localhost python3[40974]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:33 localhost 
python3[41004]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:34 localhost python3[41086]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:34 localhost python3[41104]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:35 localhost python3[41166]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:35 localhost python3[41184]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:36 localhost python3[41246]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:36 localhost python3[41279]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:36 localhost python3[41341]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:37 localhost python3[41359]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:37 localhost python3[41421]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json 
follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:37 localhost python3[41439]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:38 localhost python3[41501]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:38 localhost python3[41519]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:39 localhost python3[41581]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:39 localhost python3[41599]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:39 localhost python3[41661]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:40 localhost python3[41679]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:40 localhost python3[41741]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:40 localhost python3[41759]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Nov 23 02:50:41 localhost python3[41821]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:41 localhost python3[41839]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:42 localhost python3[41869]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:50:43 localhost python3[41917]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:43 localhost python3[41935]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmpds2im1kb recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:45 localhost python3[41965]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:50:50 localhost python3[41982]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:50:52 localhost python3[42000]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:50:52 localhost python3[42018]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:50:52 localhost systemd[1]: Reloading. Nov 23 02:50:52 localhost systemd-rc-local-generator[42044]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:50:52 localhost systemd-sysv-generator[42050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
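The tasks above stage the Puppet hieradata fragments under /etc/puppet/hieradata and then switch the host from firewalld/iptables to nftables. A minimal sketch, assuming only the file names shown in the log, of how those fragments could be sanity-checked as JSON before Puppet consumes them (the loop and the python3 -m json.tool call are illustrative, not part of the deployment):

# Hedged sketch: validate each hieradata fragment written above.
for f in all_nodes bootstrap_node vip_data net_ip_map cloud_domain fqdn \
         service_names service_configs extraconfig role_extraconfig \
         ovn_chassis_mac_map ansible_managed; do
    python3 -m json.tool "/etc/puppet/hieradata/${f}.json" > /dev/null \
        && echo "OK   ${f}.json" \
        || echo "FAIL ${f}.json"
done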
Nov 23 02:50:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:50:53 localhost systemd[1]: Starting Netfilter Tables... Nov 23 02:50:53 localhost systemd[1]: Finished Netfilter Tables. Nov 23 02:50:53 localhost python3[42107]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:54 localhost python3[42150]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884253.4729195-75101-75314265098770/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:54 localhost python3[42180]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:50:55 localhost python3[42198]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:50:55 localhost python3[42247]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:56 localhost python3[42290]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884255.314207-75211-10854437390261/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:56 localhost python3[42352]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:57 localhost python3[42395]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884256.3152802-75277-21978887694802/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:57 localhost python3[42457]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:58 localhost python3[42500]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884257.3193488-75409-172847099687532/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:50:58 localhost python3[42562]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:50:58 localhost python3[42605]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884258.224741-75450-133272353038347/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:00 localhost python3[42667]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:51:00 localhost python3[42710]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884259.237534-75524-182631781773200/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:00 localhost python3[42740]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:51:01 localhost python3[42805]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:01 localhost python3[42822]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:51:02 localhost python3[42839]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True 
argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:51:02 localhost python3[42858]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:03 localhost python3[42874]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:03 localhost python3[42890]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:03 localhost python3[42906]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Nov 23 02:51:04 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=7 res=1 Nov 23 02:51:05 localhost python3[42927]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Nov 23 02:51:05 localhost kernel: SELinux: Converting 2704 SID table entries... Nov 23 02:51:05 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 02:51:05 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 02:51:05 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 02:51:05 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 02:51:05 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 02:51:05 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 02:51:05 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 02:51:06 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=8 res=1 Nov 23 02:51:06 localhost python3[42948]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Nov 23 02:51:07 localhost kernel: SELinux: Converting 2704 SID table entries... 
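The nftables fragments staged above are validated and applied in three steps. A sketch of the equivalent manual sequence, using exactly the fragment paths and nft invocations recorded in the log:

# Dry-run validation of the full ruleset (chains, flushes, rules, jump updates, jumps).
cat /etc/nftables/tripleo-chains.nft \
    /etc/nftables/tripleo-flushes.nft \
    /etc/nftables/tripleo-rules.nft \
    /etc/nftables/tripleo-update-jumps.nft \
    /etc/nftables/tripleo-jumps.nft | nft -c -f -
# Create the TripleO chains first, then flush and reload the live rules.
nft -f /etc/nftables/tripleo-chains.nft
cat /etc/nftables/tripleo-flushes.nft \
    /etc/nftables/tripleo-rules.nft \
    /etc/nftables/tripleo-update-jumps.nft | nft -f -

The blockinfile task additionally appends include lines for /etc/nftables/iptables.nft, tripleo-chains.nft, tripleo-rules.nft and tripleo-jumps.nft to /etc/sysconfig/nftables.conf, so nftables.service can restore the same ruleset at boot.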
Nov 23 02:51:07 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 02:51:07 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 02:51:07 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 02:51:07 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 02:51:07 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 02:51:07 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 02:51:07 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 02:51:07 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=9 res=1 Nov 23 02:51:07 localhost python3[42969]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Nov 23 02:51:08 localhost kernel: SELinux: Converting 2704 SID table entries... Nov 23 02:51:08 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 02:51:08 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 02:51:08 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 02:51:08 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 02:51:08 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 02:51:08 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 02:51:08 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 02:51:08 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=10 res=1 Nov 23 02:51:09 localhost python3[42990]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:09 localhost python3[43006]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:09 localhost python3[43022]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:09 localhost python3[43038]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:51:10 localhost python3[43054]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:51:11 localhost python3[43071]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] 
state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:51:15 localhost python3[43088]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:15 localhost python3[43136]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:51:15 localhost python3[43179]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884275.164153-76393-1160896750019/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:16 localhost python3[43209]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 02:51:17 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 02:51:17 localhost systemd[1]: Stopped Load Kernel Modules. Nov 23 02:51:17 localhost systemd[1]: Stopping Load Kernel Modules... Nov 23 02:51:17 localhost systemd[1]: Starting Load Kernel Modules... Nov 23 02:51:17 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 23 02:51:17 localhost kernel: Bridge firewalling registered Nov 23 02:51:17 localhost systemd-modules-load[43212]: Inserted module 'br_netfilter' Nov 23 02:51:17 localhost systemd-modules-load[43212]: Module 'msr' is built in Nov 23 02:51:17 localhost systemd[1]: Finished Load Kernel Modules. 
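The exact contents of /etc/modules-load.d/99-tripleo.conf are not logged, but the systemd-modules-load restart above reports br_netfilter being inserted and msr being built in, so the file plausibly lists at least those two modules. A hedged sketch (the file body is an assumption):

# Assumed contents, inferred from the systemd-modules-load output above.
cat <<'EOF' > /etc/modules-load.d/99-tripleo.conf
br_netfilter
msr
EOF
systemctl restart systemd-modules-load.service   # same effect as the ansible-systemd task
lsmod | grep br_netfilter                        # confirm the bridge netfilter module is loaded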
Nov 23 02:51:17 localhost python3[43263]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:51:18 localhost python3[43306]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884277.6337657-76449-110141499736772/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:18 localhost python3[43336]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:19 localhost python3[43353]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:19 localhost python3[43371]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:19 localhost python3[43389]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:19 localhost python3[43406]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:20 localhost python3[43423]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:20 localhost python3[43440]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:20 localhost python3[43458]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:21 localhost python3[43476]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:21 localhost python3[43494]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:21 localhost python3[43512]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:22 localhost python3[43530]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:22 localhost python3[43548]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present 
sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:22 localhost python3[43566]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:22 localhost python3[43583]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:23 localhost python3[43600]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:23 localhost python3[43617]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:23 localhost python3[43634]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Nov 23 02:51:24 localhost python3[43652]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 02:51:24 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 02:51:24 localhost systemd[1]: Stopped Apply Kernel Variables. Nov 23 02:51:24 localhost systemd[1]: Stopping Apply Kernel Variables... Nov 23 02:51:24 localhost systemd[1]: Starting Apply Kernel Variables... Nov 23 02:51:24 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 02:51:24 localhost systemd[1]: Finished Apply Kernel Variables. 
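Each ansible-sysctl call above adds one key to /etc/sysctl.d/99-tripleo.conf, and the systemd-sysctl restart applies them. A reconstruction of the resulting file, written as a heredoc for illustration; the key/value pairs come from the log, while ordering and spacing are assumptions:

cat <<'EOF' > /etc/sysctl.d/99-tripleo.conf
fs.aio-max-nr=1048576
fs.inotify.max_user_instances=1024
kernel.pid_max=1048576
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.ip_forward=1
net.ipv4.ip_local_reserved_ports=35357,49000-49001
net.ipv4.ip_nonlocal_bind=1
net.ipv4.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh3=4096
net.ipv6.conf.all.disable_ipv6=0
net.ipv6.conf.all.forwarding=0
net.ipv6.conf.default.disable_ipv6=0
net.ipv6.conf.lo.disable_ipv6=0
net.ipv6.ip_nonlocal_bind=1
EOF
systemctl restart systemd-sysctl.service   # applies the new values, as logged above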
Nov 23 02:51:24 localhost python3[43672]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:25 localhost python3[43688]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:25 localhost python3[43704]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:25 localhost python3[43720]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:51:26 localhost python3[43736]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:26 localhost python3[43752]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:26 localhost python3[43768]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:26 localhost python3[43784]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:27 localhost python3[43800]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:27 localhost python3[43848]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:51:28 localhost python3[43891]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884287.4769132-76851-202896387491506/source _original_basename=tmpki30xcv2 follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:28 localhost python3[43921]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:51:30 localhost python3[43938]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:30 localhost python3[43986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:51:31 localhost python3[44029]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884290.523989-77095-166263971294924/source _original_basename=tmppq2wybd2 follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:31 localhost python3[44059]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:32 localhost python3[44075]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:32 localhost python3[44091]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:32 localhost python3[44107]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:32 localhost python3[44123]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:33 localhost python3[44139]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:33 localhost python3[44155]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:33 localhost python3[44171]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:34 localhost python3[44187]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:34 localhost python3[44203]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False Nov 23 02:51:34 localhost python3[44225]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532584.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None 
authorization=None role=None umask=None Nov 23 02:51:35 localhost python3[44249]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None Nov 23 02:51:35 localhost python3[44265]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:51:36 localhost python3[44314]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:51:36 localhost python3[44385]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884295.917274-77401-232259226547654/source _original_basename=tmptvbc_qrl follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:37 localhost python3[44434]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Nov 23 02:51:37 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=11 res=1 Nov 23 02:51:38 localhost python3[44521]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:38 localhost python3[44552]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:38 localhost python3[44568]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False Nov 23 02:51:40 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=12 res=1 Nov 23 02:51:40 localhost python3[44588]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:51:43 localhost python3[44605]: ansible-setup 
Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 02:51:44 localhost python3[44666]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:44 localhost python3[44682]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:51:45 localhost python3[44741]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:51:45 localhost python3[44784]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884304.7547011-77852-113595697466204/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=39daaced885041ee369572002097d7764c28980a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:46 localhost python3[44846]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:51:46 localhost python3[44891]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884305.7763255-77904-90297691793538/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:46 localhost python3[44921]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:47 localhost python3[44937]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:47 localhost python3[44953]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None 
attributes=None Nov 23 02:51:47 localhost python3[44969]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:48 localhost python3[45017]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:51:48 localhost python3[45060]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884308.1472507-78015-91862668493061/source _original_basename=tmpffrqznbm follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:49 localhost python3[45090]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:49 localhost python3[45106]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:51:50 localhost python3[45122]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:51:53 localhost python3[45171]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:51:53 localhost python3[45216]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884313.25547-78240-215115043440792/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:51:54 localhost python3[45247]: ansible-systemd Invoked with name=sshd state=restarted enabled=True 
daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:51:54 localhost systemd[1]: Stopping OpenSSH server daemon... Nov 23 02:51:54 localhost systemd[1]: sshd.service: Deactivated successfully. Nov 23 02:51:54 localhost systemd[1]: Stopped OpenSSH server daemon. Nov 23 02:51:54 localhost systemd[1]: sshd.service: Consumed 2.361s CPU time, read 2.5M from disk, written 24.0K to disk. Nov 23 02:51:54 localhost systemd[1]: Stopped target sshd-keygen.target. Nov 23 02:51:54 localhost systemd[1]: Stopping sshd-keygen.target... Nov 23 02:51:54 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 02:51:54 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 02:51:54 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 02:51:54 localhost systemd[1]: Reached target sshd-keygen.target. Nov 23 02:51:54 localhost systemd[1]: Starting OpenSSH server daemon... Nov 23 02:51:54 localhost sshd[45251]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:51:54 localhost systemd[1]: Started OpenSSH server daemon. Nov 23 02:51:55 localhost python3[45267]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:51:56 localhost python3[45285]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:51:56 localhost python3[45303]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:51:59 localhost python3[45352]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:52:00 localhost python3[45370]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:52:01 localhost python3[45400]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False 
daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:52:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 02:52:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3405 writes, 16K keys, 3405 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3405 writes, 206 syncs, 16.53 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3405 writes, 16K keys, 3405 commit groups, 1.0 writes per commit group, ingest: 15.29 MB, 0.03 MB/s#012Interval WAL: 3405 writes, 206 syncs, 16.53 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f46733610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f46733610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Nov 23 02:52:02 localhost python3[45450]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:52:02 localhost python3[45468]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:52:02 localhost python3[45498]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:52:02 localhost systemd[1]: Reloading. Nov 23 02:52:02 localhost systemd-rc-local-generator[45519]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:52:02 localhost systemd-sysv-generator[45524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:52:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:52:03 localhost systemd[1]: Starting chronyd online sources service... Nov 23 02:52:03 localhost chronyc[45538]: 200 OK Nov 23 02:52:03 localhost systemd[1]: chrony-online.service: Deactivated successfully. Nov 23 02:52:03 localhost systemd[1]: Finished chronyd online sources service. 
Nov 23 02:52:03 localhost python3[45554]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:52:03 localhost chronyd[25679]: System clock was stepped by 0.000079 seconds Nov 23 02:52:03 localhost python3[45571]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:52:04 localhost python3[45588]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:52:04 localhost chronyd[25679]: System clock was stepped by 0.000000 seconds Nov 23 02:52:04 localhost python3[45605]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:52:05 localhost python3[45622]: ansible-timezone Invoked with name=UTC hwclock=None Nov 23 02:52:05 localhost systemd[1]: Starting Time & Date Service... Nov 23 02:52:05 localhost systemd[1]: Started Time & Date Service. Nov 23 02:52:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 02:52:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3247 writes, 16K keys, 3247 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3247 writes, 139 syncs, 23.36 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3247 writes, 16K keys, 3247 commit groups, 1.0 writes per commit group, ingest: 14.62 MB, 0.02 MB/s#012Interval WAL: 3247 writes, 139 syncs, 23.36 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 
interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56035b7962d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56035b7962d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) 
Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Nov 23 02:52:06 localhost python3[45642]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:52:07 localhost python3[45659]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:52:07 localhost python3[45676]: ansible-slurp Invoked with src=/etc/tuned/active_profile Nov 23 02:52:08 localhost python3[45692]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:52:08 localhost python3[45708]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:52:09 localhost python3[45724]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:52:09 localhost python3[45772]: ansible-ansible.legacy.stat Invoked with 
path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:52:09 localhost python3[45815]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884329.1771598-79234-25282617212745/source _original_basename=tmpfr503p77 follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:52:10 localhost python3[45877]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:52:10 localhost python3[45920]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884330.1144576-79435-140574302453139/source _original_basename=tmp3971r_jm follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:52:11 localhost python3[45950]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Nov 23 02:52:11 localhost systemd[1]: Reloading. Nov 23 02:52:11 localhost systemd-rc-local-generator[45975]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:52:11 localhost systemd-sysv-generator[45982]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:52:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:52:12 localhost python3[46004]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:52:12 localhost python3[46020]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:52:12 localhost python3[46037]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:52:12 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully. Nov 23 02:52:12 localhost systemd[35974]: Created slice User Background Tasks Slice. 
Nov 23 02:52:12 localhost systemd[35974]: Starting Cleanup of User's Temporary Files and Directories... Nov 23 02:52:12 localhost systemd[35974]: Finished Cleanup of User's Temporary Files and Directories. Nov 23 02:52:13 localhost python3[46055]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:52:13 localhost python3[46071]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:52:14 localhost python3[46119]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:52:14 localhost python3[46162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884333.795925-79623-200737199596357/source _original_basename=tmp9ttqosqk follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:52:35 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. 
Nov 23 02:52:37 localhost python3[46194]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 23 02:52:38 localhost python3[46210]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Nov 23 02:52:38 localhost python3[46239]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 02:52:38 localhost python3[46272]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:52:39 localhost python3[46321]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:52:39 localhost python3[46337]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Nov 23 02:52:40 localhost kernel: SELinux: Converting 2707 SID table entries... 
Nov 23 02:52:40 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 02:52:40 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 02:52:40 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 02:52:40 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 02:52:40 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 02:52:40 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 02:52:40 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 02:52:40 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=13 res=1 Nov 23 02:52:40 localhost python3[46375]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:52:42 localhost python3[46512]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Nov 23 02:52:43 localhost rsyslogd[759]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Nov 23 02:52:43 localhost python3[46528]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 02:52:43 localhost python3[46544]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 23 02:52:44 localhost python3[46560]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile 
/var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, 
'/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}} Nov 23 02:52:49 localhost python3[46608]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:52:49 localhost python3[46651]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884369.0237849-81100-81865775708529/source _original_basename=tmp84mcqafp follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:52:50 localhost python3[46681]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:52:52 localhost python3[46804]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 02:52:54 localhost python3[46925]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 23 02:52:56 localhost python3[46941]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:52:57 localhost python3[46958]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 02:53:00 localhost dbus-broker-launch[18422]: Noticed file-system modification, trigger reload. 
Nov 23 02:53:00 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Nov 23 02:53:00 localhost dbus-broker-launch[18422]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Nov 23 02:53:00 localhost dbus-broker-launch[18422]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Nov 23 02:53:01 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Nov 23 02:53:01 localhost systemd[1]: Reexecuting. Nov 23 02:53:01 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 23 02:53:01 localhost systemd[1]: Detected virtualization kvm. Nov 23 02:53:01 localhost systemd[1]: Detected architecture x86-64. Nov 23 02:53:01 localhost systemd-sysv-generator[47017]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:53:01 localhost systemd-rc-local-generator[47013]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:53:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:53:10 localhost kernel: SELinux: Converting 2707 SID table entries... Nov 23 02:53:10 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 02:53:10 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 02:53:10 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 02:53:10 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 02:53:10 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 02:53:10 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 02:53:10 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 02:53:10 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Nov 23 02:53:10 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=14 res=1 Nov 23 02:53:10 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Nov 23 02:53:11 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 02:53:11 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 02:53:11 localhost systemd[1]: Reloading. Nov 23 02:53:11 localhost systemd-rc-local-generator[47130]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:53:11 localhost systemd-sysv-generator[47134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:53:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 02:53:12 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 02:53:12 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 02:53:12 localhost systemd-journald[617]: Journal stopped Nov 23 02:53:12 localhost systemd[1]: Stopping Journal Service... Nov 23 02:53:12 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files... Nov 23 02:53:12 localhost systemd-journald[617]: Received SIGTERM from PID 1 (systemd). Nov 23 02:53:12 localhost systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 02:53:12 localhost systemd[1]: Stopped Journal Service. Nov 23 02:53:12 localhost systemd[1]: systemd-journald.service: Consumed 1.693s CPU time. Nov 23 02:53:12 localhost systemd[1]: Starting Journal Service... Nov 23 02:53:12 localhost systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 02:53:12 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files. Nov 23 02:53:12 localhost systemd[1]: systemd-udevd.service: Consumed 2.822s CPU time. Nov 23 02:53:12 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files... Nov 23 02:53:12 localhost systemd-journald[47422]: Journal started Nov 23 02:53:12 localhost systemd-journald[47422]: Runtime Journal (/run/log/journal/6e0090cd4cf296f54418e234b90f721c) is 12.1M, max 314.7M, 302.6M free. Nov 23 02:53:12 localhost systemd[1]: Started Journal Service. Nov 23 02:53:12 localhost systemd-journald[47422]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Nov 23 02:53:12 localhost systemd-journald[47422]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 23 02:53:12 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 02:53:12 localhost systemd-udevd[47426]: Using default interface naming scheme 'rhel-9.0'. Nov 23 02:53:12 localhost systemd[1]: Started Rule-based Manager for Device Events and Files. Nov 23 02:53:12 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 02:53:12 localhost systemd[1]: Reloading. Nov 23 02:53:12 localhost systemd-rc-local-generator[47913]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:53:12 localhost systemd-sysv-generator[47919]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:53:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:53:12 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 02:53:13 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 23 02:53:13 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 02:53:13 localhost systemd[1]: man-db-cache-update.service: Consumed 1.625s CPU time. Nov 23 02:53:13 localhost systemd[1]: run-ra4f383c11e6e4a8284beddac1781afa4.service: Deactivated successfully. Nov 23 02:53:13 localhost systemd[1]: run-r1713c5a8c61f44cfa03d4188d0c89f5b.service: Deactivated successfully. 
Nov 23 02:53:14 localhost python3[48448]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False Nov 23 02:53:15 localhost python3[48467]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 02:53:16 localhost python3[48485]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:53:16 localhost python3[48485]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json Nov 23 02:53:16 localhost python3[48485]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false Nov 23 02:53:23 localhost podman[48499]: 2025-11-23 07:53:16.329961537 +0000 UTC m=+0.045570912 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 23 02:53:23 localhost python3[48485]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect bac901955dcf7a32a493c6ef724c092009bbc18467858aa8c55e916b8c2b2b8f --format json Nov 23 02:53:23 localhost python3[48599]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:53:23 localhost python3[48599]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json Nov 23 02:53:24 localhost python3[48599]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false Nov 23 02:53:31 localhost podman[48612]: 2025-11-23 07:53:24.075098367 +0000 UTC m=+0.029596424 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Nov 23 02:53:31 localhost python3[48599]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 44feaf8d87c1d40487578230316b622680576d805efdb45dfeea6aad464b41f1 --format json Nov 23 02:53:31 localhost python3[48713]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 
'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:53:31 localhost python3[48713]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json Nov 23 02:53:31 localhost python3[48713]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false Nov 23 02:53:47 localhost podman[48726]: 2025-11-23 07:53:31.726446344 +0000 UTC m=+0.035053648 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 02:53:47 localhost python3[48713]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3a088c12511c977065fdc5f1594cba7b1a79f163578a6ffd0ac4a475b8e67938 --format json Nov 23 02:53:47 localhost podman[49142]: 2025-11-23 07:53:47.523041393 +0000 UTC m=+0.122499382 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, release=553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7) Nov 23 02:53:47 localhost podman[49142]: 2025-11-23 07:53:47.630959826 +0000 UTC m=+0.230417825 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, version=7, io.openshift.tags=rhceph ceph, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553) Nov 23 02:53:47 localhost python3[49184]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:53:47 localhost python3[49184]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json Nov 23 02:53:47 localhost python3[49184]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false Nov 23 02:54:00 localhost podman[49239]: 2025-11-23 07:53:47.889574647 +0000 UTC m=+0.031326252 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 23 02:54:00 localhost python3[49184]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 514d439186251360cf734cbc6d4a44c834664891872edf3798a653dfaacf10c0 --format json Nov 23 02:54:00 localhost python3[49486]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:54:00 localhost python3[49486]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json Nov 23 02:54:00 localhost python3[49486]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false Nov 23 02:54:08 localhost podman[49500]: 2025-11-23 07:54:00.830161009 +0000 UTC m=+0.046198022 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Nov 23 02:54:08 localhost python3[49486]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a9dd7a2ac6f35cb086249f87f74e2f8e74e7e2ad5141ce2228263be6faedce26 --format json Nov 23 02:54:09 localhost python3[49826]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:54:09 localhost python3[49826]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: 
/bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json Nov 23 02:54:09 localhost python3[49826]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false Nov 23 02:54:13 localhost podman[49838]: 2025-11-23 07:54:09.377405528 +0000 UTC m=+0.043257023 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Nov 23 02:54:13 localhost python3[49826]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 24976907b2c2553304119aba5731a800204d664feed24ca9eb7f2b4c7d81016b --format json Nov 23 02:54:13 localhost python3[49916]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:54:13 localhost python3[49916]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json Nov 23 02:54:14 localhost python3[49916]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false Nov 23 02:54:16 localhost podman[49927]: 2025-11-23 07:54:14.075113989 +0000 UTC m=+0.047210546 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Nov 23 02:54:16 localhost python3[49916]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 57163a7b21fdbb804a27897cb6e6052a5e5c7a339c45d663e80b52375a760dcf --format json Nov 23 02:54:16 localhost python3[50006]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:54:16 localhost python3[50006]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json Nov 23 02:54:16 localhost python3[50006]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false Nov 23 02:54:18 localhost podman[50018]: 2025-11-23 07:54:16.637274434 +0000 UTC m=+0.048930043 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Nov 23 02:54:18 localhost python3[50006]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 076d82a27d63c8328729ed27ceb4291585ae18d017befe6fe353df7aa11715ae --format json Nov 23 02:54:18 localhost python3[50096]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present 
executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:54:18 localhost python3[50096]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json Nov 23 02:54:19 localhost python3[50096]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false Nov 23 02:54:21 localhost podman[50109]: 2025-11-23 07:54:19.098385507 +0000 UTC m=+0.048177289 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Nov 23 02:54:21 localhost python3[50096]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d0dbcb95546840a8d088df044347a7877ad5ea45a2ddba0578e9bb5de4ab0da5 --format json Nov 23 02:54:21 localhost python3[50187]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:54:21 localhost python3[50187]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json Nov 23 02:54:21 localhost python3[50187]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false Nov 23 02:54:25 localhost podman[50199]: 2025-11-23 07:54:21.72718503 +0000 UTC m=+0.044769054 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 23 02:54:25 localhost python3[50187]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e6e981540e553415b2d6eda490d7683db07164af2e7a0af8245623900338a4d6 --format json Nov 23 02:54:26 localhost python3[50288]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 23 02:54:26 localhost python3[50288]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json Nov 23 02:54:26 localhost python3[50288]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false Nov 23 02:54:28 localhost podman[50301]: 2025-11-23 07:54:26.147817629 +0000 UTC m=+0.048425516 image pull 
registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Nov 23 02:54:28 localhost python3[50288]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 87ee88cbf01fb42e0b22747072843bcca6130a90eda4de6e74b3ccd847bb4040 --format json Nov 23 02:54:29 localhost python3[50378]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:54:30 localhost ansible-async_wrapper.py[50550]: Invoked with 713686786093 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884470.2580166-83801-209385319464135/AnsiballZ_command.py _ Nov 23 02:54:30 localhost ansible-async_wrapper.py[50553]: Starting module and watcher Nov 23 02:54:30 localhost ansible-async_wrapper.py[50553]: Start watching 50554 (3600) Nov 23 02:54:30 localhost ansible-async_wrapper.py[50554]: Start module (50554) Nov 23 02:54:30 localhost ansible-async_wrapper.py[50550]: Return async_wrapper task started. Nov 23 02:54:31 localhost python3[50574]: ansible-ansible.legacy.async_status Invoked with jid=713686786093.50550 mode=status _async_dir=/tmp/.ansible_async Nov 23 02:54:34 localhost puppet-user[50573]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 02:54:34 localhost puppet-user[50573]: (file: /etc/puppet/hiera.yaml) Nov 23 02:54:34 localhost puppet-user[50573]: Warning: Undefined variable '::deploy_config_name'; Nov 23 02:54:34 localhost puppet-user[50573]: (file & line not available) Nov 23 02:54:34 localhost puppet-user[50573]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 02:54:34 localhost puppet-user[50573]: (file & line not available) Nov 23 02:54:34 localhost puppet-user[50573]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Nov 23 02:54:34 localhost puppet-user[50573]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Nov 23 02:54:34 localhost puppet-user[50573]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.13 seconds Nov 23 02:54:34 localhost puppet-user[50573]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully Nov 23 02:54:34 localhost puppet-user[50573]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created Nov 23 02:54:35 localhost puppet-user[50573]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully Nov 23 02:54:35 localhost puppet-user[50573]: Notice: Applied catalog in 0.06 seconds Nov 23 02:54:35 localhost puppet-user[50573]: Application: Nov 23 02:54:35 localhost puppet-user[50573]: Initial environment: production Nov 23 02:54:35 localhost puppet-user[50573]: Converged environment: production Nov 23 02:54:35 localhost puppet-user[50573]: Run mode: user Nov 23 02:54:35 localhost puppet-user[50573]: Changes: Nov 23 02:54:35 localhost puppet-user[50573]: Total: 3 Nov 23 02:54:35 localhost puppet-user[50573]: Events: Nov 23 02:54:35 localhost puppet-user[50573]: Success: 3 Nov 23 02:54:35 localhost puppet-user[50573]: Total: 3 Nov 23 02:54:35 localhost puppet-user[50573]: Resources: Nov 23 02:54:35 localhost puppet-user[50573]: Changed: 3 Nov 23 02:54:35 localhost puppet-user[50573]: Out of sync: 3 Nov 23 02:54:35 localhost puppet-user[50573]: Total: 10 Nov 23 02:54:35 localhost puppet-user[50573]: Time: Nov 23 02:54:35 localhost puppet-user[50573]: Schedule: 0.00 Nov 23 02:54:35 localhost puppet-user[50573]: File: 0.00 Nov 23 02:54:35 localhost puppet-user[50573]: Exec: 0.02 Nov 23 02:54:35 localhost puppet-user[50573]: Augeas: 0.03 Nov 23 02:54:35 localhost puppet-user[50573]: Transaction evaluation: 0.06 Nov 23 02:54:35 localhost puppet-user[50573]: Catalog application: 0.06 Nov 23 02:54:35 localhost puppet-user[50573]: Config retrieval: 0.17 Nov 23 02:54:35 localhost puppet-user[50573]: Last run: 1763884475 Nov 23 02:54:35 localhost puppet-user[50573]: Filebucket: 0.00 Nov 23 02:54:35 localhost puppet-user[50573]: Total: 0.06 Nov 23 02:54:35 localhost puppet-user[50573]: Version: Nov 23 02:54:35 localhost puppet-user[50573]: Config: 1763884474 Nov 23 02:54:35 localhost puppet-user[50573]: Puppet: 7.10.0 Nov 23 02:54:35 localhost ansible-async_wrapper.py[50554]: Module complete (50554) Nov 23 02:54:35 localhost ansible-async_wrapper.py[50553]: Done in kid B. 
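The podman_image debug entries above repeat one cycle per OSP 17.1 image: list the tag with podman image ls --format json, pull it quietly with --tls-verify=false (matching validate_certs=False in each invocation), then inspect the image ID returned by the pull. Below is a minimal Python sketch of that same check/pull/inspect sequence, assuming podman is on PATH; pull_and_inspect is an illustrative helper, not the containers.podman.podman_image implementation.

#!/usr/bin/env python3
# Replay of the check/pull/inspect cycle that the PODMAN-IMAGE-DEBUG lines record.
# Hypothetical helper; not the containers.podman.podman_image module itself.
import json
import subprocess

def run(cmd):
    # Run a podman command, fail loudly on a non-zero exit, and return stdout.
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def pull_and_inspect(image, tls_verify=False):
    # Step 1: is the tag already present locally?
    local = json.loads(run(["podman", "image", "ls", image, "--format", "json"]) or "[]")
    # Step 2: quiet pull; --tls-verify=false mirrors validate_certs=False in the log.
    image_id = run(["podman", "pull", image, "-q",
                    "--tls-verify=" + str(tls_verify).lower()]).strip()
    # Step 3: inspect the pulled ID, as the module does before reporting its result.
    details = json.loads(run(["podman", "inspect", image_id, "--format", "json"]))
    return {"was_present": bool(local), "id": image_id, "inspect": details}

if __name__ == "__main__":
    print(pull_and_inspect("registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1"))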
Nov 23 02:54:41 localhost python3[51043]: ansible-ansible.legacy.async_status Invoked with jid=713686786093.50550 mode=status _async_dir=/tmp/.ansible_async Nov 23 02:54:42 localhost python3[51059]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 02:54:42 localhost python3[51075]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:54:42 localhost python3[51123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:54:43 localhost python3[51166]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884482.636073-84086-79449993022619/source _original_basename=tmp8tfqzmh_ follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 02:54:43 localhost python3[51196]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:54:44 localhost python3[51299]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Nov 23 02:54:45 localhost python3[51318]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 02:54:45 localhost python3[51334]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005532584 step=1 update_config_hash_only=False Nov 23 02:54:46 localhost python3[51350]: ansible-file Invoked with 
path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:54:47 localhost python3[51504]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Nov 23 02:54:47 localhost python3[51529]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Nov 23 02:54:48 localhost python3[51569]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False Nov 23 02:54:49 localhost podman[51762]: 2025-11-23 07:54:49.02716839 +0000 UTC m=+0.078980812 container create e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, container_name=container-puppet-collectd, com.redhat.component=openstack-collectd-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.) Nov 23 02:54:49 localhost podman[51749]: 2025-11-23 07:54:49.035685087 +0000 UTC m=+0.109452129 container create 656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, managed_by=tripleo_ansible, container_name=container-puppet-crond, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 02:54:49 localhost podman[51776]: 2025-11-23 07:54:49.047067835 +0000 UTC m=+0.086498088 container create 20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=container-puppet-nova_libvirt, name=rhosp17/openstack-nova-libvirt, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 02:54:49 localhost podman[51748]: 2025-11-23 07:54:48.957071638 +0000 UTC m=+0.037765687 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 23 02:54:49 localhost systemd[1]: Started libpod-conmon-e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f.scope. Nov 23 02:54:49 localhost systemd[1]: Started libpod-conmon-20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a.scope. 
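Every container create event in this step carries the same config_data shape: a container-puppet.sh entrypoint, a STEP/PUPPET_TAGS/STEP_CONFIG environment, user 0, host networking, label=disable, and a fixed set of mostly read-only bind mounts. The sketch below shows roughly how such a dict could be turned into a podman command line; build_podman_args is a hypothetical helper covering only a subset of the keys visible in these entries, not the tripleo_container_manage module itself.

# Translate a container-puppet config_data dict (as logged above) into a podman
# command line. Illustrative only; detach/recreate handling is deliberately omitted.
def build_podman_args(name, cfg):
    args = ["podman", "create", "--name", name]
    for opt in cfg.get("security_opt", []):
        args += ["--security-opt", opt]              # 'label=disable' in every entry here
    if "user" in cfg:
        args += ["--user", str(cfg["user"])]         # user: 0 -> run as root
    for net in cfg.get("net", []):
        args += ["--net", net]                       # 'host' for all container-puppet runs
    for key, value in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]          # STEP, PUPPET_TAGS, STEP_CONFIG, ...
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]                 # bind mounts, mostly :ro
    if "entrypoint" in cfg:
        args += ["--entrypoint", cfg["entrypoint"]]  # container-puppet.sh
    args.append(cfg["image"])
    return args

# Abridged example built from the container-puppet-crond entry above.
example = {
    "security_opt": ["label=disable"],
    "user": 0,
    "net": ["host"],
    "environment": {"STEP": 6, "NAME": "crond"},
    "entrypoint": "/var/lib/container-puppet/container-puppet.sh",
    "image": "registry.redhat.io/rhosp-rhel9/openstack-cron:17.1",
    "volumes": ["/etc/puppet:/tmp/puppet-etc:ro"],
}
print(" ".join(build_podman_args("container-puppet-crond", example)))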
Nov 23 02:54:49 localhost podman[51749]: 2025-11-23 07:54:48.990917701 +0000 UTC m=+0.064684773 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Nov 23 02:54:49 localhost systemd[1]: Started libcrun container. Nov 23 02:54:49 localhost podman[51776]: 2025-11-23 07:54:48.996139105 +0000 UTC m=+0.035569368 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 02:54:49 localhost podman[51762]: 2025-11-23 07:54:48.999406148 +0000 UTC m=+0.051218590 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Nov 23 02:54:49 localhost podman[51774]: 2025-11-23 07:54:49.000053049 +0000 UTC m=+0.040339499 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Nov 23 02:54:49 localhost systemd[1]: Started libcrun container. Nov 23 02:54:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7bd885aaea36c7d0396504cf30e2cbd3831af3abfb23178a603b6dde37fd5f/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:49 localhost podman[51748]: 2025-11-23 07:54:49.103487477 +0000 UTC m=+0.184181526 container create 8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_puppet_step1, io.openshift.expose-services=, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 02:54:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf48a48241a85605ccaaed1dc6be1d7729c38cf732d2c362c3403cf7e38c508/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:49 localhost systemd[1]: Started libpod-conmon-656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb.scope. Nov 23 02:54:49 localhost podman[51762]: 2025-11-23 07:54:49.115491044 +0000 UTC m=+0.167303476 container init e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, release=1761123044, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=container-puppet-collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, distribution-scope=public, managed_by=tripleo_ansible) Nov 23 
02:54:49 localhost systemd[1]: Started libpod-conmon-8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec.scope. Nov 23 02:54:49 localhost systemd[1]: Started libcrun container. Nov 23 02:54:49 localhost podman[51762]: 2025-11-23 07:54:49.126071696 +0000 UTC m=+0.177884128 container start e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, tcib_managed=true) Nov 23 02:54:49 localhost podman[51762]: 2025-11-23 07:54:49.126458739 +0000 UTC m=+0.178271201 container attach e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 
'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_puppet_step1, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=container-puppet-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 02:54:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f6f274beb3204479b30adda5eae6174b870ceb64a77b37313304db216ec22a/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:49 localhost systemd[1]: Started libcrun container. 
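Each container-puppet container then goes through the podman lifecycle recorded above: create, init, start, and attach, with systemd starting a matching libpod-conmon scope and the kernel noting the xfs remount of the overlay's config-data mount. The following sketch groups those podman events by container name from journal lines shaped like the ones in this log; the regular expression and the lifecycle helper are illustrative assumptions, not part of any shipped tool.

# Group podman lifecycle events by container name from journal lines shaped like
# the ones above. Illustrative parser; layout assumptions are noted inline.
import re
from collections import defaultdict

# Matches e.g.:
#   ... podman[51762]: 2025-11-23 07:54:49.126071696 +0000 UTC m=+0.177884128
#   container start e89c4e82...311f (image=..., name=container-puppet-collectd, ...)
EVENT_RE = re.compile(
    r"podman\[\d+\]: (?P<ts>\S+ \S+) \S+ UTC m=\+\S+ "
    r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
    r"\(.*?(?<![\w.-])name=(?P<name>[^,)]+)"
)

def lifecycle(journal_lines):
    events = defaultdict(list)
    for line in journal_lines:
        match = EVENT_RE.search(line)
        if match:
            events[match.group("name")].append((match.group("ts"), match.group("event")))
    return dict(events)

sample = ("Nov 23 02:54:49 localhost podman[51762]: 2025-11-23 07:54:49.126071696 "
          "+0000 UTC m=+0.177884128 container start "
          "e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f "
          "(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, "
          "name=container-puppet-collectd, tcib_managed=true)")
print(lifecycle([sample]))  # {'container-puppet-collectd': [('2025-11-23 07:54:49.126071696', 'start')]}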
Nov 23 02:54:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89da295cc978178bcac81850ec3c33a266a0d96aba327524b68608394214d41a/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:49 localhost podman[51749]: 2025-11-23 07:54:49.132484318 +0000 UTC m=+0.206251370 container init 656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_puppet_step1, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Nov 23 02:54:49 localhost podman[51748]: 2025-11-23 07:54:49.141974026 +0000 UTC m=+0.222668135 container init 8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, 
architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, container_name=container-puppet-metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Nov 23 02:54:49 localhost podman[51748]: 2025-11-23 07:54:49.149667208 +0000 UTC m=+0.230361297 container start 8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, container_name=container-puppet-metrics_qdr, build-date=2025-11-18T22:49:46Z, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, config_id=tripleo_puppet_step1) Nov 23 02:54:49 localhost podman[51748]: 2025-11-23 07:54:49.15008369 +0000 UTC m=+0.230777769 container attach 8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, build-date=2025-11-18T22:49:46Z, vcs-type=git, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, container_name=container-puppet-metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 02:54:49 localhost podman[51774]: 2025-11-23 07:54:49.701867982 +0000 UTC m=+0.742154422 container create 4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T23:44:13Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=container-puppet-iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Nov 23 02:54:50 localhost podman[51776]: 2025-11-23 07:54:50.078946107 +0000 UTC m=+1.118376410 container init 20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, 
container_name=container-puppet-nova_libvirt) Nov 23 02:54:50 localhost systemd[1]: Started libpod-conmon-4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442.scope. Nov 23 02:54:50 localhost podman[51776]: 2025-11-23 07:54:50.097013774 +0000 UTC m=+1.136444057 container start 20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_puppet_step1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, 
maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z) Nov 23 02:54:50 localhost podman[51776]: 2025-11-23 07:54:50.097345924 +0000 UTC m=+1.136776207 container attach 20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, container_name=container-puppet-nova_libvirt, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, 
batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt) Nov 23 02:54:50 localhost systemd[1]: Started libcrun container. Nov 23 02:54:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a78c4dbe79ec91dd17b016c5deea447203862889bb5c5908b4f002506791c6/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a78c4dbe79ec91dd17b016c5deea447203862889bb5c5908b4f002506791c6/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:50 localhost podman[51774]: 2025-11-23 07:54:50.1249227 +0000 UTC m=+1.165209150 container init 4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, container_name=container-puppet-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1) Nov 23 02:54:50 localhost podman[51774]: 2025-11-23 07:54:50.13860756 +0000 UTC m=+1.178894000 container start 
4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=container-puppet-iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_puppet_step1, version=17.1.12, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 02:54:50 localhost podman[51774]: 2025-11-23 07:54:50.139319593 +0000 UTC m=+1.179606043 container attach 4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, url=https://www.redhat.com, vcs-type=git, container_name=container-puppet-iscsid, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 02:54:50 localhost podman[51749]: 2025-11-23 07:54:50.1500605 +0000 UTC m=+1.223827572 container start 656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-crond, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 
'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, vcs-type=git, architecture=x86_64) Nov 23 02:54:50 localhost podman[51749]: 2025-11-23 07:54:50.15039211 +0000 UTC m=+1.224159252 container attach 656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, container_name=container-puppet-crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true) Nov 23 02:54:51 localhost podman[51634]: 2025-11-23 07:54:48.861284099 +0000 UTC m=+0.055440552 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Nov 23 02:54:51 localhost podman[52053]: 2025-11-23 07:54:51.56265927 +0000 UTC m=+0.079937052 container create 6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, container_name=container-puppet-ceilometer, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, name=rhosp17/openstack-ceilometer-central, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-19T00:11:59Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_puppet_step1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-central-container, io.k8s.description=Red 
Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git) Nov 23 02:54:51 localhost systemd[1]: Started libpod-conmon-6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb.scope. Nov 23 02:54:51 localhost systemd[1]: Started libcrun container. Nov 23 02:54:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610a23939626ba33bbc4ff5fdde23dbb8b9d397a6884923c98e37f79be82869c/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:51 localhost podman[52053]: 2025-11-23 07:54:51.514570279 +0000 UTC m=+0.031848131 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Nov 23 02:54:51 localhost podman[52053]: 2025-11-23 07:54:51.617319377 +0000 UTC m=+0.134597209 container init 6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:59Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, distribution-scope=public, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, name=rhosp17/openstack-ceilometer-central, container_name=container-puppet-ceilometer, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, 
vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-central-container) Nov 23 02:54:51 localhost podman[52053]: 2025-11-23 07:54:51.632219975 +0000 UTC m=+0.149497747 container start 6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-central, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=container-puppet-ceilometer, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_puppet_step1, architecture=x86_64, name=rhosp17/openstack-ceilometer-central, batch=17.1_20251118.1, build-date=2025-11-19T00:11:59Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-central-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central) Nov 23 02:54:51 localhost podman[52053]: 2025-11-23 07:54:51.632417371 +0000 UTC m=+0.149695173 container attach 6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 
17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-central-container, build-date=2025-11-19T00:11:59Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20251118.1, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-central, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=container-puppet-ceilometer, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 02:54:51 localhost ovs-vsctl[52280]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Nov 23 02:54:51 localhost puppet-user[51939]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 02:54:51 localhost puppet-user[51939]: (file: /etc/puppet/hiera.yaml) Nov 23 02:54:51 localhost puppet-user[51939]: Warning: Undefined variable '::deploy_config_name'; Nov 23 02:54:51 localhost puppet-user[51939]: (file & line not available) Nov 23 02:54:51 localhost puppet-user[51933]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Nov 23 02:54:51 localhost puppet-user[51933]: (file: /etc/puppet/hiera.yaml) Nov 23 02:54:51 localhost puppet-user[51933]: Warning: Undefined variable '::deploy_config_name'; Nov 23 02:54:51 localhost puppet-user[51933]: (file & line not available) Nov 23 02:54:51 localhost puppet-user[51939]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 02:54:51 localhost puppet-user[51939]: (file & line not available) Nov 23 02:54:51 localhost puppet-user[51935]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 02:54:51 localhost puppet-user[51935]: (file: /etc/puppet/hiera.yaml) Nov 23 02:54:51 localhost puppet-user[51935]: Warning: Undefined variable '::deploy_config_name'; Nov 23 02:54:51 localhost puppet-user[51935]: (file & line not available) Nov 23 02:54:51 localhost puppet-user[51933]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 02:54:51 localhost puppet-user[51933]: (file & line not available) Nov 23 02:54:51 localhost puppet-user[51975]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 02:54:51 localhost puppet-user[51975]: (file: /etc/puppet/hiera.yaml) Nov 23 02:54:51 localhost puppet-user[51975]: Warning: Undefined variable '::deploy_config_name'; Nov 23 02:54:51 localhost puppet-user[51975]: (file & line not available) Nov 23 02:54:51 localhost puppet-user[51939]: Notice: Accepting previously invalid value for target type 'Integer' Nov 23 02:54:51 localhost puppet-user[51935]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 02:54:51 localhost puppet-user[51935]: (file & line not available) Nov 23 02:54:51 localhost puppet-user[51933]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.07 seconds Nov 23 02:54:51 localhost puppet-user[51975]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 02:54:51 localhost puppet-user[51975]: (file & line not available) Nov 23 02:54:52 localhost puppet-user[51939]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.12 seconds Nov 23 02:54:52 localhost puppet-user[51933]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0' Nov 23 02:54:52 localhost puppet-user[51933]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created Nov 23 02:54:52 localhost puppet-user[51975]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.09 seconds Nov 23 02:54:52 localhost puppet-user[51964]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Nov 23 02:54:52 localhost puppet-user[51964]: (file: /etc/puppet/hiera.yaml) Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Undefined variable '::deploy_config_name'; Nov 23 02:54:52 localhost puppet-user[51964]: (file & line not available) Nov 23 02:54:52 localhost puppet-user[51939]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Nov 23 02:54:52 localhost puppet-user[51939]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root' Nov 23 02:54:52 localhost puppet-user[51939]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Nov 23 02:54:52 localhost puppet-user[51939]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Nov 23 02:54:52 localhost puppet-user[51933]: Notice: Applied catalog in 0.04 seconds Nov 23 02:54:52 localhost puppet-user[51939]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}7095de99f08358bf703305381024ab15cfb0fe9988eba2f49a4f47d71e8e2547' Nov 23 02:54:52 localhost puppet-user[51933]: Application: Nov 23 02:54:52 localhost puppet-user[51933]: Initial environment: production Nov 23 02:54:52 localhost puppet-user[51933]: Converged environment: production Nov 23 02:54:52 localhost puppet-user[51933]: Run mode: user Nov 23 02:54:52 localhost puppet-user[51933]: Changes: Nov 23 02:54:52 localhost puppet-user[51933]: Total: 2 Nov 23 02:54:52 localhost puppet-user[51933]: Events: Nov 23 02:54:52 localhost puppet-user[51933]: Success: 2 Nov 23 02:54:52 localhost puppet-user[51933]: Total: 2 Nov 23 02:54:52 localhost puppet-user[51933]: Resources: Nov 23 02:54:52 localhost puppet-user[51933]: Changed: 2 Nov 23 02:54:52 localhost puppet-user[51933]: Out of sync: 2 Nov 23 02:54:52 localhost puppet-user[51933]: Skipped: 7 Nov 23 02:54:52 localhost puppet-user[51933]: Total: 9 Nov 23 02:54:52 localhost puppet-user[51933]: Time: Nov 23 02:54:52 localhost puppet-user[51933]: File: 0.01 Nov 23 02:54:52 localhost puppet-user[51933]: Cron: 0.01 Nov 23 02:54:52 localhost puppet-user[51933]: Transaction evaluation: 0.04 Nov 23 02:54:52 localhost puppet-user[51933]: Catalog application: 0.04 Nov 23 02:54:52 localhost puppet-user[51933]: Config retrieval: 0.10 Nov 23 02:54:52 localhost puppet-user[51933]: Last run: 1763884492 Nov 23 02:54:52 localhost puppet-user[51933]: Total: 0.04 Nov 23 02:54:52 localhost puppet-user[51933]: Version: Nov 23 02:54:52 localhost puppet-user[51933]: Config: 1763884491 Nov 23 02:54:52 localhost puppet-user[51933]: Puppet: 7.10.0 Nov 23 02:54:52 localhost puppet-user[51939]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Nov 23 02:54:52 localhost puppet-user[51939]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Nov 23 02:54:52 localhost puppet-user[51939]: Notice: Applied catalog in 0.03 seconds Nov 23 02:54:52 localhost puppet-user[51939]: Application: Nov 23 02:54:52 localhost puppet-user[51939]: Initial environment: production Nov 23 02:54:52 localhost puppet-user[51939]: Converged environment: production Nov 23 02:54:52 localhost puppet-user[51939]: Run mode: user Nov 23 02:54:52 localhost puppet-user[51939]: Changes: Nov 23 02:54:52 localhost puppet-user[51939]: Total: 7 Nov 23 02:54:52 localhost puppet-user[51939]: Events: Nov 23 02:54:52 localhost 
puppet-user[51939]: Success: 7 Nov 23 02:54:52 localhost puppet-user[51939]: Total: 7 Nov 23 02:54:52 localhost puppet-user[51939]: Resources: Nov 23 02:54:52 localhost puppet-user[51939]: Skipped: 13 Nov 23 02:54:52 localhost puppet-user[51939]: Changed: 5 Nov 23 02:54:52 localhost puppet-user[51939]: Out of sync: 5 Nov 23 02:54:52 localhost puppet-user[51939]: Total: 20 Nov 23 02:54:52 localhost puppet-user[51939]: Time: Nov 23 02:54:52 localhost puppet-user[51939]: File: 0.01 Nov 23 02:54:52 localhost puppet-user[51939]: Transaction evaluation: 0.03 Nov 23 02:54:52 localhost puppet-user[51939]: Catalog application: 0.03 Nov 23 02:54:52 localhost puppet-user[51939]: Config retrieval: 0.15 Nov 23 02:54:52 localhost puppet-user[51939]: Last run: 1763884492 Nov 23 02:54:52 localhost puppet-user[51939]: Total: 0.03 Nov 23 02:54:52 localhost puppet-user[51939]: Version: Nov 23 02:54:52 localhost puppet-user[51939]: Config: 1763884491 Nov 23 02:54:52 localhost puppet-user[51939]: Puppet: 7.10.0 Nov 23 02:54:52 localhost puppet-user[51964]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 02:54:52 localhost puppet-user[51964]: (file & line not available) Nov 23 02:54:52 localhost puppet-user[51975]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Nov 23 02:54:52 localhost puppet-user[51975]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Nov 23 02:54:52 localhost puppet-user[51975]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Nov 23 02:54:52 localhost puppet-user[51964]: in a future release. Use nova::cinder::os_region_name instead Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Nov 23 02:54:52 localhost puppet-user[51964]: in a future release. Use nova::cinder::catalog_info instead Nov 23 02:54:52 localhost puppet-user[51935]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.37 seconds Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Nov 23 02:54:52 localhost systemd[1]: libpod-8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec.scope: Deactivated successfully. Nov 23 02:54:52 localhost systemd[1]: libpod-8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec.scope: Consumed 2.185s CPU time. Nov 23 02:54:52 localhost systemd[1]: libpod-656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb.scope: Deactivated successfully. 
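[editor's note] The puppet-user processes above keep repeating the same two deprecations (hiera.yaml version 3 and the hiera() function) alongside their run reports. A minimal sketch, assuming the journal is available as plain text in the same form as this excerpt, of tallying those warnings per puppet-user PID; the function name and file path are illustrative, not part of the deployment tooling:

    import re
    from collections import Counter

    # Journal events start with "<Mon> <day> HH:MM:SS <host> "; several events can
    # share one physical line, as in the excerpt above, so split on the stamp first.
    STAMP_RE = re.compile(r"[A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2} \S+ ")
    WARN_RE = re.compile(r"puppet-user\[(\d+)\]: Warning: (.+)", re.S)

    def tally_puppet_warnings(text: str) -> Counter:
        """Count Warning events per puppet-user PID, keyed by the warning's first words."""
        counts: Counter = Counter()
        for event in STAMP_RE.split(text):
            m = WARN_RE.match(event)
            if m:
                pid, message = m.group(1), " ".join(m.group(2).split()[:6])
                counts[(pid, message)] += 1
        return counts

    # Usage (file path is hypothetical):
    # text = open("journal.txt").read()
    # for (pid, msg), n in tally_puppet_warnings(text).most_common():
    #     print(f"{n:3d}  puppet-user[{pid}]  {msg} ...")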
Nov 23 02:54:52 localhost podman[51748]: 2025-11-23 07:54:52.421193177 +0000 UTC m=+3.501887236 container died 8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_puppet_step1, container_name=container-puppet-metrics_qdr, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1) Nov 23 02:54:52 localhost podman[51749]: 2025-11-23 07:54:52.422176827 +0000 UTC m=+3.495943909 container died 656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_puppet_step1, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond) Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Nov 23 02:54:52 localhost systemd[1]: libpod-656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb.scope: Consumed 2.187s CPU time. Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. 
(file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Nov 23 02:54:52 localhost puppet-user[51975]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Nov 23 02:54:52 localhost puppet-user[51975]: Notice: Applied catalog in 0.46 seconds Nov 23 02:54:52 localhost puppet-user[51975]: Application: Nov 23 02:54:52 localhost puppet-user[51975]: Initial environment: production Nov 23 02:54:52 localhost puppet-user[51975]: Converged environment: production Nov 23 02:54:52 localhost puppet-user[51975]: Run mode: user Nov 23 02:54:52 localhost puppet-user[51975]: Changes: Nov 23 02:54:52 localhost puppet-user[51975]: Total: 4 Nov 23 02:54:52 localhost puppet-user[51975]: Events: Nov 23 02:54:52 localhost puppet-user[51975]: Success: 4 Nov 23 02:54:52 localhost puppet-user[51975]: Total: 4 Nov 23 02:54:52 localhost puppet-user[51975]: Resources: Nov 23 02:54:52 localhost puppet-user[51975]: Changed: 4 Nov 23 02:54:52 localhost puppet-user[51975]: Out of sync: 4 Nov 23 02:54:52 localhost puppet-user[51975]: Skipped: 8 Nov 23 02:54:52 localhost puppet-user[51975]: Total: 13 Nov 23 02:54:52 localhost puppet-user[51975]: Time: Nov 23 02:54:52 localhost puppet-user[51975]: File: 0.00 Nov 23 02:54:52 localhost puppet-user[51975]: Exec: 0.07 Nov 23 02:54:52 localhost puppet-user[51975]: Config retrieval: 0.12 Nov 23 02:54:52 localhost puppet-user[51975]: Augeas: 0.37 Nov 23 02:54:52 localhost puppet-user[51975]: Transaction evaluation: 0.45 Nov 23 02:54:52 localhost puppet-user[51975]: Catalog application: 0.46 Nov 23 02:54:52 localhost puppet-user[51975]: Last run: 1763884492 Nov 23 02:54:52 localhost puppet-user[51975]: Total: 0.46 Nov 23 02:54:52 localhost puppet-user[51975]: Version: Nov 23 02:54:52 localhost puppet-user[51975]: Config: 1763884491 Nov 23 02:54:52 localhost puppet-user[51975]: Puppet: 7.10.0 Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set. Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance Nov 23 02:54:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec-userdata-shm.mount: Deactivated successfully. Nov 23 02:54:52 localhost systemd[1]: var-lib-containers-storage-overlay-89da295cc978178bcac81850ec3c33a266a0d96aba327524b68608394214d41a-merged.mount: Deactivated successfully. 
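[editor's note] The podman create/init/start/attach/died events above embed each container's config_data as a printed Python dict (entrypoint, environment with PUPPET_TAGS and STEP_CONFIG, volumes). A minimal sketch of recovering that payload from a captured journal line; the helper name is illustrative and not part of tripleo_ansible:

    import ast
    from typing import Optional

    def extract_config_data(event_text: str) -> Optional[dict]:
        """Return the config_data={...} payload embedded in a podman event, if present."""
        marker = "config_data="
        start = event_text.find(marker)
        if start == -1:
            return None
        depth, begin = 0, start + len(marker)
        for pos in range(begin, len(event_text)):
            ch = event_text[pos]
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # The payload is printed as a Python literal, so literal_eval is enough.
                    return ast.literal_eval(event_text[begin : pos + 1])
        return None

    # Usage against one of the container-puppet-iscsid lines above:
    # cfg = extract_config_data(line)
    # if cfg:
    #     print(cfg["environment"]["PUPPET_TAGS"])   # file,file_line,...,iscsid_config
    #     print(cfg["environment"]["STEP_CONFIG"])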
Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Nov 23 02:54:52 localhost podman[52473]: 2025-11-23 07:54:52.574852853 +0000 UTC m=+0.137253892 container cleanup 8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec 
(image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, url=https://www.redhat.com) Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Nov 23 02:54:52 localhost systemd[1]: libpod-conmon-8e342098f4aa036d8b0453aec18c4b9f094cb2c768c353c3511ff40cf23d31ec.scope: Deactivated successfully. 
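The cleanup record above shows the labels tripleo_ansible attaches to every one-shot puppet container (config_id=tripleo_puppet_step1, container_name, managed_by=tripleo_ansible). A hedged sketch that lists containers by those labels; because the container-puppet-* containers are removed right after the cleanup event, the query may legitimately come back empty.

# List any containers still carrying the TripleO puppet-step labels seen above.
podman ps -a \
  --filter label=config_id=tripleo_puppet_step1 \
  --filter label=managed_by=tripleo_ansible \
  --format '{{.Names}}\t{{.Status}}'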
Nov 23 02:54:52 localhost podman[52472]: 2025-11-23 07:54:52.580732558 +0000 UTC m=+0.145160651 container cleanup 656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T22:49:32Z, tcib_managed=true, config_id=tripleo_puppet_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=container-puppet-crond) Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Nov 23 02:54:52 localhost systemd[1]: libpod-conmon-656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb.scope: Deactivated successfully. 
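The collectd notices in this stretch describe a concrete end state: /etc/collectd.d owned by root:root with mode 0750, the 90-default-plugins-*.conf stubs removed, and per-plugin *.load files (python.load, logfile.load, amqp1.load, ...) written in their place. A quick spot-check sketch, assuming the host's /etc/collectd.d is the tree those File resources manage:

# Verify the directory ownership/mode change logged above.
stat -c '%U:%G %a %n' /etc/collectd.d

# The per-plugin load files puppet defined, and the removed default stubs.
ls /etc/collectd.d/*.load
ls /etc/collectd.d/90-default-plugins-*.conf 2>/dev/null \
  || echo 'default plugin stubs removed, as logged'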
Nov 23 02:54:52 localhost python3[51569]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532584 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}dee3f10cb1ff461ac3f1e743a5ef3f06993398c6c829895de1dae7f242a64b39' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: 
/Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62' Nov 23 02:54:52 localhost puppet-user[51935]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed Nov 23 02:54:52 localhost puppet-user[51935]: Notice: Applied catalog in 0.31 seconds Nov 23 02:54:52 localhost puppet-user[51935]: Application: Nov 23 02:54:52 localhost puppet-user[51935]: Initial environment: production Nov 23 02:54:52 localhost puppet-user[51935]: Converged environment: production Nov 23 02:54:52 localhost puppet-user[51935]: Run mode: user Nov 23 02:54:52 localhost puppet-user[51935]: Changes: Nov 23 02:54:52 localhost puppet-user[51935]: Total: 43 Nov 23 02:54:52 localhost puppet-user[51935]: Events: Nov 23 02:54:52 localhost puppet-user[51935]: Success: 43 Nov 23 02:54:52 localhost puppet-user[51935]: Total: 43 Nov 23 02:54:52 localhost puppet-user[51935]: Resources: Nov 23 02:54:52 localhost puppet-user[51935]: Skipped: 14 Nov 23 02:54:52 localhost puppet-user[51935]: Changed: 38 Nov 23 02:54:52 localhost puppet-user[51935]: Out of sync: 38 Nov 23 02:54:52 localhost puppet-user[51935]: Total: 82 
Nov 23 02:54:52 localhost puppet-user[51935]: Time: Nov 23 02:54:52 localhost puppet-user[51935]: Concat file: 0.00 Nov 23 02:54:52 localhost puppet-user[51935]: Concat fragment: 0.00 Nov 23 02:54:52 localhost puppet-user[51935]: File: 0.13 Nov 23 02:54:52 localhost puppet-user[51935]: Transaction evaluation: 0.30 Nov 23 02:54:52 localhost puppet-user[51935]: Catalog application: 0.31 Nov 23 02:54:52 localhost puppet-user[51935]: Config retrieval: 0.45 Nov 23 02:54:52 localhost puppet-user[51935]: Last run: 1763884492 Nov 23 02:54:52 localhost puppet-user[51935]: Total: 0.31 Nov 23 02:54:52 localhost puppet-user[51935]: Version: Nov 23 02:54:52 localhost puppet-user[51935]: Config: 1763884491 Nov 23 02:54:52 localhost puppet-user[51935]: Puppet: 7.10.0 Nov 23 02:54:52 localhost python3[51569]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532584 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume 
/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 23 02:54:52 localhost puppet-user[51964]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. Nov 23 02:54:52 localhost systemd[1]: libpod-4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442.scope: Deactivated successfully. Nov 23 02:54:52 localhost systemd[1]: libpod-4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442.scope: Consumed 2.561s CPU time. Nov 23 02:54:52 localhost podman[52764]: 2025-11-23 07:54:52.976890491 +0000 UTC m=+0.051096026 container died 4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-iscsid-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_puppet_step1, name=rhosp17/openstack-iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=container-puppet-iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com) Nov 23 02:54:53 localhost systemd[1]: var-lib-containers-storage-overlay-01a78c4dbe79ec91dd17b016c5deea447203862889bb5c5908b4f002506791c6-merged.mount: Deactivated successfully. Nov 23 02:54:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442-userdata-shm.mount: Deactivated successfully. Nov 23 02:54:53 localhost systemd[1]: var-lib-containers-storage-overlay-c5f6f274beb3204479b30adda5eae6174b870ceb64a77b37313304db216ec22a-merged.mount: Deactivated successfully. Nov 23 02:54:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-656794b2420cdc8e7dd034fa0960809fae37a3927adf0b59ae4e0d75c71be8fb-userdata-shm.mount: Deactivated successfully. Nov 23 02:54:53 localhost podman[52764]: 2025-11-23 07:54:53.038445954 +0000 UTC m=+0.112651469 container cleanup 4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 02:54:53 localhost systemd[1]: libpod-conmon-4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442.scope: Deactivated successfully. Nov 23 02:54:53 localhost python3[51569]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532584 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Nov 23 02:54:53 localhost systemd[1]: libpod-e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f.scope: Deactivated successfully. 
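In the podman command lines logged by ansible-tripleo_container_manage (such as the container-puppet-iscsid run just above), #012 is the syslog escape for an embedded newline (octal 012 = LF), so --env STEP_CONFIG actually carries a small multi-line Puppet snippet. A sketch that decodes the value exactly as logged:

# Decode the escaped STEP_CONFIG from the logged command line.
step_config='include ::tripleo::packages#012include tripleo::profile::base::iscsid#012'
printf '%b' "${step_config//#012/\\n}"
# -> include ::tripleo::packages
#    include tripleo::profile::base::iscsid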
Nov 23 02:54:53 localhost podman[52773]: 2025-11-23 07:54:53.068370215 +0000 UTC m=+0.113084064 container create ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, container_name=container-puppet-rsyslog, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 02:54:53 localhost systemd[1]: libpod-e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f.scope: Consumed 2.770s CPU time. Nov 23 02:54:53 localhost systemd[1]: Started libpod-conmon-ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066.scope. Nov 23 02:54:53 localhost systemd[1]: Started libcrun container. 
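The rsyslog config container has just been created and started here. Its --log-opt path is not quoted in this excerpt, but the other container-puppet-* runs above log to /var/log/containers/stdouts/container-puppet-<name>.log, so assuming the same convention its puppet output can be followed with:

# Path is an assumption based on the --log-opt convention of the other
# container-puppet-* runs in this log.
tail -n 50 /var/log/containers/stdouts/container-puppet-rsyslog.log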
Nov 23 02:54:53 localhost podman[52773]: 2025-11-23 07:54:52.999100289 +0000 UTC m=+0.043814138 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Nov 23 02:54:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e546b43998c4baeffb211550b485789cc63029e7ea531c7f56d56436d5a4e45a/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:53 localhost podman[52773]: 2025-11-23 07:54:53.112303904 +0000 UTC m=+0.157017763 container init ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, container_name=container-puppet-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, config_id=tripleo_puppet_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Nov 23 02:54:53 localhost podman[52773]: 2025-11-23 07:54:53.123385882 +0000 UTC m=+0.168099751 container start ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, maintainer=OpenStack TripleO Team, 
version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, name=rhosp17/openstack-rsyslog, container_name=container-puppet-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4) Nov 23 02:54:53 localhost podman[52773]: 2025-11-23 07:54:53.12364546 +0000 UTC m=+0.168359319 container attach ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=container-puppet-rsyslog, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:49Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_puppet_step1, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044) Nov 23 02:54:53 localhost podman[51762]: 2025-11-23 07:54:53.124541969 +0000 UTC m=+4.176354411 container died e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-collectd-container, 
managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, container_name=container-puppet-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1) Nov 23 02:54:53 localhost podman[52854]: 2025-11-23 07:54:53.209186217 +0000 UTC m=+0.124627215 container cleanup e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-collectd, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1) Nov 23 02:54:53 localhost systemd[1]: libpod-conmon-e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f.scope: Deactivated successfully. Nov 23 02:54:53 localhost python3[51569]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532584 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume 
/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Nov 23 02:54:53 localhost podman[52832]: 2025-11-23 07:54:53.158133984 +0000 UTC m=+0.089518773 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 23 02:54:53 localhost podman[52832]: 2025-11-23 07:54:53.263101941 +0000 UTC m=+0.194486750 container create e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_id=tripleo_puppet_step1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-ovn_controller, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 02:54:53 localhost systemd[1]: Started libpod-conmon-e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f.scope. Nov 23 02:54:53 localhost systemd[1]: Started libcrun container. 
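The ovn_controller puppet container started above runs with PUPPET_TAGS including vs_config and bind-mounts /run/openvswitch, i.e. it is expected to write settings into the local Open vSwitch database. A hedged check on the host after the run completes; the keys usually involved on an ovn-controller node (ovn-remote, ovn-encap-type, ovn-encap-ip) are typical values, not ones quoted from this log:

# Dump whatever external_ids the vs_config resources left in the local OVS DB.
ovs-vsctl get Open_vSwitch . external_ids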
Nov 23 02:54:53 localhost puppet-user[51964]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 1.28 seconds Nov 23 02:54:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f21f97cd586034c01317a093c3b8fe15f0b3ee8df18d3620dc21eab57815fe/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f21f97cd586034c01317a093c3b8fe15f0b3ee8df18d3620dc21eab57815fe/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:53 localhost podman[52832]: 2025-11-23 07:54:53.372066793 +0000 UTC m=+0.303451572 container init e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, config_id=tripleo_puppet_step1, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, container_name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, 
release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 02:54:53 localhost podman[52832]: 2025-11-23 07:54:53.378852397 +0000 UTC m=+0.310237216 container start e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, config_id=tripleo_puppet_step1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 02:54:53 localhost podman[52832]: 2025-11-23 07:54:53.379402674 +0000 UTC m=+0.310787443 container attach e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=container-puppet-ovn_controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-ovn-controller, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible) Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}3fd4b82820ca431560a9101649124ba519ce5d6bf5755c5a232928b76e10eb6c' Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe' Nov 23 02:54:53 localhost puppet-user[51964]: Warning: Empty environment setting 'TLS_PASSWORD' Nov 23 02:54:53 localhost puppet-user[51964]: (file: 
/etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182) Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}bf4205704c2ce3336692c7289c134cb4f34ad9637d3b2e0917c09fb097bf6f77' Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created Nov 23 02:54:53 localhost puppet-user[52147]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 02:54:53 localhost puppet-user[52147]: (file: /etc/puppet/hiera.yaml) Nov 23 02:54:53 localhost puppet-user[52147]: Warning: Undefined variable '::deploy_config_name'; Nov 23 02:54:53 localhost puppet-user[52147]: (file & line not available) Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created Nov 23 02:54:53 localhost puppet-user[52147]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 02:54:53 localhost puppet-user[52147]: (file & line not available) Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created Nov 23 02:54:53 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::cache_backend'. 
(file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39) Nov 23 02:54:54 localhost systemd[1]: var-lib-containers-storage-overlay-8e7bd885aaea36c7d0396504cf30e2cbd3831af3abfb23178a603b6dde37fd5f-merged.mount: Deactivated successfully. Nov 23 02:54:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e89c4e82a237d95d904dd08fec3f316f7ee6ea094e7680d228ab979a3ca3311f-userdata-shm.mount: Deactivated successfully. Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36) Nov 23 02:54:54 localhost puppet-user[52147]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26) Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.46 seconds Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: 
Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: 
/Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created Nov 23 02:54:54 localhost puppet-user[52147]: Notice: Applied catalog in 0.43 seconds Nov 23 02:54:54 localhost puppet-user[52147]: Application: Nov 23 02:54:54 localhost puppet-user[52147]: Initial environment: production Nov 23 02:54:54 localhost puppet-user[52147]: Converged environment: production Nov 23 02:54:54 localhost puppet-user[52147]: Run mode: user Nov 23 02:54:54 localhost puppet-user[52147]: Changes: Nov 23 02:54:54 localhost puppet-user[52147]: Total: 31 Nov 23 02:54:54 localhost puppet-user[52147]: Events: Nov 23 02:54:54 localhost puppet-user[52147]: Success: 31 Nov 23 02:54:54 localhost puppet-user[52147]: Total: 31 Nov 23 02:54:54 localhost puppet-user[52147]: Resources: Nov 23 02:54:54 localhost puppet-user[52147]: Skipped: 22 Nov 23 02:54:54 localhost puppet-user[52147]: Changed: 31 Nov 23 02:54:54 localhost puppet-user[52147]: Out of sync: 31 Nov 23 02:54:54 localhost puppet-user[52147]: Total: 151 Nov 23 02:54:54 localhost puppet-user[52147]: Time: Nov 23 02:54:54 localhost puppet-user[52147]: Package: 0.03 Nov 23 02:54:54 localhost puppet-user[52147]: Ceilometer config: 0.33 Nov 23 02:54:54 localhost puppet-user[52147]: Transaction evaluation: 0.42 Nov 23 02:54:54 localhost puppet-user[52147]: Catalog application: 0.43 Nov 23 02:54:54 localhost puppet-user[52147]: Config retrieval: 0.53 Nov 23 02:54:54 localhost puppet-user[52147]: Last run: 1763884494 Nov 23 02:54:54 localhost puppet-user[52147]: Resources: 0.00 Nov 23 02:54:54 localhost puppet-user[52147]: Total: 0.43 Nov 23 02:54:54 localhost puppet-user[52147]: Version: Nov 23 02:54:54 localhost puppet-user[52147]: Config: 1763884493 Nov 23 02:54:54 localhost puppet-user[52147]: Puppet: 7.10.0 Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created Nov 23 02:54:54 localhost puppet-user[52887]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Nov 23 02:54:54 localhost puppet-user[52887]: (file: /etc/puppet/hiera.yaml) Nov 23 02:54:54 localhost puppet-user[52887]: Warning: Undefined variable '::deploy_config_name'; Nov 23 02:54:54 localhost puppet-user[52887]: (file & line not available) Nov 23 02:54:54 localhost puppet-user[52887]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 02:54:54 localhost puppet-user[52887]: (file & line not available) Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created Nov 23 02:54:54 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created Nov 23 02:54:55 localhost puppet-user[52887]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.23 seconds Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created Nov 23 02:54:55 localhost puppet-user[52887]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2' Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created Nov 23 02:54:55 localhost puppet-user[52887]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b' Nov 23 02:54:55 localhost puppet-user[52887]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}4da6dc08baec702f079680519d2e298399a583eadaa49a3586a2a6c552f15f0d' Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}7457979272b158ac88adf13552cc58cb87586b19a7b8e2158301712e847fdf72' Nov 23 02:54:55 localhost puppet-user[52887]: Notice: Applied catalog in 0.12 seconds Nov 23 02:54:55 localhost puppet-user[52887]: Application: Nov 23 02:54:55 localhost puppet-user[52887]: Initial environment: production Nov 23 02:54:55 localhost puppet-user[52887]: Converged environment: production Nov 23 02:54:55 localhost puppet-user[52887]: Run mode: user Nov 23 02:54:55 localhost puppet-user[52887]: Changes: Nov 23 02:54:55 localhost puppet-user[52887]: Total: 3 Nov 23 02:54:55 localhost puppet-user[52887]: Events: Nov 23 02:54:55 localhost puppet-user[52887]: Success: 3 Nov 23 02:54:55 localhost puppet-user[52887]: Total: 3 Nov 23 02:54:55 localhost puppet-user[52887]: Resources: Nov 23 02:54:55 localhost puppet-user[52887]: Skipped: 11 Nov 23 02:54:55 localhost puppet-user[52887]: Changed: 3 Nov 23 02:54:55 localhost 
puppet-user[52887]: Out of sync: 3 Nov 23 02:54:55 localhost puppet-user[52887]: Total: 25 Nov 23 02:54:55 localhost puppet-user[52887]: Time: Nov 23 02:54:55 localhost puppet-user[52887]: Concat file: 0.00 Nov 23 02:54:55 localhost puppet-user[52887]: Concat fragment: 0.00 Nov 23 02:54:55 localhost puppet-user[52887]: File: 0.01 Nov 23 02:54:55 localhost puppet-user[52887]: Transaction evaluation: 0.11 Nov 23 02:54:55 localhost puppet-user[52887]: Catalog application: 0.12 Nov 23 02:54:55 localhost puppet-user[52887]: Config retrieval: 0.28 Nov 23 02:54:55 localhost puppet-user[52887]: Last run: 1763884495 Nov 23 02:54:55 localhost puppet-user[52887]: Total: 0.12 Nov 23 02:54:55 localhost puppet-user[52887]: Version: Nov 23 02:54:55 localhost puppet-user[52887]: Config: 1763884494 Nov 23 02:54:55 localhost puppet-user[52887]: Puppet: 7.10.0 Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created Nov 23 02:54:55 localhost puppet-user[52958]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 02:54:55 localhost puppet-user[52958]: (file: /etc/puppet/hiera.yaml) Nov 23 02:54:55 localhost puppet-user[52958]: Warning: Undefined variable '::deploy_config_name'; Nov 23 02:54:55 localhost puppet-user[52958]: (file & line not available) Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created Nov 23 02:54:55 localhost systemd[1]: libpod-6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb.scope: Deactivated successfully. Nov 23 02:54:55 localhost systemd[1]: libpod-6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb.scope: Consumed 3.290s CPU time. 
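[editor note] The entries above show tripleo_ansible launching one-shot container-puppet-* containers (label config_id=tripleo_puppet_step1) that run /var/lib/container-puppet/container-puppet.sh, apply a per-service Puppet catalog, and exit; the Changes/Events/Resources/Time blocks are the normal puppet apply transaction summaries. A minimal shell sketch for reviewing those short-lived containers after the run, assuming podman as used in the log; the label names and the stdout log path are taken from the entries in this section, while the ps/inspect format strings are illustrative assumptions:

  # List the step-1 container-puppet containers and their exit status
  podman ps -a --filter label=config_id=tripleo_puppet_step1 \
               --format '{{.Names}}\t{{.Status}}'

  # Dump the config_data label that tripleo_ansible attached to one of them
  podman inspect container-puppet-ceilometer \
    --format '{{ index .Config.Labels "config_data" }}'

  # The puppet output is also written to disk via the k8s-file log driver,
  # as the recorded podman run command line further below shows
  less /var/log/containers/stdouts/container-puppet-ceilometer.log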
Nov 23 02:54:55 localhost podman[52053]: 2025-11-23 07:54:55.330732145 +0000 UTC m=+3.848009977 container died 6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, build-date=2025-11-19T00:11:59Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ceilometer-central-container, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ceilometer-central, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, container_name=container-puppet-ceilometer, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central) Nov 23 02:54:55 localhost puppet-user[52958]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 02:54:55 localhost puppet-user[52958]: (file & line not available) Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created Nov 23 02:54:55 localhost systemd[1]: tmp-crun.xlgzPF.mount: Deactivated successfully. Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created Nov 23 02:54:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb-userdata-shm.mount: Deactivated successfully. Nov 23 02:54:55 localhost systemd[1]: var-lib-containers-storage-overlay-610a23939626ba33bbc4ff5fdde23dbb8b9d397a6884923c98e37f79be82869c-merged.mount: Deactivated successfully. Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created Nov 23 02:54:55 localhost podman[53230]: 2025-11-23 07:54:55.466065136 +0000 UTC m=+0.124237174 container cleanup 6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-central-container, description=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:59Z, vendor=Red Hat, 
Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-central, container_name=container-puppet-ceilometer, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true) Nov 23 02:54:55 localhost systemd[1]: libpod-conmon-6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb.scope: Deactivated successfully. Nov 23 02:54:55 localhost python3[51569]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532584 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created Nov 23 02:54:55 localhost puppet-user[52958]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.26 seconds Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created Nov 23 02:54:55 localhost ovs-vsctl[53289]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642 Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created Nov 23 02:54:55 localhost systemd[1]: libpod-ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066.scope: Deactivated successfully. Nov 23 02:54:55 localhost systemd[1]: libpod-ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066.scope: Consumed 2.335s CPU time. Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created Nov 23 02:54:55 localhost podman[52773]: 2025-11-23 07:54:55.625897446 +0000 UTC m=+2.670611325 container died ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=container-puppet-rsyslog, io.openshift.expose-services=) Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created Nov 23 02:54:55 localhost ovs-vsctl[53303]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created Nov 23 02:54:55 localhost ovs-vsctl[53310]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-encap-ip=172.19.0.106 Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created Nov 23 02:54:55 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created Nov 23 02:54:55 localhost ovs-vsctl[53324]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005532584.localdomain Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005532584.novalocal' to 'np0005532584.localdomain' Nov 23 02:54:55 localhost ovs-vsctl[53326]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-bridge=br-int
Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[53328]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[53330]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[53332]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[53334]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[53336]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[53338]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:5c:1e:ed
Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[53340]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[53342]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Nov 23 02:54:55 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Nov 23 02:54:56 localhost ovs-vsctl[53344]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Nov 23 02:54:56 localhost puppet-user[52958]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Nov 23 02:54:56 localhost puppet-user[52958]: Notice: Applied catalog in 0.51 seconds
Nov 23 02:54:56 localhost puppet-user[52958]: Application:
Nov 23 02:54:56 localhost puppet-user[52958]: Initial environment: production
Nov 23 02:54:56 localhost puppet-user[52958]: Converged environment: production
Nov 23 02:54:56 localhost puppet-user[52958]: Run mode: user
Nov 23 02:54:56 localhost puppet-user[52958]: Changes:
Nov 23 02:54:56 localhost puppet-user[52958]: Total: 14
Nov 23 02:54:56 localhost puppet-user[52958]: Events:
Nov 23 02:54:56 localhost puppet-user[52958]: Success: 14
Nov 23 02:54:56 localhost puppet-user[52958]: Total: 14
Nov 23 02:54:56 localhost puppet-user[52958]: Resources:
Nov 23 02:54:56 localhost puppet-user[52958]: Skipped: 12
Nov 23 02:54:56 localhost puppet-user[52958]: Changed: 14
Nov 23 02:54:56 localhost puppet-user[52958]: Out of sync: 14
Nov 23 02:54:56 localhost puppet-user[52958]: Total: 29
Nov 23 02:54:56 localhost puppet-user[52958]: Time:
Nov 23 02:54:56 localhost puppet-user[52958]: Exec: 0.02
Nov 23 02:54:56 localhost puppet-user[52958]: Config retrieval: 0.29
Nov 23 02:54:56 localhost puppet-user[52958]: Vs config: 0.44
Nov 23 02:54:56 localhost puppet-user[52958]: Transaction evaluation: 0.48
Nov 23 02:54:56 localhost puppet-user[52958]: Catalog application: 0.51
Nov 23 02:54:56 localhost puppet-user[52958]: Last run: 1763884496
Nov 23 02:54:56 localhost puppet-user[52958]: Total: 0.51
Nov 23 02:54:56 localhost puppet-user[52958]: Version:
Nov 23 02:54:56 localhost puppet-user[52958]: Config: 1763884495
Nov 23 02:54:56 localhost puppet-user[52958]: Puppet: 7.10.0
Nov 23 02:54:56 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully
Nov 23 02:54:56 localhost systemd[1]: tmp-crun.dNuE4m.mount: Deactivated successfully.
Nov 23 02:54:56 localhost systemd[1]: var-lib-containers-storage-overlay-e546b43998c4baeffb211550b485789cc63029e7ea531c7f56d56436d5a4e45a-merged.mount: Deactivated successfully.
Nov 23 02:54:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066-userdata-shm.mount: Deactivated successfully.
Nov 23 02:54:56 localhost systemd[1]: libpod-e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f.scope: Deactivated successfully.
Nov 23 02:54:56 localhost systemd[1]: libpod-e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f.scope: Consumed 2.986s CPU time.
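The ovs-vsctl calls above are the host-side effect of the container-puppet-ovn_controller run: each Vs_config resource writes one external_ids key into the Open_vSwitch table (geneve encapsulation, encap IP 172.19.0.106, the br-int integration bridge, the datacentre:br-ex bridge mapping, and the probe/monitor tuning). Not part of the captured log, but a quick way to confirm what ovn-controller will read back from that table is:

# read back one of the keys set above (the log shows geneve)
ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type
# or dump the whole external_ids map the ovn-controller service consumes
ovs-vsctl --columns=external_ids list Open_vSwitch .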
Nov 23 02:54:56 localhost podman[52832]: 2025-11-23 07:54:56.556846867 +0000 UTC m=+3.488231656 container died e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, architecture=x86_64, release=1761123044, container_name=container-puppet-ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 02:54:56 localhost systemd[1]: tmp-crun.WRzMBd.mount: Deactivated successfully. Nov 23 02:54:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f-userdata-shm.mount: Deactivated successfully. 
Nov 23 02:54:56 localhost podman[53297]: 2025-11-23 07:54:56.702816822 +0000 UTC m=+1.065634562 container cleanup ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, name=rhosp17/openstack-rsyslog, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, container_name=container-puppet-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044) Nov 23 02:54:56 localhost python3[51569]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532584 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Nov 23 02:54:56 localhost systemd[1]: libpod-conmon-ebb518694b69e3a895e3cbd98b8882382a06ed125610d8feaea99e473e64e066.scope: Deactivated successfully. 
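The PODMAN-CONTAINER-DEBUG entry above is the exact podman run invocation ansible-tripleo_container_manage used for container-puppet-rsyslog. Because it passes --log-driver k8s-file with an explicit --log-opt path, the puppet output of that one-shot container goes to a file rather than the journal; assuming that path is still in place on the host, it could be reviewed with something like:

# stdout/stderr of the one-shot puppet run, per the --log-opt path in the command above
tail -n 50 /var/log/containers/stdouts/container-puppet-rsyslog.log
# the same directory holds one such log per container-puppet-* run in this step
ls /var/log/containers/stdouts/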
Nov 23 02:54:56 localhost podman[53387]: 2025-11-23 07:54:56.721353565 +0000 UTC m=+0.150094746 container cleanup e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=container-puppet-ovn_controller, build-date=2025-11-18T23:34:05Z, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64) Nov 23 02:54:56 localhost podman[52961]: 2025-11-23 07:54:53.488112089 +0000 UTC m=+0.037800389 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Nov 23 02:54:56 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully Nov 23 02:54:56 localhost systemd[1]: libpod-conmon-e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f.scope: Deactivated successfully. 
Nov 23 02:54:56 localhost python3[51569]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532584 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 23 02:54:56 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created Nov 23 02:54:56 localhost 
puppet-user[51964]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created Nov 23 02:54:56 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created Nov 23 02:54:56 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created Nov 23 02:54:56 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created Nov 23 02:54:57 localhost podman[53467]: 2025-11-23 07:54:57.027097118 +0000 UTC m=+0.092867028 container create 2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:23:27Z, com.redhat.component=openstack-neutron-server-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, tcib_managed=true, container_name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_puppet_step1) Nov 23 02:54:57 localhost systemd[1]: Started libpod-conmon-2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a.scope. Nov 23 02:54:57 localhost systemd[1]: Started libcrun container. Nov 23 02:54:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82f11e702c10ae17f894fa5e812d3704c88386aaaee36f2042dd2d177fa374d/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 23 02:54:57 localhost podman[53467]: 2025-11-23 07:54:56.977149869 +0000 UTC m=+0.042919799 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Nov 23 02:54:57 localhost podman[53467]: 2025-11-23 07:54:57.085784382 +0000 UTC m=+0.151554312 container init 2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:23:27Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-neutron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, name=rhosp17/openstack-neutron-server, io.buildah.version=1.41.4) Nov 23 02:54:57 localhost podman[53467]: 2025-11-23 07:54:57.098073147 +0000 UTC m=+0.163843077 container start 2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, config_id=tripleo_puppet_step1, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, container_name=container-puppet-neutron, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-neutron-server, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-19T00:23:27Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-server-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Nov 23 02:54:57 localhost podman[53467]: 2025-11-23 07:54:57.098465679 +0000 UTC m=+0.164235659 container attach 2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, com.redhat.component=openstack-neutron-server-container, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-11-19T00:23:27Z, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-server, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, container_name=container-puppet-neutron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1) Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created Nov 23 02:54:57 localhost systemd[1]: var-lib-containers-storage-overlay-27f21f97cd586034c01317a093c3b8fe15f0b3ee8df18d3620dc21eab57815fe-merged.mount: Deactivated successfully. 
Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created Nov 23 02:54:57 localhost puppet-user[51964]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3ccd56cc76ec60fa08fd698d282c9c89b1e8c485a00f47d57569ed8f6f8a16e4' Nov 23 02:54:57 localhost puppet-user[51964]: Notice: Applied catalog in 4.34 seconds Nov 23 02:54:57 localhost puppet-user[51964]: Application: Nov 23 02:54:57 localhost puppet-user[51964]: Initial environment: production Nov 23 02:54:57 localhost puppet-user[51964]: Converged environment: production Nov 23 02:54:57 localhost puppet-user[51964]: Run mode: user Nov 23 02:54:57 localhost puppet-user[51964]: Changes: Nov 23 02:54:57 localhost puppet-user[51964]: Total: 183 Nov 23 02:54:57 localhost puppet-user[51964]: Events: Nov 23 02:54:57 localhost puppet-user[51964]: Success: 183 Nov 23 02:54:57 localhost puppet-user[51964]: Total: 183 Nov 23 02:54:57 localhost puppet-user[51964]: Resources: Nov 23 02:54:57 localhost puppet-user[51964]: Changed: 183 Nov 23 02:54:57 localhost puppet-user[51964]: Out of sync: 183 Nov 23 02:54:57 
localhost puppet-user[51964]: Skipped: 57
Nov 23 02:54:57 localhost puppet-user[51964]: Total: 487
Nov 23 02:54:57 localhost puppet-user[51964]: Time:
Nov 23 02:54:57 localhost puppet-user[51964]: Concat file: 0.00
Nov 23 02:54:57 localhost puppet-user[51964]: Concat fragment: 0.00
Nov 23 02:54:57 localhost puppet-user[51964]: Anchor: 0.00
Nov 23 02:54:57 localhost puppet-user[51964]: File line: 0.00
Nov 23 02:54:57 localhost puppet-user[51964]: Virtlogd config: 0.00
Nov 23 02:54:57 localhost puppet-user[51964]: Virtstoraged config: 0.01
Nov 23 02:54:57 localhost puppet-user[51964]: Virtqemud config: 0.01
Nov 23 02:54:57 localhost puppet-user[51964]: Virtnodedevd config: 0.01
Nov 23 02:54:57 localhost puppet-user[51964]: Virtsecretd config: 0.01
Nov 23 02:54:57 localhost puppet-user[51964]: Exec: 0.02
Nov 23 02:54:57 localhost puppet-user[51964]: File: 0.02
Nov 23 02:54:57 localhost puppet-user[51964]: Virtproxyd config: 0.03
Nov 23 02:54:57 localhost puppet-user[51964]: Package: 0.03
Nov 23 02:54:57 localhost puppet-user[51964]: Augeas: 1.02
Nov 23 02:54:57 localhost puppet-user[51964]: Config retrieval: 1.56
Nov 23 02:54:57 localhost puppet-user[51964]: Last run: 1763884497
Nov 23 02:54:57 localhost puppet-user[51964]: Nova config: 2.93
Nov 23 02:54:57 localhost puppet-user[51964]: Transaction evaluation: 4.33
Nov 23 02:54:57 localhost puppet-user[51964]: Catalog application: 4.34
Nov 23 02:54:57 localhost puppet-user[51964]: Resources: 0.00
Nov 23 02:54:57 localhost puppet-user[51964]: Total: 4.34
Nov 23 02:54:57 localhost puppet-user[51964]: Version:
Nov 23 02:54:57 localhost puppet-user[51964]: Config: 1763884492
Nov 23 02:54:57 localhost puppet-user[51964]: Puppet: 7.10.0
Nov 23 02:54:58 localhost systemd[1]: libpod-20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a.scope: Deactivated successfully.
Nov 23 02:54:58 localhost systemd[1]: libpod-20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a.scope: Consumed 8.566s CPU time.
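The 183 changes reported by puppet-user[51964] are the nova/libvirt settings rendered inside container-puppet-nova_libvirt; since /var/lib/config-data is bind-mounted read-write in its volume list, the generated files persist on the host after the container dies. As a sketch only (the puppet-generated subdirectory is the usual TripleO layout, assumed here rather than shown in this log), a key such as DEFAULT/transport_url could be spot-checked with:

# assumed path: TripleO normally renders per-service config under puppet-generated/<name>
grep -n "^transport_url" /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf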
Nov 23 02:54:58 localhost podman[51776]: 2025-11-23 07:54:58.821338955 +0000 UTC m=+9.860769238 container died 20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64) Nov 23 02:54:58 localhost systemd[1]: 
var-lib-containers-storage-overlay\x2dcontainers-20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a-userdata-shm.mount: Deactivated successfully. Nov 23 02:54:58 localhost systemd[1]: var-lib-containers-storage-overlay-1cf48a48241a85605ccaaed1dc6be1d7729c38cf732d2c362c3403cf7e38c508-merged.mount: Deactivated successfully. Nov 23 02:54:58 localhost podman[53542]: 2025-11-23 07:54:58.964715889 +0000 UTC m=+0.132975468 container cleanup 20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, container_name=container-puppet-nova_libvirt, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, release=1761123044, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible) Nov 23 02:54:58 localhost systemd[1]: libpod-conmon-20e8608d5f0304545f7e90af7716e976913763acdffaf653a31a667fdda9aa1a.scope: Deactivated successfully. Nov 23 02:54:58 localhost python3[51569]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532584 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 02:54:59 localhost puppet-user[53499]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass Nov 23 02:54:59 localhost puppet-user[53499]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 02:54:59 localhost puppet-user[53499]: (file: /etc/puppet/hiera.yaml) Nov 23 02:54:59 localhost puppet-user[53499]: Warning: Undefined variable '::deploy_config_name'; Nov 23 02:54:59 localhost puppet-user[53499]: (file & line not available) Nov 23 02:54:59 localhost puppet-user[53499]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 02:54:59 localhost puppet-user[53499]: (file & line not available) Nov 23 02:54:59 localhost puppet-user[53499]: Warning: Unknown variable: 'dhcp_agents_per_net'. 
(file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37) Nov 23 02:54:59 localhost puppet-user[53499]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.60 seconds Nov 23 02:54:59 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created Nov 23 02:54:59 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created Nov 23 02:54:59 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created Nov 23 02:54:59 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created Nov 23 02:54:59 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created Nov 23 02:54:59 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created Nov 23 02:54:59 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created Nov 23 02:54:59 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created Nov 23 02:54:59 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created Nov 23 02:54:59 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: 
/Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created Nov 23 02:55:00 localhost puppet-user[53499]: Notice: Applied catalog in 0.46 seconds Nov 23 02:55:00 localhost puppet-user[53499]: Application: Nov 23 02:55:00 localhost puppet-user[53499]: Initial environment: production Nov 23 02:55:00 localhost puppet-user[53499]: Converged environment: production Nov 23 02:55:00 localhost puppet-user[53499]: Run mode: user Nov 23 02:55:00 localhost puppet-user[53499]: Changes: Nov 23 02:55:00 localhost puppet-user[53499]: Total: 33 Nov 23 02:55:00 localhost puppet-user[53499]: Events: Nov 23 02:55:00 localhost puppet-user[53499]: Success: 33 Nov 23 02:55:00 localhost puppet-user[53499]: Total: 33 Nov 23 02:55:00 localhost puppet-user[53499]: Resources: Nov 23 02:55:00 localhost puppet-user[53499]: Skipped: 21 Nov 23 02:55:00 localhost puppet-user[53499]: Changed: 33 Nov 23 02:55:00 localhost puppet-user[53499]: Out of sync: 33 Nov 23 02:55:00 localhost puppet-user[53499]: Total: 155 Nov 23 02:55:00 localhost puppet-user[53499]: Time: Nov 23 02:55:00 localhost puppet-user[53499]: Resources: 0.00 Nov 23 02:55:00 localhost puppet-user[53499]: Ovn metadata agent config: 0.02 Nov 23 02:55:00 localhost puppet-user[53499]: Neutron config: 0.38 Nov 23 02:55:00 localhost puppet-user[53499]: Transaction evaluation: 0.45 Nov 23 02:55:00 localhost puppet-user[53499]: Catalog application: 0.46 Nov 23 02:55:00 localhost puppet-user[53499]: Config retrieval: 0.66 Nov 23 02:55:00 localhost puppet-user[53499]: Last run: 1763884500 Nov 23 02:55:00 localhost puppet-user[53499]: Total: 0.46 Nov 23 02:55:00 localhost puppet-user[53499]: Version: Nov 23 02:55:00 localhost puppet-user[53499]: Config: 1763884499 Nov 23 02:55:00 localhost 
puppet-user[53499]: Puppet: 7.10.0 Nov 23 02:55:00 localhost systemd[1]: libpod-2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a.scope: Deactivated successfully. Nov 23 02:55:00 localhost systemd[1]: libpod-2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a.scope: Consumed 3.818s CPU time. Nov 23 02:55:00 localhost podman[53467]: 2025-11-23 07:55:00.918461416 +0000 UTC m=+3.984231356 container died 2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, container_name=container-puppet-neutron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:23:27Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1761123044, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.12, name=rhosp17/openstack-neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, architecture=x86_64) Nov 23 02:55:01 localhost systemd[1]: tmp-crun.m9eArP.mount: Deactivated successfully. 
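The puppet-user transaction report above (catalog compiled in 0.60 s, applied in 0.46 s, 33 resources changed and out of sync, 21 skipped, 155 total) is the main signal that the container-puppet-neutron step converged without failed resources. A minimal, illustrative sketch for pulling those counters out of the step's stdout log; the path comes from the --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log option in the PODMAN-CONTAINER-DEBUG entry further below, and the regular expression is an assumption about the report layout, not a TripleO or Puppet API.

#!/usr/bin/env python3
# Illustrative only: summarize a container-puppet transaction report.
# Assumes the k8s-file log written via
#   --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log
import re

LOG = "/var/log/containers/stdouts/container-puppet-neutron.log"

def summarize(path=LOG):
    counts = {}
    pat = re.compile(r"(Changed|Skipped|Failed|Out of sync|Total):\s+(\d+)\s*$")
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = pat.search(line.rstrip())
            if m:
                # Later report sections (Events, Resources) overwrite earlier
                # ones; good enough for a quick "did anything fail?" check.
                counts[m.group(1)] = int(m.group(2))
    return counts

if __name__ == "__main__":
    counts = summarize()
    print(counts)
    if counts.get("Failed", 0):
        raise SystemExit("puppet reported failed resources")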
Nov 23 02:55:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a-userdata-shm.mount: Deactivated successfully. Nov 23 02:55:01 localhost systemd[1]: var-lib-containers-storage-overlay-b82f11e702c10ae17f894fa5e812d3704c88386aaaee36f2042dd2d177fa374d-merged.mount: Deactivated successfully. Nov 23 02:55:01 localhost podman[53681]: 2025-11-23 07:55:01.088257799 +0000 UTC m=+0.155315400 container cleanup 2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, tcib_managed=true, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-server-container, container_name=container-puppet-neutron, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, version=17.1.12, build-date=2025-11-19T00:23:27Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-server, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 02:55:01 localhost systemd[1]: libpod-conmon-2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a.scope: Deactivated successfully. 
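The ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG entry that follows prints the full podman run command derived from the container's config_data dictionary. A rough sketch of that mapping, shown only to make the relationship between config_data and the printed command easier to read; it is not the tripleo_ansible module itself, and any option not visible in the entry below is an assumption.

# Illustrative sketch: map a TripleO-style config_data dict onto podman run
# options, mirroring the PODMAN-CONTAINER-DEBUG entry below.
def podman_run_args(name, cfg, log_dir="/var/log/containers/stdouts"):
    args = ["podman", "run", "--name", name,
            "--conmon-pidfile", f"/run/{name}.pid",
            f"--detach={cfg.get('detach', True)}"]
    if "entrypoint" in cfg:
        args += ["--entrypoint", cfg["entrypoint"]]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    args += ["--log-driver", "k8s-file",
             "--log-opt", f"path={log_dir}/{name}.log"]
    net = cfg.get("net")
    if net:  # appears both as 'host' and as ['host'] in this log
        args += ["--network", net[0] if isinstance(net, list) else net]
    for opt in cfg.get("security_opt", []):
        args += ["--security-opt", opt]
    if "user" in cfg:
        args += ["--user", str(cfg["user"])]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    args.append(cfg["image"])
    return args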
Nov 23 02:55:01 localhost python3[51569]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532584 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532584', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Nov 23 02:55:01 localhost python3[53734]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:02 localhost python3[53766]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:55:03 localhost python3[53816]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:55:03 localhost python3[53859]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884503.0512936-84589-30585816909036/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:04 localhost python3[53921]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:55:04 localhost python3[53964]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884503.9349403-84589-16289759770470/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:05 localhost python3[54026]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:55:05 localhost python3[54069]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884504.8322518-84725-273992985908606/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:06 localhost python3[54131]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:55:06 localhost python3[54174]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884505.7923212-84762-195380472571086/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:07 localhost python3[54204]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:55:07 localhost systemd[1]: Reloading. Nov 23 02:55:07 localhost systemd-rc-local-generator[54226]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:55:07 localhost systemd-sysv-generator[54229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:55:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:55:07 localhost systemd[1]: Reloading. Nov 23 02:55:07 localhost systemd-sysv-generator[54267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:55:07 localhost systemd-rc-local-generator[54263]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:55:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:55:07 localhost systemd[1]: Starting TripleO Container Shutdown... Nov 23 02:55:07 localhost systemd[1]: Finished TripleO Container Shutdown. 
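The ansible-ansible.legacy.stat / ansible.legacy.copy pairs above are Ansible's usual idempotency check: stat the destination with checksum_algorithm=sha1 and copy only when the recorded checksum differs (for example 7d67b1986212f5548057505748cd74cfcf9c0d35 for /usr/libexec/tripleo-container-shutdown). A small sketch of the same comparison, useful for spot-checking a deployed file by hand; the expected value is taken from the copy task logged above.

# Illustrative: verify a deployed file against the sha1 recorded in the log.
import hashlib

def sha1_of(path, chunk=1 << 20):
    digest = hashlib.sha1()
    with open(path, "rb") as fh:
        while True:
            block = fh.read(chunk)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()

expected = "7d67b1986212f5548057505748cd74cfcf9c0d35"  # from the copy task above
print(sha1_of("/usr/libexec/tripleo-container-shutdown") == expected)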
Nov 23 02:55:08 localhost python3[54327]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:55:08 localhost python3[54370]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884507.7978034-84803-259789671884486/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:09 localhost python3[54432]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 02:55:09 localhost python3[54475]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884509.0341046-84841-280667045515057/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:10 localhost sshd[54506]: main: sshd: ssh-rsa algorithm is disabled Nov 23 02:55:10 localhost python3[54505]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:55:10 localhost systemd[1]: Reloading. Nov 23 02:55:10 localhost systemd-sysv-generator[54532]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:55:10 localhost systemd-rc-local-generator[54528]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:55:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:55:10 localhost systemd[1]: Reloading. Nov 23 02:55:10 localhost systemd-sysv-generator[54573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:55:10 localhost systemd-rc-local-generator[54567]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:55:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:55:10 localhost systemd[1]: Starting Create netns directory... Nov 23 02:55:10 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 23 02:55:10 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. 
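The ansible-systemd call above (name=netns-placeholder, state=started, enabled=True, daemon_reload=True) amounts to a daemon-reload followed by enabling and starting the unit; the repeated "Reloading." entries and the rc-local / SysV generator warnings are a side effect of those reloads. A plain-subprocess sketch of the same sequence, included only to make the log easier to follow, not as a replacement for the Ansible module.

# Illustrative equivalent of the ansible-systemd task above
# (daemon_reload=True, enabled=True, state=started).
import subprocess

def enable_and_start(unit):
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", unit],
                ["systemctl", "start", unit]):
        subprocess.run(cmd, check=True)

# enable_and_start("netns-placeholder.service")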
Nov 23 02:55:10 localhost systemd[1]: Finished Create netns directory. Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: 4786e46dc7f8a50dc71419c2225b2915 Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: da9a0dc7b40588672419e3ce10063e21 Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: 97cfc313337c76270fcb8497fac0e51e Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: b43218eec4380850a20e0a337fdcf6cf Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: b43218eec4380850a20e0a337fdcf6cf Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: b43218eec4380850a20e0a337fdcf6cf Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: b43218eec4380850a20e0a337fdcf6cf Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: b43218eec4380850a20e0a337fdcf6cf Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: b43218eec4380850a20e0a337fdcf6cf Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 531adc347d750bec89c43b39996bf2b8 Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: cdd192006d3eee4976a7ad00d48f6c64 Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: cdd192006d3eee4976a7ad00d48f6c64 Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: b43218eec4380850a20e0a337fdcf6cf Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: b43218eec4380850a20e0a337fdcf6cf Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: aab643b40a0a602c64733b2a96099834 Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: 97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf Nov 23 02:55:11 localhost python3[54599]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: b43218eec4380850a20e0a337fdcf6cf Nov 23 02:55:12 localhost python3[54657]: ansible-tripleo_container_manage Invoked with 
config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Nov 23 02:55:13 localhost podman[54696]: 2025-11-23 07:55:13.245374083 +0000 UTC m=+0.077029960 container create 36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=metrics_qdr_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) Nov 23 02:55:13 localhost systemd[1]: Started libpod-conmon-36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5.scope. Nov 23 02:55:13 localhost podman[54696]: 2025-11-23 07:55:13.201424153 +0000 UTC m=+0.033080050 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 23 02:55:13 localhost systemd[1]: Started libcrun container. 
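ansible-tripleo_container_manage is invoked above with config_id=tripleo_step1, config_dir=/var/lib/tripleo-config/container-startup-config/step_1 and config_patterns=*.json; the per-container definitions (metrics_qdr_init_logs with start_order 0, then metrics_qdr with start_order 1) come from those JSON files. A minimal sketch of loading and ordering them the way the rest of this log plays out. It assumes each JSON file maps container names to their config_data, which matches how config_data is echoed in these entries but is not confirmed by the log itself, and the sort key is likewise an assumption rather than the module's actual scheduler.

# Illustrative: load step-1 container definitions and order them by start_order.
import glob, json, os

STEP_DIR = "/var/lib/tripleo-config/container-startup-config/step_1"

def load_step_configs(step_dir=STEP_DIR, pattern="*.json"):
    containers = {}
    for path in glob.glob(os.path.join(step_dir, pattern)):
        with open(path, encoding="utf-8") as fh:
            # Assumption: each file holds a {container_name: config_data} map.
            containers.update(json.load(fh))
    return sorted(containers.items(),
                  key=lambda kv: (kv[1].get("start_order", 0), kv[0]))

if __name__ == "__main__":
    for name, cfg in load_step_configs():
        print(name, cfg.get("start_order", 0), cfg.get("image"))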
Nov 23 02:55:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b513f0c97c82d1e5153446ab8d6ba6a01710ca2380b841dfea84b567972b770b/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Nov 23 02:55:13 localhost podman[54696]: 2025-11-23 07:55:13.330792046 +0000 UTC m=+0.162447913 container init 36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vendor=Red Hat, Inc.) 
Nov 23 02:55:13 localhost podman[54696]: 2025-11-23 07:55:13.348445011 +0000 UTC m=+0.180100888 container start 36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=metrics_qdr_init_logs, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team) Nov 23 02:55:13 localhost podman[54696]: 2025-11-23 07:55:13.348807122 +0000 UTC m=+0.180463009 container attach 36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, vcs-type=git, name=rhosp17/openstack-qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr_init_logs, config_id=tripleo_step1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com) Nov 23 02:55:13 localhost systemd[1]: libpod-36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5.scope: Deactivated successfully. Nov 23 02:55:13 localhost podman[54696]: 2025-11-23 07:55:13.35638615 +0000 UTC m=+0.188042047 container died 36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 02:55:13 localhost podman[54715]: 2025-11-23 07:55:13.453905403 +0000 UTC m=+0.082940766 container cleanup 36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr_init_logs, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 23 02:55:13 localhost systemd[1]: libpod-conmon-36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5.scope: Deactivated successfully. Nov 23 02:55:13 localhost python3[54657]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Nov 23 02:55:13 localhost podman[54790]: 2025-11-23 07:55:13.948474957 +0000 UTC m=+0.092848687 container create 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 02:55:13 localhost systemd[1]: Started libpod-conmon-2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.scope. Nov 23 02:55:14 localhost systemd[1]: Started libcrun container. Nov 23 02:55:14 localhost podman[54790]: 2025-11-23 07:55:13.905087215 +0000 UTC m=+0.049460955 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 23 02:55:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab41077e04905cd2ed47da0e447cf096133dba9a29e9494f8fcc86ce48952daa/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Nov 23 02:55:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab41077e04905cd2ed47da0e447cf096133dba9a29e9494f8fcc86ce48952daa/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Nov 23 02:55:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 02:55:14 localhost podman[54790]: 2025-11-23 07:55:14.041634053 +0000 UTC m=+0.186007783 container init 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, distribution-scope=public, 
vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true) Nov 23 02:55:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 02:55:14 localhost podman[54790]: 2025-11-23 07:55:14.073212786 +0000 UTC m=+0.217586516 container start 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step1) Nov 23 02:55:14 localhost python3[54657]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=4786e46dc7f8a50dc71419c2225b2915 
--healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 23 02:55:14 localhost podman[54811]: 2025-11-23 07:55:14.176791269 +0000 UTC m=+0.090453052 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z) Nov 23 02:55:14 localhost systemd[1]: tmp-crun.amM3K2.mount: Deactivated successfully. Nov 23 02:55:14 localhost systemd[1]: var-lib-containers-storage-overlay-b513f0c97c82d1e5153446ab8d6ba6a01710ca2380b841dfea84b567972b770b-merged.mount: Deactivated successfully. Nov 23 02:55:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5-userdata-shm.mount: Deactivated successfully. 
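The metrics_qdr container is started with TRIPLEO_CONFIG_HASH=4786e46dc7f8a50dc71419c2225b2915 in its environment, the same "new hash" that ansible-container_puppet_config reported for metrics_qdr at 02:55:11. Because the hash is part of the container definition, regenerated configuration shows up as a changed environment variable and leads to the container being recreated on the next pass. The exact hashing scheme is not shown in this log; the sketch below is only one plausible way to derive a digest over a generated config tree, and both the path and the algorithm are assumptions.

# Illustrative only: derive a digest over a generated config tree.
# The real TRIPLEO_CONFIG_HASH scheme is not shown in this log.
import hashlib, os

def config_tree_hash(root="/var/lib/config-data/puppet-generated/metrics_qdr"):
    digest = hashlib.md5()
    for dirpath, _dirs, files in sorted(os.walk(root)):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            digest.update(path.encode())
            with open(path, "rb") as fh:
                digest.update(fh.read())
    return digest.hexdigest()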
Nov 23 02:55:14 localhost podman[54811]: 2025-11-23 07:55:14.387227679 +0000 UTC m=+0.300889422 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 02:55:14 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
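metrics_qdr was created with a healthcheck ('test': '/openstack/healthcheck'), and systemd drives it: each "podman healthcheck run <id>" transient service executes the probe, podman logs health_status=starting and later health_status=healthy, and the unit deactivates. To read the current state outside the journal, podman inspect exposes it; the sketch below parses the JSON instead of using a Go template because the field name has moved between podman releases (State.Healthcheck vs. State.Health).

# Illustrative: read a container's health state via `podman inspect`.
import json, subprocess

def health_status(container="metrics_qdr"):
    out = subprocess.run(["podman", "inspect", container],
                         check=True, capture_output=True, text=True).stdout
    state = json.loads(out)[0].get("State", {})
    # Field name differs across podman releases.
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

if __name__ == "__main__":
    print(health_status())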
Nov 23 02:55:14 localhost python3[54883]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:14 localhost python3[54899]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 02:55:15 localhost python3[54960]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884515.0224068-85036-226131824328619/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:15 localhost python3[54976]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 02:55:15 localhost systemd[1]: Reloading. Nov 23 02:55:16 localhost systemd-rc-local-generator[54997]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:55:16 localhost systemd-sysv-generator[55004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:55:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:55:16 localhost python3[55028]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 02:55:16 localhost systemd[1]: Reloading. Nov 23 02:55:17 localhost systemd-rc-local-generator[55053]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 02:55:17 localhost systemd-sysv-generator[55058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 02:55:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 02:55:17 localhost systemd[1]: Starting metrics_qdr container... Nov 23 02:55:17 localhost systemd[1]: Started metrics_qdr container. 
Nov 23 02:55:17 localhost python3[55108]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:19 localhost python3[55229]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005532584 step=1 update_config_hash_only=False Nov 23 02:55:19 localhost python3[55245]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 02:55:20 localhost python3[55261]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Nov 23 02:55:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 02:55:44 localhost podman[55262]: 2025-11-23 07:55:44.906100087 +0000 UTC m=+0.091198276 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, name=rhosp17/openstack-qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, url=https://www.redhat.com) Nov 23 02:55:45 localhost podman[55262]: 2025-11-23 07:55:45.109364182 +0000 UTC m=+0.294462301 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, url=https://www.redhat.com, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Nov 23 02:55:45 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 02:56:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 02:56:15 localhost podman[55367]: 2025-11-23 07:56:15.904291739 +0000 UTC m=+0.090119472 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Nov 23 02:56:16 localhost podman[55367]: 2025-11-23 07:56:16.103456634 +0000 UTC m=+0.289284307 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 02:56:16 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 02:56:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 02:56:46 localhost podman[55395]: 2025-11-23 07:56:46.899820266 +0000 UTC m=+0.071902736 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 02:56:47 localhost podman[55395]: 2025-11-23 07:56:47.092472096 +0000 UTC m=+0.264554546 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 02:56:47 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 02:57:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 02:57:17 localhost systemd[1]: tmp-crun.3AlkfL.mount: Deactivated successfully. 
Nov 23 02:57:17 localhost podman[55502]: 2025-11-23 07:57:17.892731261 +0000 UTC m=+0.081477366 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Nov 23 02:57:18 localhost podman[55502]: 2025-11-23 07:57:18.106470213 +0000 UTC m=+0.295216318 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64) Nov 23 02:57:18 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 02:57:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 02:57:48 localhost systemd[1]: tmp-crun.M0CbCc.mount: Deactivated successfully. 
Nov 23 02:57:48 localhost podman[55529]: 2025-11-23 07:57:48.906088738 +0000 UTC m=+0.084360465 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, container_name=metrics_qdr) Nov 23 02:57:49 localhost podman[55529]: 2025-11-23 07:57:49.140487948 +0000 UTC m=+0.318759685 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 02:57:49 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 02:58:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 02:58:19 localhost podman[55634]: 2025-11-23 07:58:19.893675186 +0000 UTC m=+0.075337614 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, release=1761123044) Nov 23 02:58:20 localhost podman[55634]: 2025-11-23 07:58:20.079478171 +0000 UTC m=+0.261140649 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Nov 23 02:58:20 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 02:58:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 02:58:50 localhost systemd[1]: tmp-crun.Jzi0Fe.mount: Deactivated successfully. 
Nov 23 02:58:50 localhost podman[55663]: 2025-11-23 07:58:50.900895366 +0000 UTC m=+0.083015434 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step1, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 02:58:51 localhost podman[55663]: 2025-11-23 07:58:51.091226702 +0000 UTC m=+0.273346700 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, version=17.1.12, vcs-type=git, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 02:58:51 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 02:59:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 02:59:21 localhost podman[55768]: 2025-11-23 07:59:21.879376709 +0000 UTC m=+0.068290974 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible) Nov 23 02:59:22 localhost podman[55768]: 2025-11-23 07:59:22.07339776 +0000 UTC m=+0.262312035 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Nov 23 02:59:22 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 02:59:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 02:59:52 localhost systemd[1]: tmp-crun.V4GHPU.mount: Deactivated successfully. 
Nov 23 02:59:52 localhost podman[55797]: 2025-11-23 07:59:52.909303578 +0000 UTC m=+0.091395994 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 02:59:53 localhost podman[55797]: 2025-11-23 07:59:53.118599127 +0000 UTC m=+0.300691553 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Nov 23 02:59:53 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:00:00 localhost ceph-osd[31569]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2,1,3] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:01 localhost ceph-osd[31569]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2,1,3] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:03 localhost ceph-osd[31569]: osd.2 pg_epoch: 21 pg[3.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1,2,0] r=1 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:05 localhost ceph-osd[32534]: osd.5 pg_epoch: 23 pg[4.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [3,5,1] r=1 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:07 localhost ceph-osd[31569]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [4,3,2] r=2 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:23 localhost ceph-osd[32534]: osd.5 pg_epoch: 31 pg[6.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [0,5,1] r=1 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:00:23 localhost podman[55905]: 2025-11-23 08:00:23.89506603 +0000 UTC m=+0.084028705 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 03:00:24 localhost podman[55905]: 2025-11-23 08:00:24.083398083 +0000 UTC m=+0.272360768 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step1, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.12, io.openshift.expose-services=, architecture=x86_64) Nov 23 03:00:24 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:00:24 localhost ceph-osd[32534]: osd.5 pg_epoch: 33 pg[7.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [5,1,3] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:25 localhost ceph-osd[32534]: osd.5 pg_epoch: 34 pg[7.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [5,1,3] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:33 localhost ceph-osd[31569]: osd.2 pg_epoch: 38 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=38 pruub=10.739858627s) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active pruub 1122.224609375s@ mbc={}] start_peering_interval up [1,2,0] -> [1,2,0], acting [1,2,0] -> [1,2,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:33 localhost ceph-osd[31569]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=38 pruub=8.046980858s) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active pruub 1119.531860352s@ mbc={}] start_peering_interval up [2,1,3] -> [2,1,3], acting [2,1,3] -> [2,1,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:33 localhost ceph-osd[31569]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=38 pruub=8.046980858s) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown pruub 1119.531860352s@ mbc={}] state: transitioning to Primary Nov 23 03:00:33 localhost ceph-osd[31569]: osd.2 pg_epoch: 38 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 
les/c/f=22/22/0 sis=38 pruub=10.735472679s) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.224609375s@ mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.19( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.18( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.16( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.17( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.14( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.14( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.15( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.12( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 
crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.12( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.13( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.10( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.10( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.11( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.e( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.f( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.e( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.c( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.c( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: 
osd.2 pg_epoch: 39 pg[3.d( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.a( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.b( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.2( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.1( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.6( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.3( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.5( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.4( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] 
r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.7( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.9( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.1b( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.8( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1a( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.1d( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.1a( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.1c( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 
localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.1f( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1e( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[3.1e( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=1 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.0( empty local-lis/les=38/39 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.e( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.10( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.12( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react 
AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.c( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.14( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1e( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1c( 
empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.1a( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:34 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.0 deep-scrub starts Nov 23 03:00:34 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.0 deep-scrub ok Nov 23 03:00:35 localhost ceph-osd[31569]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=12.424643517s) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active pruub 1125.927612305s@ mbc={}] start_peering_interval up [4,3,2] -> [4,3,2], acting [4,3,2] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:35 localhost ceph-osd[31569]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=12.420834541s) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.927612305s@ mbc={}] state: transitioning to Stray Nov 23 03:00:35 localhost ceph-osd[32534]: osd.5 
pg_epoch: 40 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=40 pruub=10.211827278s) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active pruub 1119.374877930s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,1], acting [3,5,1] -> [3,5,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:35 localhost ceph-osd[32534]: osd.5 pg_epoch: 40 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=40 pruub=10.207508087s) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1119.374877930s@ mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 
les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 
unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[31569]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=2 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.18( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.17( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.16( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.15( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.14( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.13( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.12( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.10( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.f( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 
localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.e( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.c( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.11( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.b( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.d( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.2( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.19( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.4( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.9( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.1a( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.5( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.6( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.1( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.3( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.8( empty local-lis/les=23/24 
n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.7( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.1b( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.1c( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.1d( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.1e( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.1f( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:36 localhost ceph-osd[32534]: osd.5 pg_epoch: 41 pg[4.a( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=1 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:37 localhost ceph-osd[32534]: osd.5 pg_epoch: 42 pg[6.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=42 pruub=10.350888252s) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 active pruub 1121.575927734s@ mbc={}] start_peering_interval up [0,5,1] -> [0,5,1], acting [0,5,1] -> [0,5,1], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:37 localhost ceph-osd[32534]: osd.5 pg_epoch: 42 pg[7.0( v 35'39 (0'0,35'39] local-lis/les=33/34 n=22 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=42 pruub=12.375782967s) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 35'38 mlcod 35'38 active pruub 1123.600952148s@ mbc={}] start_peering_interval up [5,1,3] -> [5,1,3], acting [5,1,3] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:37 localhost ceph-osd[32534]: osd.5 pg_epoch: 42 pg[7.0( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=42 pruub=12.375782967s) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 35'38 mlcod 0'0 unknown pruub 1123.600952148s@ mbc={}] state: transitioning to Primary Nov 23 03:00:37 localhost ceph-osd[32534]: osd.5 pg_epoch: 42 pg[6.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=42 pruub=10.347999573s) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1121.575927734s@ mbc={}] state: transitioning to Stray Nov 23 03:00:37 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts Nov 23 03:00:37 localhost 
ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.d deep-scrub ok Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.1e( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.19( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.a( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.4( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.5( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.1c( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.1f( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.8( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.18( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.7( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.b( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.6( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.3( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.1( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 
03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.2( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.9( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.1d( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.e( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.f( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.1b( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.c( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.d( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.12( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.13( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.11( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.16( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.17( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.15( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.14( empty 
local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.1a( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[6.10( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=1 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.c( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.d( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.e( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.f( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.8( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.1( v 35'39 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.2( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.7( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.a( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.6( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 
unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.9( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.5( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.4( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.0( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 35'38 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.1( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.8( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.2( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.6( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.c( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.a( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 
lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.4( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.b( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.5( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.9( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.3( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:38 localhost ceph-osd[32534]: osd.5 pg_epoch: 43 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=0 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:39 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.e scrub starts Nov 23 03:00:39 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.e scrub ok Nov 23 03:00:42 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 7.0 scrub starts Nov 23 03:00:42 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 7.0 scrub ok Nov 23 03:00:42 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts Nov 23 03:00:43 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 7.8 scrub starts Nov 23 03:00:43 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 7.8 scrub ok Nov 23 03:00:43 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.10 scrub starts Nov 23 03:00:43 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.10 scrub ok Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.790876389s) [3,1,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.517822266s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,5], acting [1,2,0] -> [3,1,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1e( empty 
local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.814764977s) [0,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541748047s@ mbc={}] start_peering_interval up [4,3,2] -> [0,2,4], acting [4,3,2] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.790781021s) [3,1,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.517822266s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.802331924s) [2,4,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.529296875s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,3], acting [4,3,2] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.814617157s) [0,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541748047s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.802331924s) [2,4,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.529296875s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.790396690s) [0,2,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.517700195s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,1], acting [1,2,0] -> [0,2,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.790370941s) [0,2,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.517700195s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.793103218s) [3,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520629883s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.792814255s) [4,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520385742s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,3], acting [2,1,3] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.793058395s) [3,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 
1136.520629883s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.813873291s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541381836s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.813873291s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.541381836s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.792753220s) [4,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520385742s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.790924072s) [2,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.518798828s@ mbc={}] start_peering_interval up [1,2,0] -> [2,3,4], acting [1,2,0] -> [2,3,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.790924072s) [2,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.518798828s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.812409401s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.540283203s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.812409401s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.540283203s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.801419258s) [0,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.529541016s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,1], acting [1,2,0] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.801374435s) [0,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.529541016s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost 
ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.811908722s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.540527344s@ mbc={}] start_peering_interval up [4,3,2] -> [1,5,3], acting [4,3,2] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.791585922s) [1,0,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520385742s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,2], acting [2,1,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.791560173s) [1,0,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520385742s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.811866760s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.540527344s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789567947s) [2,4,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.518310547s@ mbc={}] start_peering_interval up [1,2,0] -> [2,4,0], acting [1,2,0] -> [2,4,0], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.811433792s) [4,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.540283203s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,5], acting [4,3,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789567947s) [2,4,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.518310547s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.811411858s) [4,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.540283203s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.811371803s) [3,4,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.540527344s@ mbc={}] start_peering_interval up [4,3,2] -> [3,4,5], acting [4,3,2] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.791168213s) [2,4,0] r=0 
lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520385742s@ mbc={}] start_peering_interval up [2,1,3] -> [2,4,0], acting [2,1,3] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.811334610s) [3,4,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.540527344s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789195061s) [0,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.518554688s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,1], acting [1,2,0] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.790474892s) [1,5,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519775391s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,3], acting [2,1,3] -> [1,5,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789162636s) [0,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.518554688s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.791168213s) [2,4,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.520385742s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.790434837s) [1,5,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519775391s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.790169716s) [4,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519653320s@ mbc={}] start_peering_interval up [2,1,3] -> [4,3,2], acting [2,1,3] -> [4,3,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789091110s) [2,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.518676758s@ mbc={}] start_peering_interval up [1,2,0] -> [2,3,1], acting [1,2,0] -> [2,3,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.790127754s) [4,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519653320s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost 
ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789091110s) [2,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.518676758s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.15( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,4,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.810449600s) [3,1,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.540527344s@ mbc={}] start_peering_interval up [4,3,2] -> [3,1,5], acting [4,3,2] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789306641s) [4,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519409180s@ mbc={}] start_peering_interval up [1,2,0] -> [4,5,0], acting [1,2,0] -> [4,5,0], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.810409546s) [3,1,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.540527344s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788722038s) [4,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.518920898s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,5], acting [2,1,3] -> [4,0,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789281845s) [4,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519409180s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788675308s) [4,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.518920898s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.810739517s) [1,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541137695s@ mbc={}] start_peering_interval up [4,3,2] -> [1,0,5], acting [4,3,2] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789269447s) [4,2,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519653320s@ TIME_FOR_DEEP mbc={}] start_peering_interval up [2,1,3] -> [4,2,0], acting [2,1,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 
4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.810698509s) [1,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541137695s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789243698s) [4,2,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519653320s@ TIME_FOR_DEEP mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788599968s) [1,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519042969s@ mbc={}] start_peering_interval up [1,2,0] -> [1,5,0], acting [1,2,0] -> [1,5,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788562775s) [1,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519042969s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[7.b( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788199425s) [2,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519042969s@ mbc={}] start_peering_interval up [1,2,0] -> [2,1,0], acting [1,2,0] -> [2,1,0], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788105011s) [3,4,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.518920898s@ mbc={}] start_peering_interval up [2,1,3] -> [3,4,2], acting [2,1,3] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788076401s) [3,4,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.518920898s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788199425s) [2,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.519042969s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.809693336s) [0,1,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.540527344s@ mbc={}] start_peering_interval up [4,3,2] -> [0,1,2], acting [4,3,2] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 
4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.809654236s) [0,1,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.540527344s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.781600952s) [1,5,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.512573242s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,3], acting [2,1,3] -> [1,5,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.781556129s) [1,5,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.512573242s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,0,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.8( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.787968636s) [1,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520141602s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,5], acting [2,1,3] -> [1,0,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.809012413s) [4,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541259766s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,5], acting [4,3,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.787913322s) [1,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520141602s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.808892250s) [4,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541259766s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 
pg[5.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.808156013s) [3,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.540771484s@ mbc={}] start_peering_interval up [4,3,2] -> [3,2,4], acting [4,3,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.808106422s) [3,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.540771484s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.787281990s) [1,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520385742s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,0], acting [2,1,3] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.787228584s) [1,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520385742s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.786955833s) [1,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520385742s@ mbc={}] start_peering_interval up [2,1,3] -> [1,3,2], acting [2,1,3] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.808121681s) [1,3,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541503906s@ mbc={}] start_peering_interval up [4,3,2] -> [1,3,5], acting [4,3,2] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.795789719s) [3,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.529052734s@ mbc={}] start_peering_interval up [1,2,0] -> [3,5,1], acting [1,2,0] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.795028687s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.528442383s@ mbc={}] start_peering_interval up [1,2,0] -> [3,4,5], acting [1,2,0] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=38/39 n=0 ec=38/19 
lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.786899567s) [1,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520385742s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.808041573s) [1,3,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541503906s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.794989586s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.528442383s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.795613289s) [3,5,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.529052734s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.807971001s) [4,5,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541503906s@ mbc={}] start_peering_interval up [4,3,2] -> [4,5,0], acting [4,3,2] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.807746887s) [4,5,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541503906s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785468102s) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519409180s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,4], acting [1,2,0] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785444260s) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519409180s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.807755470s) [3,5,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541870117s@ mbc={}] start_peering_interval up [4,3,2] -> [3,5,4], acting [4,3,2] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.786277771s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520507812s@ mbc={}] start_peering_interval up [2,1,3] -> [3,4,5], acting [2,1,3] -> [3,4,5], 
acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.807718277s) [3,5,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541870117s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785587311s) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519775391s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,4], acting [1,2,0] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785565376s) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519775391s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.786233902s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520507812s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.807924271s) [0,2,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.542114258s@ mbc={}] start_peering_interval up [4,3,2] -> [0,2,1], acting [4,3,2] -> [0,2,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785496712s) [2,0,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519775391s@ mbc={}] start_peering_interval up [1,2,0] -> [2,0,4], acting [1,2,0] -> [2,0,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.807885170s) [0,2,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.542114258s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785496712s) [2,0,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.519775391s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[7.7( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.807154655s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541870117s@ mbc={}] start_peering_interval up [4,3,2] -> [0,5,1], acting [4,3,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 
4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.807128906s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541870117s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.793153763s) [3,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.528442383s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,2], acting [1,2,0] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.786029816s) [1,0,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.521240234s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,2], acting [2,1,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785716057s) [3,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520996094s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,2], acting [2,1,3] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.793113708s) [3,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.528442383s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785963058s) [1,0,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.521240234s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785610199s) [3,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520996094s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.806047440s) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541625977s@ mbc={}] start_peering_interval up [4,3,2] -> [2,3,1], acting [4,3,2] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.806047440s) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.541625977s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost 
ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.793223381s) [3,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.528808594s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,2], acting [1,2,0] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.793184280s) [3,1,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.528808594s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785979271s) [3,1,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.521728516s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,5], acting [2,1,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785934448s) [3,1,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.521728516s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[7.d( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.16( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785124779s) [2,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.521606445s@ mbc={}] start_peering_interval up [2,1,3] -> [2,1,0], acting [2,1,3] -> [2,1,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.785124779s) [2,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.521606445s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,3,1] r=0 lpr=44 
pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.804385185s) [4,0,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541259766s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,2], acting [4,3,2] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.804343224s) [4,0,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541259766s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.784622192s) [3,5,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.521484375s@ mbc={}] start_peering_interval up [2,1,3] -> [3,5,4], acting [2,1,3] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.784579277s) [3,5,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.521484375s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.10( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.804797173s) [3,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541748047s@ mbc={}] start_peering_interval up [4,3,2] -> [3,2,4], acting [4,3,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.792292595s) [4,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.529296875s@ mbc={}] start_peering_interval up [1,2,0] -> [4,0,5], acting [1,2,0] -> [4,0,5], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.804778099s) [3,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541748047s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.791565895s) [4,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.528442383s@ mbc={}] start_peering_interval up [1,2,0] -> [4,2,3], acting [1,2,0] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 
les/c/f=39/39/0 sis=44 pruub=14.792240143s) [4,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.529296875s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.781046867s) [4,5,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.518066406s@ mbc={}] start_peering_interval up [1,2,0] -> [4,5,3], acting [1,2,0] -> [4,5,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.791333199s) [4,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.528442383s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.800980568s) [3,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.176879883s@ mbc={}] start_peering_interval up [3,5,1] -> [3,2,4], acting [3,5,1] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.882706642s) [2,4,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.258544922s@ mbc={}] start_peering_interval up [0,5,1] -> [2,4,0], acting [0,5,1] -> [2,4,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.781001091s) [4,5,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.518066406s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.882594109s) [2,4,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.258544922s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.784317017s) [2,4,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.521484375s@ mbc={}] start_peering_interval up [2,1,3] -> [2,4,3], acting [2,1,3] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.800791740s) [3,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.176879883s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.784317017s) [2,4,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.521484375s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 
les/c/f=41/41/0 sis=44 pruub=8.804566383s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541870117s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1a( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.888669968s) [5,4,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.265014648s@ mbc={}] start_peering_interval up [0,5,1] -> [5,4,0], acting [0,5,1] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1a( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.888669968s) [5,4,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.265014648s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.805398941s) [0,2,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.181884766s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,1], acting [3,5,1] -> [0,2,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.805336952s) [0,2,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.181884766s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.804566383s) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.541870117s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.780695915s) [4,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.518188477s@ mbc={}] start_peering_interval up [1,2,0] -> [4,3,2], acting [1,2,0] -> [4,3,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.800279617s) [4,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.177001953s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.880728722s) [3,5,4] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257568359s@ mbc={}] start_peering_interval up [0,5,1] -> [3,5,4], acting [0,5,1] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 
les/c/f=41/41/0 sis=44 pruub=8.800252914s) [4,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.177001953s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.880685806s) [3,5,4] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257568359s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.783981323s) [1,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.521484375s@ mbc={}] start_peering_interval up [2,1,3] -> [1,2,3], acting [2,1,3] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.780631065s) [4,3,2] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.518188477s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.783941269s) [1,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.521484375s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.791339874s) [1,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.529296875s@ mbc={}] start_peering_interval up [1,2,0] -> [1,2,3], acting [1,2,0] -> [1,2,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.782950401s) [4,2,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520874023s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,0], acting [2,1,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.782888412s) [4,2,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520874023s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.791294098s) [1,2,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.529296875s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.13( empty local-lis/les=40/41 
n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.803816795s) [2,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.181274414s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.803771973s) [2,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.181274414s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.16( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.879934311s) [0,5,4] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257446289s@ mbc={}] start_peering_interval up [0,5,1] -> [0,5,4], acting [0,5,1] -> [0,5,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.887756348s) [3,1,2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.265258789s@ mbc={}] start_peering_interval up [0,5,1] -> [3,1,2], acting [0,5,1] -> [3,1,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.16( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.879881859s) [0,5,4] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257446289s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.887693405s) [3,1,2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.265258789s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.803182602s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541748047s@ mbc={}] start_peering_interval up [4,3,2] -> [1,5,3], acting [4,3,2] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.880262375s) [1,0,2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257690430s@ mbc={}] start_peering_interval up [0,5,1] -> [1,0,2], acting [0,5,1] -> [1,0,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.10( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.879464149s) [0,1,2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257324219s@ mbc={}] start_peering_interval up [0,5,1] -> 
[0,1,2], acting [0,5,1] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.879941940s) [1,0,2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257690430s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.10( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.879429817s) [0,1,2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257324219s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.799038887s) [0,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.176879883s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.803484917s) [4,0,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.181396484s@ mbc={}] start_peering_interval up [3,5,1] -> [4,0,5], acting [3,5,1] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.799014091s) [0,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.176879883s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.803135872s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541748047s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.803173065s) [2,0,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541992188s@ mbc={}] start_peering_interval up [4,3,2] -> [2,0,4], acting [4,3,2] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.803408623s) [4,0,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.181396484s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.17( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.803173065s) [2,0,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.541992188s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 
pg[2.1d( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.782238007s) [4,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.521240234s@ mbc={}] start_peering_interval up [2,1,3] -> [4,5,0], acting [2,1,3] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.802971840s) [3,5,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.181518555s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,4], acting [3,5,1] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.782176018s) [4,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.521240234s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.802913666s) [3,5,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.181518555s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.802534103s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.542114258s@ mbc={}] start_peering_interval up [4,3,2] -> [0,5,1], acting [4,3,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789814949s) [0,5,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.529541016s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,4], acting [1,2,0] -> [0,5,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.802492142s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.542114258s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.789787292s) [0,5,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.529541016s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.780883789s) 
[3,1,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520751953s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,5], acting [2,1,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.780850410s) [3,1,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520751953s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.801945686s) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541870117s@ mbc={}] start_peering_interval up [4,3,2] -> [2,1,3], acting [4,3,2] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.878170967s) [3,4,2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257324219s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,2], acting [0,5,1] -> [3,4,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.801945686s) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.541870117s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.797888756s) [3,4,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.177001953s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,2], acting [3,5,1] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.878088951s) [3,4,2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257324219s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.797846794s) [3,4,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.177001953s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.780447960s) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520629883s@ mbc={}] start_peering_interval up [2,1,3] -> [0,2,4], acting [2,1,3] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.12( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.877992630s) [4,2,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257324219s@ mbc={}] start_peering_interval up [0,5,1] -> [4,2,0], acting [0,5,1] 
-> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.12( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.877881050s) [4,2,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257324219s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788957596s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.529052734s@ mbc={}] start_peering_interval up [1,2,0] -> [3,4,5], acting [1,2,0] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.797348976s) [3,4,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.176879883s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,5], acting [3,5,1] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788909912s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.529052734s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.797272682s) [3,4,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.176879883s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.780338287s) [0,2,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520629883s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788893700s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.529541016s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.788856506s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.529541016s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.800991058s) [5,0,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541748047s@ mbc={}] start_peering_interval up [4,3,2] -> [5,0,1], acting [4,3,2] -> [5,0,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.800858498s) 
[5,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541625977s@ mbc={}] start_peering_interval up [4,3,2] -> [5,1,3], acting [4,3,2] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.778921127s) [5,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519775391s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,4], acting [1,2,0] -> [5,3,4], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.800808907s) [5,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541625977s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.800966263s) [5,0,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541748047s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.778886795s) [5,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519775391s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.779488564s) [5,0,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520507812s@ mbc={}] start_peering_interval up [2,1,3] -> [5,0,4], acting [2,1,3] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.779406548s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520507812s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,3], acting [2,1,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.779422760s) [5,0,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520507812s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.779378891s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520507812s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.800316811s) [5,3,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541625977s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,4], acting [4,3,2] -> [5,3,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.800279617s) [5,3,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541625977s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.779614449s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520996094s@ mbc={}] start_peering_interval up [2,1,3] -> [5,3,1], acting [2,1,3] -> [5,3,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.787083626s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.528442383s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.876893044s) [3,4,5] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257690430s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.876801491s) [3,4,5] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257690430s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.779586792s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520996094s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.787043571s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.528442383s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.777908325s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519409180s@ mbc={}] start_peering_interval up [1,2,0] -> [5,1,3], acting [1,2,0] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 
les/c/f=41/41/0 sis=44 pruub=8.795614243s) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.176757812s@ mbc={}] start_peering_interval up [3,5,1] -> [5,3,1], acting [3,5,1] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.876844406s) [5,3,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.258056641s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.876844406s) [5,3,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.258056641s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.777873039s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519409180s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.777475357s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519042969s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.795614243s) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1126.176757812s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.777452469s) [5,3,1] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519042969s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.802209854s) [0,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.183959961s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,4], acting [3,5,1] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.802184105s) [0,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.183959961s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.877211571s) [3,2,4] r=-1 lpr=44 pi=[42,44)/1 
crt=0'0 mlcod 0'0 active pruub 1128.258911133s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,4], acting [0,5,1] -> [3,2,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.9( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.875725746s) [0,1,5] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257568359s@ mbc={}] start_peering_interval up [0,5,1] -> [0,1,5], acting [0,5,1] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.9( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.875684738s) [0,1,5] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257568359s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.875923157s) [5,3,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.258056641s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.876885414s) [3,2,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.258911133s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.875923157s) [5,3,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.258056641s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.794071198s) [1,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.176391602s@ mbc={}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,0,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.793873787s) [1,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.176391602s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.799715996s) [1,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.182495117s@ mbc={}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], 
acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.799671173s) [1,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.182495117s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.876545906s) [1,5,3] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.259643555s@ mbc={}] start_peering_interval up [0,5,1] -> [1,5,3], acting [0,5,1] -> [1,5,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.876469612s) [1,5,3] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.259643555s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.3( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.875519753s) [5,4,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.258789062s@ mbc={}] start_peering_interval up [0,5,1] -> [5,4,0], acting [0,5,1] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.799275398s) [4,2,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.182495117s@ mbc={}] start_peering_interval up [3,5,1] -> [4,2,0], acting [3,5,1] -> [4,2,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1b( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.881691933s) [1,2,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.265014648s@ mbc={}] start_peering_interval up [0,5,1] -> [1,2,0], acting [0,5,1] -> [1,2,0], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.799246788s) [4,2,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.182495117s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1b( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.881648064s) [1,2,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.265014648s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.798241615s) [1,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 
1126.181762695s@ mbc={}] start_peering_interval up [3,5,1] -> [1,3,2], acting [3,5,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.3( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.875519753s) [5,4,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.258789062s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.798217773s) [1,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.181762695s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,0,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.875452042s) [3,5,1] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.259643555s@ mbc={}] start_peering_interval up [0,5,1] -> [3,5,1], acting [0,5,1] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.798100471s) [0,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.182250977s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.875418663s) [3,5,1] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.259643555s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.798047066s) [0,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.182250977s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.797337532s) [1,0,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.181762695s@ mbc={}] start_peering_interval up [3,5,1] -> [1,0,2], acting [3,5,1] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.797317505s) [1,0,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.181762695s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 
pg[6.18( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.874958038s) [0,2,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.259643555s@ mbc={}] start_peering_interval up [0,5,1] -> [0,2,4], acting [0,5,1] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.796986580s) [5,1,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.181762695s@ mbc={}] start_peering_interval up [3,5,1] -> [5,1,0], acting [3,5,1] -> [5,1,0], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.18( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.874904633s) [0,2,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.259643555s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.796986580s) [5,1,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1126.181762695s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.7( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.874594688s) [5,3,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.259521484s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.7( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.874594688s) [5,3,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.259521484s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.d( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.797719002s) [2,0,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.183349609s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,1], acting [3,5,1] -> [2,0,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.797664642s) [2,0,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.183349609s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.796644211s) [4,3,2] r=-1 lpr=44 
pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.182373047s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.796618462s) [4,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.182373047s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.873330116s) [3,2,1] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.259277344s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,1], acting [0,5,1] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.873305321s) [3,2,1] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.259277344s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.796882629s) [0,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.182983398s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.796820641s) [0,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.182983398s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.5( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.872108459s) [5,1,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.258300781s@ mbc={}] start_peering_interval up [0,5,1] -> [5,1,0], acting [0,5,1] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.b( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.870752335s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.257202148s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.5( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.872108459s) [5,1,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.258300781s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.b( v 
35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.870717049s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257202148s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.a( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.871720314s) [5,0,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.258300781s@ mbc={}] start_peering_interval up [0,5,1] -> [5,0,4], acting [0,5,1] -> [5,0,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.796231270s) [1,2,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.182861328s@ mbc={}] start_peering_interval up [3,5,1] -> [1,2,3], acting [3,5,1] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.a( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.871720314s) [5,0,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.258300781s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.796190262s) [1,2,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.182861328s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.1c( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.871242523s) [3,2,1] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257812500s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,1], acting [0,5,1] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.870870590s) [3,2,1] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257812500s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.19( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.877962112s) [5,3,1] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.265136719s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,1], acting [0,5,1] -> [5,3,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.19( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.877962112s) [5,3,1] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.265136719s@ mbc={}] state: transitioning to Primary Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 
les/c/f=41/41/0 sis=44 pruub=8.795728683s) [1,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.182983398s@ mbc={}] start_peering_interval up [3,5,1] -> [1,3,2], acting [3,5,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.872272491s) [4,5,3] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.259521484s@ mbc={}] start_peering_interval up [0,5,1] -> [4,5,3], acting [0,5,1] -> [4,5,3], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.795681953s) [1,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.182983398s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.877464294s) [3,4,5] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.264892578s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.872197151s) [4,5,3] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.259521484s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.795558929s) [4,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.183105469s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.877432823s) [3,4,5] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.264892578s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.795509338s) [4,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.183105469s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.871017456s) [1,3,5] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.258789062s@ mbc={}] start_peering_interval up [0,5,1] -> [1,3,5], acting [0,5,1] -> [1,3,5], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.794951439s) [0,5,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.182739258s@ mbc={}] 
start_peering_interval up [3,5,1] -> [0,5,1], acting [3,5,1] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.794930458s) [0,5,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.182739258s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.794860840s) [4,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.182739258s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.870967865s) [1,3,5] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.258789062s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.794834137s) [4,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.182739258s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.870925903s) [3,4,5] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.258911133s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.870869637s) [3,4,5] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.258911133s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.794602394s) [2,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.182739258s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.794574738s) [2,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.182739258s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.869439125s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257812500s@ mbc={}] start_peering_interval up [0,5,1] -> [2,1,3], acting [0,5,1] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/23 
lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.793379784s) [2,3,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.181884766s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,4], acting [3,5,1] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.870356560s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.258911133s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.793352127s) [2,3,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.181884766s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.870326042s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.258911133s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.5( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.869977951s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.258544922s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.9( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.870070457s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.258544922s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.5( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.869894028s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.258544922s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.1( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.867693901s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.256347656s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.3( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.870035172s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.258911133s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 
0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.1( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.867625237s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.256347656s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.9( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.869882584s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.258544922s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.3( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.869976044s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.258911133s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.869193077s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257812500s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.787750244s) [2,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.176757812s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.787722588s) [2,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.176757812s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.787587166s) [2,4,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.176757812s@ mbc={}] start_peering_interval up [3,5,1] -> [2,4,0], acting [3,5,1] -> [2,4,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.867554665s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.256591797s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.787540436s) [2,4,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.176757812s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.868709564s) [2,3,1] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1128.257934570s@ mbc={}] start_peering_interval up [0,5,1] 
-> [2,3,1], acting [0,5,1] -> [2,3,1], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.869459152s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.258789062s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.868687630s) [2,3,1] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.257934570s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.867433548s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.256591797s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.869412422s) [2,1,3] r=-1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.258789062s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.792197227s) [2,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.181640625s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.792165756s) [2,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.181640625s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.779070854s) [5,4,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541503906s@ mbc={}] start_peering_interval up [4,3,2] -> [5,4,0], acting [4,3,2] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.756442070s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.518920898s@ mbc={}] start_peering_interval up [1,2,0] -> [5,1,3], acting [1,2,0] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.778548241s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541137695s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,1], acting [4,3,2] -> [5,3,1], acting_primary 4 -> 5, 
up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.756377220s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.518920898s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.757187843s) [5,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519775391s@ mbc={}] start_peering_interval up [2,1,3] -> [5,3,4], acting [2,1,3] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.778964996s) [5,4,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541503906s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.778500557s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541137695s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.757133484s) [5,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519775391s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.778266907s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.541503906s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,1], acting [4,3,2] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.778231621s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.541503906s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.756553650s) [5,1,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.520141602s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,0], acting [2,1,3] -> [5,1,0], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.756167412s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.519897461s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,3], acting [2,1,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.756112099s) [5,1,3] r=-1 lpr=44 pi=[38,44)/1 
crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.519897461s@ mbc={}] state: transitioning to Stray Nov 23 03:00:43 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.756200790s) [5,1,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.520141602s@ mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,1,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,5,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.10( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [0,1,2] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,5,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.16( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,2,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,5,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,5,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,5,1] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.b( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,5,1] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.2( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,4,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,1,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,4,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.1e( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,1,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 
pg_epoch: 44 pg[5.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,5,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,1,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.1( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,4,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.b( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,1,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,5,4] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,1,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.17( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,1,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.12( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,4,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.18( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [0,2,4] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,0,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,2,1] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.11( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,1,2] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 
03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.12( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [4,2,0] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,5,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,0,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.17( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [1,0,2] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.4( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,2,1] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.d( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,5,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.c( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,0,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,5,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.13( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,4,2] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,3,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.c( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,2,4] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,5,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,2,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] 
state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.10( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,0,5] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.17( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,2,4] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,0,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,4,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,0,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.d( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,5,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 44 pg[5.b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,0,5] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,3,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[6.1b( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [1,2,0] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.19( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,3,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.15( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,3,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[3.10( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[6.5( empty local-lis/les=44/45 n=0 ec=42/31 
lis/c=42/42 les/c/f=43/43/0 sis=44) [5,1,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,2,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 44 pg[4.6( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,3,2] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[3.14( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,4,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[3.d( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,1,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[3.1c( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,4,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,0,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[6.19( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,3,1] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 
03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,0,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[6.1a( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,4,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,0,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[6.e( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,3,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[6.2( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,3,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[4.c( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 
les/c/f=41/41/0 sis=44) [5,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[6.7( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,3,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[6.3( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,4,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[6.a( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,0,4] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,0,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[32534]: osd.5 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,4] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,0,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,1,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[2.14( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,4,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react 
AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[3.13( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,1] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[2.1a( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,4,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[7.d( v 35'39 lc 35'13 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[7.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=35'39 mlcod 0'0 active+degraded m=2 mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[7.1( v 35'39 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,4] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[7.5( v 35'39 lc 35'9 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[7.7( v 35'39 lc 35'18 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=42/42 
les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[7.f( v 35'39 lc 35'1 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(2+1)=3}}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[6.d( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,3,1] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,1,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[7.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[7.9( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,1,3] r=0 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:44 localhost ceph-osd[31569]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 23 03:00:45 localhost ceph-osd[32534]: osd.5 pg_epoch: 46 pg[7.6( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.888601303s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.256591797s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:45 localhost ceph-osd[32534]: osd.5 pg_epoch: 46 pg[7.6( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.888490677s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.256591797s@ mbc={}] state: transitioning to Stray Nov 23 03:00:45 localhost ceph-osd[32534]: osd.5 pg_epoch: 46 pg[7.a( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.888399124s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.256713867s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:45 localhost ceph-osd[32534]: osd.5 pg_epoch: 46 pg[7.a( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 
les/c/f=43/43/0 sis=46 pruub=8.888222694s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.256713867s@ mbc={}] state: transitioning to Stray Nov 23 03:00:45 localhost ceph-osd[32534]: osd.5 pg_epoch: 46 pg[7.2( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.887271881s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.256591797s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:45 localhost ceph-osd[32534]: osd.5 pg_epoch: 46 pg[7.2( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.887023926s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.256591797s@ mbc={}] state: transitioning to Stray Nov 23 03:00:45 localhost ceph-osd[32534]: osd.5 pg_epoch: 46 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.886620522s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1128.256347656s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:45 localhost ceph-osd[32534]: osd.5 pg_epoch: 46 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.886557579s) [3,1,5] r=2 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1128.256347656s@ mbc={}] state: transitioning to Stray Nov 23 03:00:48 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts Nov 23 03:00:50 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 7.4 scrub starts Nov 23 03:00:53 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 4.c scrub starts Nov 23 03:00:53 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 4.c scrub ok Nov 23 03:00:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:00:54 localhost systemd[1]: tmp-crun.YF6xa4.mount: Deactivated successfully. 
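The peering records above all follow the same pattern: on an osdmap change each PG logs start_peering_interval with its old and new up/acting sets, and the "role x -> y" value is just the OSD's position in the acting set, with -1 meaning the OSD has dropped out (which is what sends the PG to Stray). A minimal illustrative sketch of that arithmetic, checked against transitions logged above; role_of is an illustrative name, not anything from the Ceph code or the captured log:

# Role bookkeeping as reported in the start_peering_interval lines above.
# For a replicated pool the role is the OSD's index in the acting set;
# -1 means the OSD is no longer a member, so the PG transitions to Stray.
def role_of(osd, acting):
    return acting.index(osd) if osd in acting else -1

# pg 4.d as seen by osd.5: acting [3,5,1] -> [2,1,3], logged as "role 1 -> -1"
assert role_of(5, [3, 5, 1]) == 1
assert role_of(5, [2, 1, 3]) == -1   # stray: osd.5 left the acting set

# pg 7.7 as seen by osd.2 at epoch 48: acting [2,1,3] -> [3,2,4], "role 0 -> 1"
assert role_of(2, [2, 1, 3]) == 0
assert role_of(2, [3, 2, 4]) == 1    # later, at epoch 56, it flips back to 0 (Primary)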
Nov 23 03:00:54 localhost podman[55980]: 2025-11-23 08:00:54.899016798 +0000 UTC m=+0.086318567 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12) Nov 23 03:00:55 localhost podman[55980]: 2025-11-23 08:00:55.112396206 +0000 UTC m=+0.299697925 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, architecture=x86_64, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Nov 23 03:00:55 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
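The two podman entries above embed the full Kolla/TripleO container definition as a Python-style config_data literal (image, healthcheck test, volumes, environment). If that payload needs to be inspected programmatically, a small sketch along these lines should work; extract_config_data and journal_line are illustrative names only, not part of podman or of the captured log:

# Illustrative helper: pull the config_data={...} payload out of a podman
# journal line such as the health_status/exec_died entries above and parse
# it with ast.literal_eval (the payload is written as a Python literal;
# this assumes, as here, no braces inside the quoted values).
import ast

def extract_config_data(line):
    start = line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[start:i + 1])
    raise ValueError("unterminated config_data payload")

# cfg = extract_config_data(journal_line)   # journal_line: one of the entries above
# cfg["healthcheck"]["test"]  -> '/openstack/healthcheck'
# cfg["image"]                -> 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1'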
Nov 23 03:00:55 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.5 scrub starts Nov 23 03:00:55 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.5 scrub ok Nov 23 03:00:55 localhost ceph-osd[31569]: osd.2 pg_epoch: 48 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.980096817s) [3,2,4] r=1 lpr=48 pi=[44,48)/1 crt=35'39 mlcod 0'0 active pruub 1146.746826172s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:55 localhost ceph-osd[31569]: osd.2 pg_epoch: 48 pg[7.3( v 35'39 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.977787018s) [3,2,4] r=1 lpr=48 pi=[44,48)/1 crt=35'39 mlcod 0'0 active pruub 1146.744750977s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:55 localhost ceph-osd[31569]: osd.2 pg_epoch: 48 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.979978561s) [3,2,4] r=1 lpr=48 pi=[44,48)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1146.746826172s@ mbc={}] state: transitioning to Stray Nov 23 03:00:55 localhost ceph-osd[31569]: osd.2 pg_epoch: 48 pg[7.3( v 35'39 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.977702141s) [3,2,4] r=1 lpr=48 pi=[44,48)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1146.744750977s@ mbc={}] state: transitioning to Stray Nov 23 03:00:55 localhost ceph-osd[31569]: osd.2 pg_epoch: 48 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.979398727s) [3,2,4] r=1 lpr=48 pi=[44,48)/1 crt=35'39 mlcod 0'0 active pruub 1146.746948242s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:55 localhost ceph-osd[31569]: osd.2 pg_epoch: 48 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.979332924s) [3,2,4] r=1 lpr=48 pi=[44,48)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1146.746948242s@ mbc={}] state: transitioning to Stray Nov 23 03:00:55 localhost ceph-osd[31569]: osd.2 pg_epoch: 48 pg[7.b( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.979945183s) [3,2,4] r=1 lpr=48 pi=[44,48)/1 crt=35'39 mlcod 0'0 active pruub 1146.747802734s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:00:55 localhost ceph-osd[31569]: osd.2 pg_epoch: 48 pg[7.b( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.979791641s) [3,2,4] r=1 lpr=48 pi=[44,48)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1146.747802734s@ mbc={}] state: transitioning to Stray Nov 23 03:00:56 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.1a scrub starts Nov 23 03:00:56 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.1a scrub ok Nov 23 03:00:56 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.1f 
deep-scrub starts Nov 23 03:00:56 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok Nov 23 03:00:57 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.e scrub starts Nov 23 03:00:57 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.e scrub ok Nov 23 03:00:58 localhost python3[56024]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:00 localhost python3[56040]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:01 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.2 scrub starts Nov 23 03:01:01 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.2 scrub ok Nov 23 03:01:02 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.3 scrub starts Nov 23 03:01:02 localhost python3[56067]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:02 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.3 scrub ok Nov 23 03:01:02 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 3.f scrub starts Nov 23 03:01:02 localhost ceph-osd[32534]: osd.5 pg_epoch: 50 pg[7.c( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.710488319s) [0,5,4] r=1 lpr=50 pi=[42,50)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1152.257202148s@ TIME_FOR_DEEP mbc={}] start_peering_interval up [5,1,3] -> [0,5,4], acting [5,1,3] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:02 localhost ceph-osd[32534]: osd.5 pg_epoch: 50 pg[7.4( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.710814476s) [0,5,4] r=1 lpr=50 pi=[42,50)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1152.257568359s@ mbc={}] start_peering_interval up [5,1,3] -> [0,5,4], acting [5,1,3] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:02 localhost ceph-osd[32534]: osd.5 pg_epoch: 50 pg[7.4( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.710713387s) [0,5,4] r=1 lpr=50 pi=[42,50)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1152.257568359s@ mbc={}] state: transitioning to Stray Nov 23 03:01:02 localhost ceph-osd[32534]: osd.5 pg_epoch: 50 pg[7.c( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.710383415s) [0,5,4] r=1 lpr=50 pi=[42,50)/1 crt=35'39 lcod 
0'0 mlcod 0'0 unknown NOTIFY pruub 1152.257202148s@ TIME_FOR_DEEP mbc={}] state: transitioning to Stray Nov 23 03:01:02 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 3.f scrub ok Nov 23 03:01:03 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 4.5 scrub starts Nov 23 03:01:03 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 4.5 scrub ok Nov 23 03:01:04 localhost python3[56115]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:04 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 3.16 scrub starts Nov 23 03:01:04 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 3.16 scrub ok Nov 23 03:01:04 localhost python3[56158]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884864.0251937-92282-135340340953355/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=5f137984986c8cf5df5aec7749430e0dc129d0db backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:04 localhost ceph-osd[31569]: osd.2 pg_epoch: 52 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52 pruub=11.783847809s) [4,0,2] r=2 lpr=52 pi=[44,52)/1 crt=35'39 mlcod 0'0 active pruub 1154.744750977s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [4,0,2], acting [2,1,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:04 localhost ceph-osd[31569]: osd.2 pg_epoch: 52 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52 pruub=11.783713341s) [4,0,2] r=2 lpr=52 pi=[44,52)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1154.744750977s@ mbc={}] state: transitioning to Stray Nov 23 03:01:04 localhost ceph-osd[31569]: osd.2 pg_epoch: 52 pg[7.5( v 35'39 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52 pruub=11.784072876s) [4,0,2] r=2 lpr=52 pi=[44,52)/1 crt=35'39 mlcod 0'0 active pruub 1154.745727539s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [4,0,2], acting [2,1,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:04 localhost ceph-osd[31569]: osd.2 pg_epoch: 52 pg[7.5( v 35'39 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52 pruub=11.783924103s) [4,0,2] r=2 lpr=52 pi=[44,52)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1154.745727539s@ mbc={}] state: transitioning to Stray Nov 23 03:01:05 localhost sshd[56173]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:01:05 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts Nov 23 03:01:05 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok Nov 23 03:01:07 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.11 scrub starts Nov 23 03:01:07 localhost ceph-osd[32534]: osd.5 pg_epoch: 54 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=11.099583626s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 luod=0'0 crt=35'39 
lcod 0'0 mlcod 0'0 active pruub 1152.541015625s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:07 localhost ceph-osd[32534]: osd.5 pg_epoch: 54 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=11.099473000s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1152.541015625s@ mbc={}] state: transitioning to Stray Nov 23 03:01:07 localhost ceph-osd[32534]: osd.5 pg_epoch: 54 pg[7.6( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=11.022383690s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1152.464355469s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:07 localhost ceph-osd[32534]: osd.5 pg_epoch: 54 pg[7.6( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=11.021992683s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1152.464355469s@ mbc={}] state: transitioning to Stray Nov 23 03:01:07 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.11 scrub ok Nov 23 03:01:08 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.19 scrub starts Nov 23 03:01:08 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.19 scrub ok Nov 23 03:01:08 localhost ceph-osd[31569]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54) [0,2,4] r=1 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:01:08 localhost ceph-osd[31569]: osd.2 pg_epoch: 54 pg[7.6( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54) [0,2,4] r=1 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:01:08 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 3.13 scrub starts Nov 23 03:01:09 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 3.13 scrub ok Nov 23 03:01:09 localhost ceph-osd[31569]: osd.2 pg_epoch: 56 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=48/49 n=1 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=11.388533592s) [2,1,3] r=0 lpr=56 pi=[48,56)/1 luod=0'0 crt=35'39 mlcod 0'0 active pruub 1158.972534180s@ mbc={}] start_peering_interval up [3,2,4] -> [2,1,3], acting [3,2,4] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:09 localhost ceph-osd[31569]: osd.2 pg_epoch: 56 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=48/49 n=1 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=11.388533592s) [2,1,3] r=0 lpr=56 pi=[48,56)/1 crt=35'39 mlcod 0'0 unknown pruub 1158.972534180s@ mbc={}] state: transitioning to Primary Nov 23 03:01:09 localhost ceph-osd[31569]: osd.2 pg_epoch: 56 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=48/49 n=1 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=11.387823105s) [2,1,3] r=0 lpr=56 pi=[48,56)/1 luod=0'0 crt=35'39 mlcod 0'0 active pruub 1158.972534180s@ mbc={}] start_peering_interval up [3,2,4] -> [2,1,3], acting [3,2,4] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 
03:01:09 localhost ceph-osd[31569]: osd.2 pg_epoch: 56 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=48/49 n=1 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=11.387823105s) [2,1,3] r=0 lpr=56 pi=[48,56)/1 crt=35'39 mlcod 0'0 unknown pruub 1158.972534180s@ mbc={}] state: transitioning to Primary Nov 23 03:01:09 localhost python3[56222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:10 localhost python3[56265]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884869.3579292-92282-16643422123895/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=8a18e979d41caf333cb312628abb5051e6d0049c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:10 localhost ceph-osd[31569]: osd.2 pg_epoch: 57 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=56/57 n=1 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56) [2,1,3] r=0 lpr=56 pi=[48,56)/1 crt=35'39 mlcod 0'0 active+degraded mbc={255={(2+1)=3}}] state: react AllReplicasActivated Activating complete Nov 23 03:01:10 localhost ceph-osd[31569]: osd.2 pg_epoch: 57 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=56/57 n=1 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56) [2,1,3] r=0 lpr=56 pi=[48,56)/1 crt=35'39 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Nov 23 03:01:11 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.7 scrub starts Nov 23 03:01:11 localhost ceph-osd[32534]: osd.5 pg_epoch: 58 pg[7.8( v 35'39 (0'0,35'39] local-lis/les=42/43 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=58 pruub=15.001105309s) [3,2,1] r=-1 lpr=58 pi=[42,58)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1160.257446289s@ mbc={}] start_peering_interval up [5,1,3] -> [3,2,1], acting [5,1,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:11 localhost ceph-osd[32534]: osd.5 pg_epoch: 58 pg[7.8( v 35'39 (0'0,35'39] local-lis/les=42/43 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=58 pruub=15.000843048s) [3,2,1] r=-1 lpr=58 pi=[42,58)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1160.257446289s@ mbc={}] state: transitioning to Stray Nov 23 03:01:11 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.7 scrub ok Nov 23 03:01:12 localhost ceph-osd[31569]: osd.2 pg_epoch: 58 pg[7.8( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=58) [3,2,1] r=1 lpr=58 pi=[42,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:01:13 localhost ceph-osd[31569]: osd.2 pg_epoch: 60 pg[7.9( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/45/0 sis=60 pruub=11.089028358s) [0,4,2] r=2 lpr=60 pi=[44,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1162.748901367s@ mbc={}] start_peering_interval up [2,1,3] -> [0,4,2], acting [2,1,3] -> [0,4,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:13 localhost ceph-osd[31569]: osd.2 pg_epoch: 60 pg[7.9( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 
les/c/f=45/45/0 sis=60 pruub=11.088939667s) [0,4,2] r=2 lpr=60 pi=[44,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1162.748901367s@ mbc={}] state: transitioning to Stray Nov 23 03:01:14 localhost python3[56327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:15 localhost python3[56370]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884874.4913092-92282-67033041904439/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=ae43e71821d6a319ccba3331b262b98567ce770b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:15 localhost ceph-osd[32534]: osd.5 pg_epoch: 62 pg[7.a( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=11.052675247s) [4,0,5] r=2 lpr=62 pi=[46,62)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1160.463989258s@ mbc={}] start_peering_interval up [3,1,5] -> [4,0,5], acting [3,1,5] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:15 localhost ceph-osd[32534]: osd.5 pg_epoch: 62 pg[7.a( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=11.052523613s) [4,0,5] r=2 lpr=62 pi=[46,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1160.463989258s@ mbc={}] state: transitioning to Stray Nov 23 03:01:16 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 3.14 scrub starts Nov 23 03:01:16 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 3.14 scrub ok Nov 23 03:01:18 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.14 scrub starts Nov 23 03:01:18 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.14 scrub ok Nov 23 03:01:19 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts Nov 23 03:01:19 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok Nov 23 03:01:21 localhost python3[56432]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:21 localhost python3[56477]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884881.2331069-92639-18456444859149/source _original_basename=tmpk54ex9lj follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:22 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.3 scrub starts Nov 23 03:01:22 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.3 scrub ok Nov 23 03:01:22 localhost python3[56539]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:23 localhost 
python3[56582]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884882.660762-92815-99058781186030/source _original_basename=tmp7yggfde0 follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:23 localhost ceph-osd[32534]: osd.5 pg_epoch: 64 pg[7.c( v 35'39 (0'0,35'39] local-lis/les=50/51 n=1 ec=42/33 lis/c=50/50 les/c/f=51/51/0 sis=64 pruub=12.218185425s) [2,3,4] r=-1 lpr=64 pi=[50,64)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1169.585693359s@ TIME_FOR_DEEP mbc={}] start_peering_interval up [0,5,4] -> [2,3,4], acting [0,5,4] -> [2,3,4], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:23 localhost ceph-osd[32534]: osd.5 pg_epoch: 64 pg[7.c( v 35'39 (0'0,35'39] local-lis/les=50/51 n=1 ec=42/33 lis/c=50/50 les/c/f=51/51/0 sis=64 pruub=12.217948914s) [2,3,4] r=-1 lpr=64 pi=[50,64)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1169.585693359s@ TIME_FOR_DEEP mbc={}] state: transitioning to Stray Nov 23 03:01:23 localhost ceph-osd[31569]: osd.2 pg_epoch: 64 pg[7.c( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=50/50 les/c/f=51/51/0 sis=64) [2,3,4] r=0 lpr=64 pi=[50,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 23 03:01:23 localhost python3[56612]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None Nov 23 03:01:24 localhost python3[56630]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:01:24 localhost ceph-osd[31569]: osd.2 pg_epoch: 65 pg[7.c( v 35'39 lc 35'16 (0'0,35'39] local-lis/les=64/65 n=1 ec=42/33 lis/c=50/50 les/c/f=51/51/0 sis=64) [2,3,4] r=0 lpr=64 pi=[50,64)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Nov 23 03:01:24 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.1 deep-scrub starts Nov 23 03:01:24 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.1 deep-scrub ok Nov 23 03:01:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
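Editor's note: the osd.2/osd.5 "start_peering_interval" and "transitioning to Stray/Primary" entries above record pool-7 placement groups changing their up/acting sets as the OSD map moves between epochs. The following is a minimal sketch of mine (not part of the deployment tooling) for pulling those transitions out of journal lines in the format shown above, one entry per line as journalctl emits them:

```python
import re
import sys

# Matches peering entries like the ones logged above, e.g.
#   "... pg[7.c( v 35'39 ... ] start_peering_interval up [5,1,3] -> [0,5,4], acting ..."
PEERING_RE = re.compile(
    r"pg\[(?P<pgid>\d+\.[0-9a-f]+)\(.*?"
    r"start_peering_interval up \[(?P<up_old>[\d,]+)\] -> \[(?P<up_new>[\d,]+)\]"
)

def acting_set_changes(lines):
    """Yield (pgid, old_up_set, new_up_set) for every peering transition found."""
    for line in lines:
        m = PEERING_RE.search(line)
        if m:
            old = [int(x) for x in m.group("up_old").split(",")]
            new = [int(x) for x in m.group("up_new").split(",")]
            yield m.group("pgid"), old, new

if __name__ == "__main__":
    # Usage: journalctl -u ceph-osd@... | python3 peering_changes.py
    for pgid, old, new in acting_set_changes(sys.stdin):
        print(f"{pgid}: up {old} -> {new}")
```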
Nov 23 03:01:25 localhost ceph-osd[31569]: osd.2 pg_epoch: 66 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=42/33 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=12.310517311s) [2,3,1] r=0 lpr=66 pi=[52,66)/1 luod=0'0 crt=35'39 mlcod 0'0 active pruub 1176.045532227s@ mbc={}] start_peering_interval up [4,0,2] -> [2,3,1], acting [4,0,2] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:25 localhost ceph-osd[31569]: osd.2 pg_epoch: 66 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=42/33 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=12.310517311s) [2,3,1] r=0 lpr=66 pi=[52,66)/1 crt=35'39 mlcod 0'0 unknown pruub 1176.045532227s@ mbc={}] state: transitioning to Primary Nov 23 03:01:25 localhost podman[56803]: 2025-11-23 08:01:25.637823197 +0000 UTC m=+0.102722065 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z) Nov 23 03:01:25 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.8 scrub starts Nov 23 03:01:25 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.8 
scrub ok Nov 23 03:01:25 localhost ansible-async_wrapper.py[56802]: Invoked with 679632143056 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884885.176919-92928-19870933371425/AnsiballZ_command.py _ Nov 23 03:01:25 localhost ansible-async_wrapper.py[56833]: Starting module and watcher Nov 23 03:01:25 localhost ansible-async_wrapper.py[56833]: Start watching 56834 (3600) Nov 23 03:01:25 localhost ansible-async_wrapper.py[56834]: Start module (56834) Nov 23 03:01:25 localhost ansible-async_wrapper.py[56802]: Return async_wrapper task started. Nov 23 03:01:25 localhost podman[56803]: 2025-11-23 08:01:25.821487042 +0000 UTC m=+0.286385880 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step1, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:01:25 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
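Editor's note: ansible-async_wrapper.py above forks the long-running step task (job id 679632143056, timeout 3600s) and the controller then polls it with async_status (jid=679632143056.56802, _async_dir=/tmp/.ansible_async), as the status calls just below at 03:01:26 and 03:01:36 show. A rough sketch of that polling loop, assuming the job file is the JSON document Ansible's async runner writes (with "started"/"finished" keys); the paths and keys here are my reading of the log, not taken from the playbook:

```python
import json
import time
from pathlib import Path

ASYNC_DIR = Path("/tmp/.ansible_async")   # _async_dir from the async_status invocation above
JID = "679632143056.56802"                # jid from the same invocation

def poll_async_job(jid: str, interval: int = 10, timeout: int = 3600) -> dict:
    """Poll the async job file until it reports finished or the timeout expires."""
    job_file = ASYNC_DIR / jid
    deadline = time.time() + timeout
    while time.time() < deadline:
        if job_file.exists():
            status = json.loads(job_file.read_text())
            if status.get("finished"):
                return status              # assumed to include rc/stdout on completion
        time.sleep(interval)
    raise TimeoutError(f"async job {jid} did not finish within {timeout}s")
```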
Nov 23 03:01:25 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.7 scrub starts Nov 23 03:01:26 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.7 scrub ok Nov 23 03:01:26 localhost python3[56852]: ansible-ansible.legacy.async_status Invoked with jid=679632143056.56802 mode=status _async_dir=/tmp/.ansible_async Nov 23 03:01:26 localhost ceph-osd[31569]: osd.2 pg_epoch: 67 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=66/67 n=1 ec=42/33 lis/c=52/52 les/c/f=53/53/0 sis=66) [2,3,1] r=0 lpr=66 pi=[52,66)/1 crt=35'39 mlcod 0'0 active+degraded mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete Nov 23 03:01:27 localhost ceph-osd[31569]: osd.2 pg_epoch: 68 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=42/33 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=12.786797523s) [3,1,5] r=-1 lpr=68 pi=[54,68)/1 luod=0'0 crt=35'39 mlcod 0'0 active pruub 1178.612060547s@ mbc={}] start_peering_interval up [0,2,4] -> [3,1,5], acting [0,2,4] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:27 localhost ceph-osd[31569]: osd.2 pg_epoch: 68 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=42/33 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=12.786701202s) [3,1,5] r=-1 lpr=68 pi=[54,68)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1178.612060547s@ mbc={}] state: transitioning to Stray Nov 23 03:01:27 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.2 scrub starts Nov 23 03:01:27 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.2 scrub ok Nov 23 03:01:28 localhost ceph-osd[32534]: osd.5 pg_epoch: 68 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=54/54 les/c/f=55/55/0 sis=68) [3,1,5] r=2 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:01:29 localhost ceph-osd[31569]: osd.2 pg_epoch: 70 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=56/57 n=1 ec=42/33 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.031123161s) [0,4,5] r=-1 lpr=70 pi=[56,70)/1 crt=35'39 mlcod 0'0 active pruub 1180.963989258s@ mbc={255={}}] start_peering_interval up [2,1,3] -> [0,4,5], acting [2,1,3] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 23 03:01:29 localhost ceph-osd[31569]: osd.2 pg_epoch: 70 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=56/57 n=1 ec=42/33 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.031013489s) [0,4,5] r=-1 lpr=70 pi=[56,70)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1180.963989258s@ mbc={}] state: transitioning to Stray Nov 23 03:01:29 localhost puppet-user[56855]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 03:01:29 localhost puppet-user[56855]: (file: /etc/puppet/hiera.yaml) Nov 23 03:01:29 localhost puppet-user[56855]: Warning: Undefined variable '::deploy_config_name'; Nov 23 03:01:29 localhost puppet-user[56855]: (file & line not available) Nov 23 03:01:29 localhost puppet-user[56855]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 03:01:29 localhost puppet-user[56855]: (file & line not available) Nov 23 03:01:29 localhost puppet-user[56855]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Nov 23 03:01:29 localhost puppet-user[56855]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Nov 23 03:01:30 localhost puppet-user[56855]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.13 seconds Nov 23 03:01:30 localhost puppet-user[56855]: Notice: Applied catalog in 0.04 seconds Nov 23 03:01:30 localhost puppet-user[56855]: Application: Nov 23 03:01:30 localhost puppet-user[56855]: Initial environment: production Nov 23 03:01:30 localhost puppet-user[56855]: Converged environment: production Nov 23 03:01:30 localhost puppet-user[56855]: Run mode: user Nov 23 03:01:30 localhost puppet-user[56855]: Changes: Nov 23 03:01:30 localhost puppet-user[56855]: Events: Nov 23 03:01:30 localhost puppet-user[56855]: Resources: Nov 23 03:01:30 localhost puppet-user[56855]: Total: 10 Nov 23 03:01:30 localhost puppet-user[56855]: Time: Nov 23 03:01:30 localhost puppet-user[56855]: Schedule: 0.00 Nov 23 03:01:30 localhost puppet-user[56855]: File: 0.00 Nov 23 03:01:30 localhost puppet-user[56855]: Augeas: 0.01 Nov 23 03:01:30 localhost puppet-user[56855]: Exec: 0.01 Nov 23 03:01:30 localhost puppet-user[56855]: Transaction evaluation: 0.03 Nov 23 03:01:30 localhost puppet-user[56855]: Catalog application: 0.04 Nov 23 03:01:30 localhost puppet-user[56855]: Config retrieval: 0.17 Nov 23 03:01:30 localhost puppet-user[56855]: Last run: 1763884890 Nov 23 03:01:30 localhost puppet-user[56855]: Filebucket: 0.00 Nov 23 03:01:30 localhost puppet-user[56855]: Total: 0.05 Nov 23 03:01:30 localhost puppet-user[56855]: Version: Nov 23 03:01:30 localhost puppet-user[56855]: Config: 1763884889 Nov 23 03:01:30 localhost puppet-user[56855]: Puppet: 7.10.0 Nov 23 03:01:30 localhost ansible-async_wrapper.py[56834]: Module complete (56834) Nov 23 03:01:30 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.1c scrub starts Nov 23 03:01:30 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.1c scrub ok Nov 23 03:01:30 localhost ansible-async_wrapper.py[56833]: Done in kid B. 
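Editor's note: the puppet-user summary above reports "Config: 1763884889" and "Last run: 1763884890" as Unix epochs. Converting them shows they line up with the 03:01:29-03:01:30 journal timestamps: the host logs local time (UTC-5 here, as the podman entries that print UTC make visible). A small sketch just for reading those numbers:

```python
from datetime import datetime, timezone

# Epoch values taken from the puppet-user run summary above.
PUPPET_TIMES = {"config": 1763884889, "last_run": 1763884890}

for name, epoch in PUPPET_TIMES.items():
    utc = datetime.fromtimestamp(epoch, tz=timezone.utc)
    print(f"{name}: {epoch} -> {utc:%Y-%m-%d %H:%M:%S} UTC")
# last_run: 1763884890 -> 2025-11-23 08:01:30 UTC, i.e. 03:01:30 in the host's local time.
```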
Nov 23 03:01:30 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.c scrub starts Nov 23 03:01:30 localhost ceph-osd[32534]: osd.5 pg_epoch: 70 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=56/56 les/c/f=57/57/0 sis=70) [0,4,5] r=2 lpr=70 pi=[56,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 23 03:01:30 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.c scrub ok Nov 23 03:01:31 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 3.3 scrub starts Nov 23 03:01:31 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 3.3 scrub ok Nov 23 03:01:32 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.1b scrub starts Nov 23 03:01:32 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.1b scrub ok Nov 23 03:01:35 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.1a scrub starts Nov 23 03:01:35 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 2.1a scrub ok Nov 23 03:01:36 localhost python3[57059]: ansible-ansible.legacy.async_status Invoked with jid=679632143056.56802 mode=status _async_dir=/tmp/.ansible_async Nov 23 03:01:36 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.18 scrub starts Nov 23 03:01:36 localhost python3[57075]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 03:01:36 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 5.18 scrub ok Nov 23 03:01:37 localhost python3[57091]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:01:37 localhost python3[57141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:38 localhost python3[57159]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp8cdghr0_ recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 03:01:38 localhost python3[57189]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:39 localhost python3[57292]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False 
copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Nov 23 03:01:39 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts Nov 23 03:01:39 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok Nov 23 03:01:40 localhost python3[57311]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:40 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 6.15 scrub starts Nov 23 03:01:40 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 6.15 scrub ok Nov 23 03:01:41 localhost python3[57343]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:01:41 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.f scrub starts Nov 23 03:01:41 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.f scrub ok Nov 23 03:01:41 localhost python3[57393]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:42 localhost python3[57411]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:42 localhost python3[57473]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:42 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.a scrub starts Nov 23 03:01:42 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.a scrub ok Nov 23 03:01:42 localhost python3[57491]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:43 localhost python3[57553]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:43 localhost python3[57571]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root 
dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:44 localhost python3[57633]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:44 localhost python3[57651]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:45 localhost python3[57681]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:01:45 localhost systemd[1]: Reloading. Nov 23 03:01:45 localhost systemd-rc-local-generator[57702]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:01:45 localhost systemd-sysv-generator[57706]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:01:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
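Editor's note on how file modes are spelled in the module invocations above: the containers-tmpwatch copy at 03:01:23 passes mode=493 (an integer, i.e. octal 0755), while the unit-file tasks here pass quoted octal strings such as mode=0700 and mode=0644, and the keyring copies earlier use mode=600. A quick sketch of the two spellings, since the decimal form is a common source of confusion when reading these logs; the octal interpretation of string modes is standard behaviour for Ansible's file-based modules:

```python
# mode=493 in the containers-tmpwatch task is the decimal rendering of octal 0755:
assert oct(493) == "0o755"

# A mode passed as a string is read as octal, so "0644" and "600" mean 0o644 and 0o600:
assert int("0644", 8) == 0o644 == 420
assert int("600", 8) == 0o600 == 384

print("493 ->", oct(493), "| '0644' ->", oct(int("0644", 8)), "| '600' ->", oct(int("600", 8)))
```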
Nov 23 03:01:45 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.2 scrub starts Nov 23 03:01:45 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.2 scrub ok Nov 23 03:01:46 localhost python3[57767]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:46 localhost python3[57785]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:46 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 6.d scrub starts Nov 23 03:01:46 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 6.d scrub ok Nov 23 03:01:47 localhost python3[57847]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:01:47 localhost python3[57865]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:47 localhost python3[57895]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:01:47 localhost systemd[1]: Reloading. Nov 23 03:01:47 localhost systemd-rc-local-generator[57922]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:01:47 localhost systemd-sysv-generator[57926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:01:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:01:48 localhost systemd[1]: Starting Create netns directory... Nov 23 03:01:48 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 23 03:01:48 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 23 03:01:48 localhost systemd[1]: Finished Create netns directory. 
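Editor's note: the two ansible-systemd invocations above (tripleo-container-shutdown at 03:01:45 and netns-placeholder at 03:01:47) each request state=started, enabled=True and daemon_reload=True, which is why systemd logs "Reloading." before each unit activates. Roughly the same sequence with plain systemctl calls, offered as a sketch of what is being asked for rather than what the module literally executes:

```python
import subprocess

def enable_and_start(unit: str) -> None:
    """Reload unit files, enable the unit for future boots, then start it now."""
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", unit], check=True)
    subprocess.run(["systemctl", "start", unit], check=True)

for unit in ("tripleo-container-shutdown.service", "netns-placeholder.service"):
    enable_and_start(unit)
```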
Nov 23 03:01:48 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.a scrub starts Nov 23 03:01:48 localhost python3[57954]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 23 03:01:48 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.a scrub ok Nov 23 03:01:50 localhost python3[58011]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Nov 23 03:01:50 localhost podman[58081]: 2025-11-23 08:01:50.645349559 +0000 UTC m=+0.091403680 container create c090aa1ac71eb0212ef81bce8f76085c31b7a60540fff82759a6a9469ab08975 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, config_id=tripleo_step2, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtqemud_init_logs, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}) Nov 23 03:01:50 localhost podman[58098]: 2025-11-23 08:01:50.692238671 +0000 UTC m=+0.087209448 container create c7609bc1da05a58545ecbabae25ed3898d85d9916687da443c18918bdfb14f10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute_init_log, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:01:50 localhost podman[58081]: 2025-11-23 08:01:50.595248756 +0000 UTC m=+0.041302937 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:01:50 localhost systemd[1]: Started libpod-conmon-c090aa1ac71eb0212ef81bce8f76085c31b7a60540fff82759a6a9469ab08975.scope. Nov 23 03:01:50 localhost systemd[1]: Started libcrun container. Nov 23 03:01:50 localhost systemd[1]: Started libpod-conmon-c7609bc1da05a58545ecbabae25ed3898d85d9916687da443c18918bdfb14f10.scope. Nov 23 03:01:50 localhost podman[58098]: 2025-11-23 08:01:50.640361022 +0000 UTC m=+0.035331819 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 23 03:01:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff58612c818b92d281d7c503b4ab2e3ed913b3ad617558ac5d656b0c174fca90/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Nov 23 03:01:50 localhost systemd[1]: Started libcrun container. 
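Editor's note: tripleo_container_manage at 03:01:50 is invoked with config_id=tripleo_step2, config_dir=/var/lib/tripleo-config/container-startup-config/step_2 and config_patterns=*.json, and the container-create events that follow carry the resulting config_data (command, image, net, volumes, ...) as labels. A sketch for listing what a step directory asks for, under my assumption that each JSON file there maps container names to definitions shaped like the config_data shown in these labels:

```python
import json
from pathlib import Path

STEP_DIR = Path("/var/lib/tripleo-config/container-startup-config/step_2")

def list_step_containers(step_dir: Path) -> None:
    """Print container name, image and start_order from each *.json startup config.

    Assumes each file is a JSON object of the form {container_name: config_data}.
    """
    for cfg_file in sorted(step_dir.glob("*.json")):
        data = json.loads(cfg_file.read_text())
        for name, cfg in data.items():
            print(f"{cfg_file.name}: {name} image={cfg.get('image')} "
                  f"start_order={cfg.get('start_order', 0)}")

if __name__ == "__main__":
    list_step_containers(STEP_DIR)
```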
Nov 23 03:01:50 localhost podman[58081]: 2025-11-23 08:01:50.752054708 +0000 UTC m=+0.198108819 container init c090aa1ac71eb0212ef81bce8f76085c31b7a60540fff82759a6a9469ab08975 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, architecture=x86_64, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Nov 23 03:01:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc7298092e843edd1007408afda0c49e162b58c616e74027b863211eb586108a/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:01:50 localhost podman[58081]: 2025-11-23 08:01:50.764765918 +0000 UTC m=+0.210820019 container start c090aa1ac71eb0212ef81bce8f76085c31b7a60540fff82759a6a9469ab08975 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=nova_virtqemud_init_logs, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, release=1761123044, config_id=tripleo_step2, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, version=17.1.12) Nov 23 03:01:50 localhost python3[58011]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm Nov 23 03:01:50 localhost systemd[1]: libpod-c090aa1ac71eb0212ef81bce8f76085c31b7a60540fff82759a6a9469ab08975.scope: Deactivated successfully. 
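Editor's note: the PODMAN-CONTAINER-DEBUG entry above records the exact podman run command generated for nova_virtqemud_init_logs; the config_data label is translated more or less one-for-one into flags (--env, --log-driver k8s-file, --log-opt path=..., --network none, --privileged, --security-opt, --user, --volume) followed by the image and its command. A condensed sketch of that translation for this one container; it covers only the keys visible in the debug line, and the --label flags carrying config_id/config_data are omitted for brevity:

```python
# config_data as shown in the container-create and debug entries above.
config = {
    "command": ["/bin/bash", "-c", "chown -R tss:tss /var/log/swtpm"],
    "environment": {"TRIPLEO_DEPLOY_IDENTIFIER": "1763883761"},
    "image": "registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1",
    "net": "none",
    "privileged": True,
    "security_opt": ["label=level:s0", "label=type:spc_t", "label=filetype:container_file_t"],
    "user": "root",
    "volumes": ["/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z"],
}

def podman_run_args(name: str, cfg: dict) -> list[str]:
    """Build a podman run argv mirroring the flags seen in the debug line above."""
    args = ["podman", "run", "--name", name, "--detach=True",
            "--conmon-pidfile", f"/run/{name}.pid",
            "--log-driver", "k8s-file",
            "--log-opt", f"path=/var/log/containers/stdouts/{name}.log"]
    for key, value in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    args += ["--network", cfg.get("net", "none"),
             f"--privileged={cfg.get('privileged', False)}",
             "--user", cfg.get("user", "root")]
    for opt in cfg.get("security_opt", []):
        args += ["--security-opt", opt]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    return args + [cfg["image"], *cfg["command"]]

print(" ".join(podman_run_args("nova_virtqemud_init_logs", config)))
```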
Nov 23 03:01:50 localhost podman[58098]: 2025-11-23 08:01:50.817752221 +0000 UTC m=+0.212722988 container init c7609bc1da05a58545ecbabae25ed3898d85d9916687da443c18918bdfb14f10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, container_name=nova_compute_init_log, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=) Nov 23 03:01:50 localhost podman[58098]: 2025-11-23 08:01:50.828008122 +0000 UTC m=+0.222978909 container start c7609bc1da05a58545ecbabae25ed3898d85d9916687da443c18918bdfb14f10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, container_name=nova_compute_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.buildah.version=1.41.4, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044) Nov 23 03:01:50 localhost python3[58011]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova Nov 23 03:01:50 localhost systemd[1]: libpod-c7609bc1da05a58545ecbabae25ed3898d85d9916687da443c18918bdfb14f10.scope: Deactivated successfully. Nov 23 03:01:50 localhost podman[58126]: 2025-11-23 08:01:50.868750352 +0000 UTC m=+0.076194893 container died c090aa1ac71eb0212ef81bce8f76085c31b7a60540fff82759a6a9469ab08975 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step2) Nov 23 03:01:50 localhost podman[58148]: 2025-11-23 08:01:50.898498645 +0000 UTC m=+0.049445513 container died c7609bc1da05a58545ecbabae25ed3898d85d9916687da443c18918bdfb14f10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, maintainer=OpenStack TripleO Team, 
build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step2, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Nov 23 03:01:50 localhost podman[58126]: 2025-11-23 08:01:50.954928597 +0000 UTC m=+0.162373098 container cleanup c090aa1ac71eb0212ef81bce8f76085c31b7a60540fff82759a6a9469ab08975 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, container_name=nova_virtqemud_init_logs, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, io.buildah.version=1.41.4, release=1761123044) Nov 23 03:01:50 localhost systemd[1]: 
libpod-conmon-c090aa1ac71eb0212ef81bce8f76085c31b7a60540fff82759a6a9469ab08975.scope: Deactivated successfully. Nov 23 03:01:51 localhost podman[58149]: 2025-11-23 08:01:51.068894743 +0000 UTC m=+0.219095528 container cleanup c7609bc1da05a58545ecbabae25ed3898d85d9916687da443c18918bdfb14f10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, container_name=nova_compute_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Nov 23 03:01:51 localhost systemd[1]: libpod-conmon-c7609bc1da05a58545ecbabae25ed3898d85d9916687da443c18918bdfb14f10.scope: Deactivated successfully. 
Nov 23 03:01:51 localhost podman[58278]: 2025-11-23 08:01:51.332813067 +0000 UTC m=+0.079036811 container create fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, config_id=tripleo_step2, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}) Nov 23 03:01:51 localhost podman[58284]: 2025-11-23 08:01:51.369551951 +0000 UTC m=+0.099679190 container create f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step2, 
release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=create_haproxy_wrapper, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:01:51 localhost systemd[1]: Started libpod-conmon-fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223.scope. Nov 23 03:01:51 localhost podman[58278]: 2025-11-23 08:01:51.283622494 +0000 UTC m=+0.029846298 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:01:51 localhost systemd[1]: Started libcrun container. Nov 23 03:01:51 localhost systemd[1]: Started libpod-conmon-f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0.scope. 
Nov 23 03:01:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbc0697cef0e6bef85d14d9d3d366e75bfed63e0c2589a8a8c9c892c41f35ba/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Nov 23 03:01:51 localhost podman[58278]: 2025-11-23 08:01:51.4045812 +0000 UTC m=+0.150804944 container init fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, architecture=x86_64, container_name=create_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step2, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Nov 23 03:01:51 localhost systemd[1]: Started libcrun container. 
Nov 23 03:01:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4111208bb4df9dc3ebb7e3559f6e6ef9e3a799913516b11d0a24f531e03ec5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 03:01:51 localhost podman[58278]: 2025-11-23 08:01:51.415520574 +0000 UTC m=+0.161744308 container start fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, architecture=x86_64, version=17.1.12, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=create_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step2) Nov 23 03:01:51 localhost podman[58278]: 2025-11-23 08:01:51.416128573 +0000 UTC m=+0.162352317 container attach fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, summary=Red Hat 
OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, container_name=create_virtlogd_wrapper, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:01:51 localhost podman[58284]: 2025-11-23 08:01:51.321033768 +0000 UTC m=+0.051161017 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Nov 23 03:01:51 localhost podman[58284]: 2025-11-23 08:01:51.421431509 +0000 UTC m=+0.151558748 container init f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=create_haproxy_wrapper, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:01:51 localhost podman[58284]: 2025-11-23 08:01:51.426603412 +0000 UTC m=+0.156730661 container start f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=create_haproxy_wrapper, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 23 03:01:51 localhost podman[58284]: 2025-11-23 08:01:51.426845849 +0000 UTC m=+0.156973088 container attach f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, container_name=create_haproxy_wrapper, release=1761123044) Nov 23 03:01:51 localhost systemd[1]: var-lib-containers-storage-overlay-bc7298092e843edd1007408afda0c49e162b58c616e74027b863211eb586108a-merged.mount: Deactivated successfully. Nov 23 03:01:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c7609bc1da05a58545ecbabae25ed3898d85d9916687da443c18918bdfb14f10-userdata-shm.mount: Deactivated successfully. 
Nov 23 03:01:51 localhost systemd[1]: var-lib-containers-storage-overlay-ff58612c818b92d281d7c503b4ab2e3ed913b3ad617558ac5d656b0c174fca90-merged.mount: Deactivated successfully. Nov 23 03:01:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c090aa1ac71eb0212ef81bce8f76085c31b7a60540fff82759a6a9469ab08975-userdata-shm.mount: Deactivated successfully. Nov 23 03:01:52 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.7 scrub starts Nov 23 03:01:52 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.7 scrub ok Nov 23 03:01:53 localhost ovs-vsctl[58385]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Nov 23 03:01:53 localhost systemd[1]: libpod-fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223.scope: Deactivated successfully. Nov 23 03:01:53 localhost systemd[1]: libpod-fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223.scope: Consumed 2.119s CPU time. Nov 23 03:01:53 localhost podman[58278]: 2025-11-23 08:01:53.566647555 +0000 UTC m=+2.312871299 container died fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-type=git, config_id=tripleo_step2, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, container_name=create_virtlogd_wrapper, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:01:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223-userdata-shm.mount: Deactivated successfully. Nov 23 03:01:53 localhost podman[58529]: 2025-11-23 08:01:53.685939528 +0000 UTC m=+0.104807000 container cleanup fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=create_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 23 03:01:53 localhost systemd[1]: libpod-conmon-fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223.scope: Deactivated successfully. 
Nov 23 03:01:53 localhost python3[58011]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Nov 23 03:01:53 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.5 scrub starts Nov 23 03:01:53 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.5 scrub ok Nov 23 03:01:54 localhost systemd[1]: libpod-f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0.scope: Deactivated successfully. Nov 23 03:01:54 localhost systemd[1]: libpod-f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0.scope: Consumed 2.193s CPU time. 
Nov 23 03:01:54 localhost podman[58284]: 2025-11-23 08:01:54.60318327 +0000 UTC m=+3.333310479 container died f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step2, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, version=17.1.12) Nov 23 03:01:54 localhost systemd[1]: var-lib-containers-storage-overlay-4dbc0697cef0e6bef85d14d9d3d366e75bfed63e0c2589a8a8c9c892c41f35ba-merged.mount: Deactivated successfully. Nov 23 03:01:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0-userdata-shm.mount: Deactivated successfully. 
Nov 23 03:01:54 localhost podman[58571]: 2025-11-23 08:01:54.699511333 +0000 UTC m=+0.083563654 container cleanup f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=create_haproxy_wrapper, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4) Nov 23 03:01:54 localhost systemd[1]: libpod-conmon-f213fea3dad5b4aa7b1edad3dcd77de6c6b5649cec47d6f832cbda02354c99c0.scope: Deactivated successfully. 
Nov 23 03:01:54 localhost python3[58011]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers Nov 23 03:01:54 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.16 scrub starts Nov 23 03:01:54 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.16 scrub ok Nov 23 03:01:55 localhost python3[58621]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:55 localhost systemd[1]: var-lib-containers-storage-overlay-9c4111208bb4df9dc3ebb7e3559f6e6ef9e3a799913516b11d0a24f531e03ec5-merged.mount: Deactivated successfully. Nov 23 03:01:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:01:56 localhost systemd[1]: tmp-crun.zGDlbI.mount: Deactivated successfully. 
Nov 23 03:01:56 localhost podman[58713]: 2025-11-23 08:01:56.302737986 +0000 UTC m=+0.093456915 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:01:56 localhost podman[58713]: 2025-11-23 08:01:56.527325776 +0000 UTC m=+0.318044715 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com) Nov 23 03:01:56 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:01:56 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.9 scrub starts Nov 23 03:01:56 localhost python3[58770]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005532584 step=2 update_config_hash_only=False Nov 23 03:01:56 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.9 scrub ok Nov 23 03:01:57 localhost python3[58786]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:01:57 localhost python3[58802]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True Nov 23 03:01:59 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.15 scrub starts Nov 23 03:01:59 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 5.15 scrub ok Nov 23 03:02:00 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts Nov 23 03:02:00 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok Nov 23 03:02:01 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 7.1 scrub starts Nov 23 03:02:01 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 7.1 scrub ok Nov 23 03:02:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:02:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4898 writes, 22K keys, 4898 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4898 writes, 547 syncs, 8.95 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1493 writes, 5392 keys, 1493 commit groups, 1.0 writes per commit group, ingest: 2.36 MB, 0.00 MB/s#012Interval WAL: 1493 writes, 341 syncs, 4.38 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f46733610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f46733610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me Nov 23 03:02:02 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.1c scrub starts Nov 23 03:02:02 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.1c scrub ok Nov 23 03:02:04 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.17 scrub starts Nov 23 03:02:04 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.17 scrub ok Nov 23 03:02:05 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.d scrub starts Nov 23 03:02:05 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.d scrub ok Nov 23 03:02:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:02:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4633 writes, 21K keys, 4633 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4633 writes, 444 syncs, 10.43 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1386 writes, 5027 keys, 1386 commit groups, 1.0 writes per commit group, ingest: 2.27 MB, 0.00 MB/s#012Interval WAL: 1386 writes, 305 syncs, 4.54 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56035b7962d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, 
interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56035b7962d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m Nov 23 03:02:06 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.11 scrub starts Nov 23 03:02:06 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.11 scrub ok Nov 23 03:02:07 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.16 scrub starts Nov 23 03:02:07 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 2.16 scrub ok Nov 23 03:02:08 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.10 scrub starts Nov 23 03:02:08 localhost ceph-osd[32534]: log_channel(cluster) log [DBG] : 3.10 scrub ok Nov 23 03:02:08 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 6.8 deep-scrub 
starts Nov 23 03:02:08 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok Nov 23 03:02:09 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.d scrub starts Nov 23 03:02:09 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.d scrub ok Nov 23 03:02:11 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.e scrub starts Nov 23 03:02:11 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.e scrub ok Nov 23 03:02:12 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.1b scrub starts Nov 23 03:02:12 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.1b scrub ok Nov 23 03:02:15 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts Nov 23 03:02:15 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok Nov 23 03:02:16 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 7.7 scrub starts Nov 23 03:02:16 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 7.7 scrub ok Nov 23 03:02:19 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 7.c scrub starts Nov 23 03:02:19 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 7.c scrub ok Nov 23 03:02:20 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 7.d deep-scrub starts Nov 23 03:02:20 localhost ceph-osd[31569]: log_channel(cluster) log [DBG] : 7.d deep-scrub ok Nov 23 03:02:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:02:26 localhost podman[58803]: 2025-11-23 08:02:26.909181436 +0000 UTC m=+0.090896925 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:02:27 localhost podman[58803]: 2025-11-23 08:02:27.104244289 +0000 UTC m=+0.285959718 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, release=1761123044, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-type=git) Nov 23 03:02:27 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:02:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:02:57 localhost podman[58962]: 2025-11-23 08:02:57.902953917 +0000 UTC m=+0.087439987 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:02:58 localhost podman[58962]: 2025-11-23 08:02:58.126508733 +0000 UTC m=+0.310994783 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Nov 23 03:02:58 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:03:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:03:28 localhost systemd[1]: tmp-crun.dhd7Zb.mount: Deactivated successfully. 
Nov 23 03:03:28 localhost podman[58992]: 2025-11-23 08:03:28.878670983 +0000 UTC m=+0.069321293 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, vendor=Red Hat, Inc., config_id=tripleo_step1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, release=1761123044) Nov 23 03:03:29 localhost podman[58992]: 2025-11-23 08:03:29.07828868 +0000 UTC m=+0.268938980 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z) Nov 23 03:03:29 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:03:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:03:59 localhost podman[59098]: 2025-11-23 08:03:59.892194864 +0000 UTC m=+0.079693654 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:04:00 localhost podman[59098]: 2025-11-23 08:04:00.076552349 +0000 UTC m=+0.264051099 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, release=1761123044, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container) Nov 23 03:04:00 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:04:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:04:30 localhost podman[59128]: 2025-11-23 08:04:30.913920043 +0000 UTC m=+0.102064594 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=) Nov 23 03:04:31 localhost podman[59128]: 2025-11-23 08:04:31.11323572 +0000 UTC m=+0.301380231 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, vcs-type=git, 
config_id=tripleo_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:04:31 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:04:36 localhost systemd[1]: tmp-crun.zosnIp.mount: Deactivated successfully. 
Nov 23 03:04:36 localhost podman[59255]: 2025-11-23 08:04:36.139281485 +0000 UTC m=+0.107686178 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vcs-type=git, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, version=7, distribution-scope=public, description=Red Hat Ceph Storage 7) Nov 23 03:04:36 localhost podman[59255]: 2025-11-23 08:04:36.241347018 +0000 UTC m=+0.209751641 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, release=553, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 03:05:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:05:01 localhost systemd[1]: tmp-crun.3DuPe8.mount: Deactivated successfully. 
Nov 23 03:05:01 localhost podman[59402]: 2025-11-23 08:05:01.910437662 +0000 UTC m=+0.087812544 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:05:02 localhost podman[59402]: 2025-11-23 08:05:02.108246043 +0000 UTC m=+0.285620905 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com) Nov 23 03:05:02 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:05:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:05:32 localhost systemd[1]: tmp-crun.V9CGk0.mount: Deactivated successfully. 
Nov 23 03:05:32 localhost podman[59430]: 2025-11-23 08:05:32.907276766 +0000 UTC m=+0.091565606 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, tcib_managed=true, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:05:33 localhost podman[59430]: 2025-11-23 08:05:33.128526444 +0000 UTC m=+0.312815234 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:05:33 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:06:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:06:03 localhost podman[59536]: 2025-11-23 08:06:03.900474059 +0000 UTC m=+0.085634961 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, architecture=x86_64, tcib_managed=true) Nov 23 03:06:04 localhost podman[59536]: 2025-11-23 08:06:04.145145903 +0000 UTC m=+0.330306845 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc.) Nov 23 03:06:04 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:06:18 localhost sshd[59565]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:06:29 localhost python3[59614]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:06:29 localhost python3[59659]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885189.0084524-98996-175447537788812/source _original_basename=tmp5wy35jky follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:06:30 localhost python3[59689]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:06:32 localhost ansible-async_wrapper.py[59861]: Invoked with 29389260440 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885191.6939917-99299-95863264611241/AnsiballZ_command.py _ Nov 23 03:06:32 localhost ansible-async_wrapper.py[59864]: Starting module and watcher Nov 23 03:06:32 localhost ansible-async_wrapper.py[59864]: Start watching 59865 (3600) Nov 23 03:06:32 localhost ansible-async_wrapper.py[59865]: Start module (59865) Nov 23 03:06:32 localhost ansible-async_wrapper.py[59861]: Return async_wrapper task started. 
Nov 23 03:06:32 localhost python3[59885]: ansible-ansible.legacy.async_status Invoked with jid=29389260440.59861 mode=status _async_dir=/tmp/.ansible_async Nov 23 03:06:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:06:34 localhost podman[59936]: 2025-11-23 08:06:34.896673835 +0000 UTC m=+0.083069512 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Nov 23 03:06:35 localhost podman[59936]: 2025-11-23 08:06:35.118506942 +0000 UTC m=+0.304902599 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=metrics_qdr, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:06:35 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:06:35 localhost puppet-user[59883]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 03:06:35 localhost puppet-user[59883]: (file: /etc/puppet/hiera.yaml) Nov 23 03:06:35 localhost puppet-user[59883]: Warning: Undefined variable '::deploy_config_name'; Nov 23 03:06:35 localhost puppet-user[59883]: (file & line not available) Nov 23 03:06:35 localhost puppet-user[59883]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 03:06:35 localhost puppet-user[59883]: (file & line not available) Nov 23 03:06:35 localhost puppet-user[59883]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Nov 23 03:06:36 localhost puppet-user[59883]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Nov 23 03:06:36 localhost puppet-user[59883]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.12 seconds Nov 23 03:06:36 localhost puppet-user[59883]: Notice: Applied catalog in 0.04 seconds Nov 23 03:06:36 localhost puppet-user[59883]: Application: Nov 23 03:06:36 localhost puppet-user[59883]: Initial environment: production Nov 23 03:06:36 localhost puppet-user[59883]: Converged environment: production Nov 23 03:06:36 localhost puppet-user[59883]: Run mode: user Nov 23 03:06:36 localhost puppet-user[59883]: Changes: Nov 23 03:06:36 localhost puppet-user[59883]: Events: Nov 23 03:06:36 localhost puppet-user[59883]: Resources: Nov 23 03:06:36 localhost puppet-user[59883]: Total: 10 Nov 23 03:06:36 localhost puppet-user[59883]: Time: Nov 23 03:06:36 localhost puppet-user[59883]: Schedule: 0.00 Nov 23 03:06:36 localhost puppet-user[59883]: File: 0.00 Nov 23 03:06:36 localhost puppet-user[59883]: Exec: 0.01 Nov 23 03:06:36 localhost puppet-user[59883]: Augeas: 0.01 Nov 23 03:06:36 localhost puppet-user[59883]: Transaction evaluation: 0.03 Nov 23 03:06:36 localhost puppet-user[59883]: Catalog application: 0.04 Nov 23 03:06:36 localhost puppet-user[59883]: Config retrieval: 0.16 Nov 23 03:06:36 localhost puppet-user[59883]: Last run: 1763885196 Nov 23 03:06:36 localhost puppet-user[59883]: Filebucket: 0.00 Nov 23 03:06:36 localhost puppet-user[59883]: Total: 0.05 Nov 23 03:06:36 localhost puppet-user[59883]: Version: Nov 23 03:06:36 localhost puppet-user[59883]: Config: 1763885195 Nov 23 03:06:36 localhost puppet-user[59883]: Puppet: 7.10.0 Nov 23 03:06:36 localhost ansible-async_wrapper.py[59865]: Module complete (59865) Nov 23 03:06:37 localhost ansible-async_wrapper.py[59864]: Done in kid B. 
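The puppet-user block above is an ordinary puppet apply run summary (catalog compiled in 0.12 s, applied in 0.04 s, 10 resources, Puppet 7.10.0) flattened into journal lines. If those figures are needed programmatically, a small pattern match over the journal text is enough; this sketch assumes the lines arrive on stdin in exactly the form shown here, and the METRIC regex and puppet_metrics() helper are mine:

    import re
    import sys

    # Matches e.g. "puppet-user[59883]: Config retrieval: 0.16"
    METRIC = re.compile(r"puppet-user\[\d+\]:\s+([A-Za-z ]+):\s+([0-9.]+)\s*$")

    def puppet_metrics(lines):
        """Collect the numeric summary fields from a puppet-user journal block."""
        out = {}
        for line in lines:
            m = METRIC.search(line)
            if m:
                try:
                    out[m.group(1).strip()] = float(m.group(2))
                except ValueError:
                    continue   # skip non-numeric values such as the Puppet version string
        return out

    if __name__ == "__main__":
        print(puppet_metrics(sys.stdin))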
Nov 23 03:06:42 localhost python3[60119]: ansible-ansible.legacy.async_status Invoked with jid=29389260440.59861 mode=status _async_dir=/tmp/.ansible_async Nov 23 03:06:43 localhost python3[60135]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 03:06:43 localhost python3[60151]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:06:44 localhost python3[60201]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:06:44 localhost python3[60219]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpkc8t57mh recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 03:06:45 localhost python3[60249]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:06:46 localhost python3[60353]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Nov 23 03:06:47 localhost python3[60372]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:06:48 localhost python3[60404]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:06:49 localhost python3[60454]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:06:49 
localhost python3[60472]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:06:49 localhost python3[60534]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:06:50 localhost python3[60552]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:06:50 localhost python3[60614]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:06:51 localhost python3[60632]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:06:51 localhost python3[60694]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:06:51 localhost python3[60712]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:06:52 localhost python3[60742]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:06:52 localhost systemd[1]: Reloading. Nov 23 03:06:52 localhost systemd-sysv-generator[60768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
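The ansible-systemd task above (name=tripleo-container-shutdown, enabled=True, state=started, daemon_reload=True) is what triggers the "systemd[1]: Reloading." that follows. On the host this boils down to a daemon-reload plus enabling and starting the unit; a rough, illustrative equivalent with plain systemctl calls (this is not the module's actual implementation):

    import subprocess

    def enable_and_start(unit: str) -> None:
        """Reload unit files, then enable and start the unit, mirroring
        ansible-systemd with daemon_reload=True, enabled=True, state=started."""
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", "--now", unit], check=True)

    if __name__ == "__main__":
        enable_and_start("tripleo-container-shutdown.service")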
Nov 23 03:06:52 localhost systemd-rc-local-generator[60765]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:06:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:06:53 localhost python3[60828]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:06:53 localhost python3[60846]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:06:54 localhost python3[60908]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:06:54 localhost python3[60926]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:06:55 localhost python3[60956]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:06:55 localhost systemd[1]: Reloading. Nov 23 03:06:55 localhost systemd-rc-local-generator[60982]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:06:55 localhost systemd-sysv-generator[60988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:06:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:06:55 localhost systemd[1]: Starting Create netns directory... Nov 23 03:06:55 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 23 03:06:55 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 23 03:06:55 localhost systemd[1]: Finished Create netns directory. 
Nov 23 03:06:55 localhost python3[61014]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 23 03:06:57 localhost python3[61072]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Nov 23 03:06:58 localhost podman[61232]: 2025-11-23 08:06:58.112284589 +0000 UTC m=+0.072234947 container create 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Nov 23 03:06:58 localhost podman[61235]: 2025-11-23 08:06:58.149906033 +0000 UTC m=+0.097087095 container create a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd 
(image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.41.4, container_name=rsyslog, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z) Nov 23 03:06:58 localhost systemd[1]: Started libpod-conmon-82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.scope. 
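The tripleo_container_manage invocation earlier (config_id=tripleo_step3, config_dir=/var/lib/tripleo-config/container-startup-config/step_3, config_patterns=*.json, concurrency=5) reads one JSON definition per container and turns each into the podman create/start events that follow for collectd, rsyslog, ceilometer_init_log and the nova containers. A short sketch that merely lists those definitions; the assumption that each JSON file maps a container name to a config_data-style dict (as echoed in the container labels) is mine and may not match the exact on-disk layout:

    import glob
    import json

    CONFIG_DIR = "/var/lib/tripleo-config/container-startup-config/step_3"

    def load_step_configs(config_dir=CONFIG_DIR):
        """Return {container_name: config_data} gathered from the step's JSON files."""
        configs = {}
        for path in sorted(glob.glob(f"{config_dir}/*.json")):
            with open(path) as fh:
                configs.update(json.load(fh))     # assumption: each file is {name: {...}}
        return configs

    if __name__ == "__main__":
        for name, cfg in load_step_configs().items():
            print(name, cfg.get("image"), "start_order:", cfg.get("start_order", 0))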
Nov 23 03:06:58 localhost podman[61264]: 2025-11-23 08:06:58.171300726 +0000 UTC m=+0.093415222 container create 376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, container_name=nova_virtlogd_wrapper, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:06:58 localhost podman[61232]: 2025-11-23 08:06:58.0728897 +0000 UTC m=+0.032840108 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Nov 23 03:06:58 localhost podman[61231]: 2025-11-23 08:06:58.074755538 
+0000 UTC m=+0.037204294 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Nov 23 03:06:58 localhost systemd[1]: Started libcrun container. Nov 23 03:06:58 localhost podman[61231]: 2025-11-23 08:06:58.183021898 +0000 UTC m=+0.145470584 container create 55a55e21159b21fe881adb60ea43a39f14be9792343d7ca41434253a7cefba02 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, config_id=tripleo_step3, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_init_log, build-date=2025-11-19T00:12:45Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Nov 23 03:06:58 localhost systemd[1]: Started libpod-conmon-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd.scope. Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8562d6b9d1c21090ac473dd301ec4770b1bd3ad140de381119a1325c7a15d2fa/merged/scripts supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8562d6b9d1c21090ac473dd301ec4770b1bd3ad140de381119a1325c7a15d2fa/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost systemd[1]: Started libpod-conmon-376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926.scope. Nov 23 03:06:58 localhost systemd[1]: Started libcrun container. Nov 23 03:06:58 localhost systemd[1]: Started libcrun container. Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost systemd[1]: Started libpod-conmon-55a55e21159b21fe881adb60ea43a39f14be9792343d7ca41434253a7cefba02.scope. 
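The repeated kernel notices that each remounted xfs overlay "supports timestamps until 2038 (0x7fffffff)" refer to the 32-bit time_t ceiling: 0x7fffffff seconds after the Unix epoch is 2038-01-19 03:14:07 UTC, which a two-line check confirms:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit second count after the Unix epoch.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))   # 2038-01-19 03:14:07+00:00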
Nov 23 03:06:58 localhost podman[61235]: 2025-11-23 08:06:58.103425095 +0000 UTC m=+0.050606217 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Nov 23 03:06:58 localhost podman[61264]: 2025-11-23 08:06:58.107701917 +0000 UTC m=+0.029816453 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7788a714f783bcd626d46acd079248804e4957a18f46d9597110fed4f235e9/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7788a714f783bcd626d46acd079248804e4957a18f46d9597110fed4f235e9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7788a714f783bcd626d46acd079248804e4957a18f46d9597110fed4f235e9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7788a714f783bcd626d46acd079248804e4957a18f46d9597110fed4f235e9/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7788a714f783bcd626d46acd079248804e4957a18f46d9597110fed4f235e9/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7788a714f783bcd626d46acd079248804e4957a18f46d9597110fed4f235e9/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7788a714f783bcd626d46acd079248804e4957a18f46d9597110fed4f235e9/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost podman[61235]: 2025-11-23 08:06:58.214028788 +0000 UTC m=+0.161209890 container init a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=rsyslog, release=1761123044, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team) Nov 23 03:06:58 localhost systemd[1]: Started libcrun container. Nov 23 03:06:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:06:58 localhost podman[61232]: 2025-11-23 08:06:58.221680745 +0000 UTC m=+0.181631123 container init 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a17eac87cb600b8517e7565ff433f093958b0f7f7912bbcd2309c83781f31a6/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost podman[61235]: 2025-11-23 08:06:58.223943145 +0000 UTC m=+0.171124247 container start a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=rsyslog, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=) Nov 23 03:06:58 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env 
TRIPLEO_CONFIG_HASH=531adc347d750bec89c43b39996bf2b8 --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Nov 23 03:06:58 localhost podman[61231]: 2025-11-23 08:06:58.231805799 +0000 UTC m=+0.194254485 container init 55a55e21159b21fe881adb60ea43a39f14be9792343d7ca41434253a7cefba02 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12) Nov 23 03:06:58 localhost podman[61231]: 2025-11-23 08:06:58.23930531 +0000 UTC m=+0.201753996 container start 55a55e21159b21fe881adb60ea43a39f14be9792343d7ca41434253a7cefba02 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, release=1761123044, container_name=ceilometer_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Nov 23 03:06:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:06:58 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
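Each "Started /usr/bin/podman healthcheck run <id>" line is a transient systemd unit firing the healthcheck configured in config_data ('healthcheck': {'test': '/openstack/healthcheck'}); podman executes that test inside the container and records health_status, as in the metrics_qdr entries earlier. The same check can be triggered by hand; this sketch assumes the usual behaviour that podman healthcheck run exits 0 for a healthy container, and the container name is only an example:

    import subprocess

    def run_healthcheck(container: str) -> bool:
        """Invoke the container's configured healthcheck; True means healthy."""
        proc = subprocess.run(["podman", "healthcheck", "run", container],
                              capture_output=True, text=True)
        return proc.returncode == 0

    if __name__ == "__main__":
        print("collectd healthy:", run_healthcheck("collectd"))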
Nov 23 03:06:58 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Nov 23 03:06:58 localhost systemd[1]: libpod-55a55e21159b21fe881adb60ea43a39f14be9792343d7ca41434253a7cefba02.scope: Deactivated successfully. Nov 23 03:06:58 localhost podman[61232]: 2025-11-23 08:06:58.257216645 +0000 UTC m=+0.217167053 container start 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, tcib_managed=true, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044) Nov 23 03:06:58 localhost systemd[1]: Created slice User Slice of UID 0. Nov 23 03:06:58 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Nov 23 03:06:58 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=da9a0dc7b40588672419e3ce10063e21 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Nov 23 03:06:58 localhost systemd[1]: Finished User Runtime Directory /run/user/0. 
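The PODMAN-CONTAINER-DEBUG lines show how tripleo_container_manage expands a config_data dict into a podman run command: net becomes --network, environment entries become --env, volumes become --volume, and memory, pid, user, cap_add, security_opt and the healthcheck test map to their matching flags, with the dict itself attached as a label. A compact sketch of that mapping, limited to the keys visible in this log (labels, conmon pidfile and log options omitted); podman_args() and the trimmed example dict are illustrative:

    def podman_args(name, cfg):
        """Translate a TripleO config_data dict into podman-run arguments
        (only the keys that appear in this log are handled)."""
        args = ["podman", "run", "--name", name, "--detach=True"]
        for cap in cfg.get("cap_add", []):
            args += ["--cap-add", cap]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if "memory" in cfg:
            args += ["--memory", cfg["memory"]]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if "pid" in cfg:
            args += ["--pid", cfg["pid"]]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        for opt in cfg.get("security_opt", []):
            args += ["--security-opt", opt]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args

    # Trimmed-down collectd config taken from the log above:
    example = {
        "cap_add": ["IPC_LOCK"],
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "healthcheck": {"test": "/openstack/healthcheck"},
        "image": "registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1",
        "memory": "512m",
        "net": "host",
        "pid": "host",
        "user": "root",
        "volumes": ["/etc/hosts:/etc/hosts:ro"],
    }
    print(" ".join(podman_args("collectd", example)))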
Nov 23 03:06:58 localhost podman[61264]: 2025-11-23 08:06:58.275047777 +0000 UTC m=+0.197162273 container init 376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, version=17.1.12, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=nova_virtlogd_wrapper, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64) Nov 23 03:06:58 localhost systemd[1]: Starting User Manager for UID 0... Nov 23 03:06:58 localhost systemd[1]: libpod-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd.scope: Deactivated successfully. 
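Every container in this step is started with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS and two characteristic mounts: a per-service JSON bound to /var/lib/kolla/config_files/config.json and the puppet-generated tree bound to /var/lib/kolla/config_files/src. On each start the Kolla entrypoint copies the declared files from src into place before launching the service command. A simplified sketch of that copy step, assuming the conventional config.json layout with a config_files list of source/dest entries; owner, permission and glob handling are omitted:

    import json
    import shutil
    from pathlib import Path

    CONFIG_JSON = Path("/var/lib/kolla/config_files/config.json")

    def copy_config_files(config_json=CONFIG_JSON):
        """Copy each declared config file from its source into its destination,
        mirroring what COPY_ALWAYS does on every container start."""
        spec = json.loads(config_json.read_text())
        for entry in spec.get("config_files", []):
            src, dest = Path(entry["source"]), Path(entry["dest"])
            dest.parent.mkdir(parents=True, exist_ok=True)
            if src.is_dir():
                shutil.copytree(src, dest, dirs_exist_ok=True)
            else:
                shutil.copy2(src, dest)
        return spec.get("command")

    if __name__ == "__main__":
        print("service command:", copy_config_files())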
Nov 23 03:06:58 localhost podman[61260]: 2025-11-23 08:06:58.322541497 +0000 UTC m=+0.241278989 container create 30ce9086fc1f4afee8749ae9e7b8f3e870c78b0b960fd18b27d66accff5b5d79 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_statedir_owner, io.buildah.version=1.41.4, io.openshift.expose-services=) Nov 23 03:06:58 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
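nova_statedir_owner is a one-shot container (net=none, detach=False) that runs /container-config-scripts/nova_statedir_ownership.py to normalize ownership under /var/lib/nova before the nova and libvirt containers come up, skipping paths matched by NOVA_STATEDIR_OWNERSHIP_SKIP (here triliovault-mounts). The real script ships with tripleo-ansible; the sketch below is a purely illustrative reduction of the idea, and the uid/gid values and the colon-separated skip list are placeholders, not taken from the log:

    import os

    STATEDIR = "/var/lib/nova"
    SKIP = os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "triliovault-mounts").split(":")

    def fix_ownership(root=STATEDIR, uid=42436, gid=42436):
        """Recursively chown the nova state directory, skipping excluded paths.
        The uid/gid here are placeholders; the real script resolves the nova user."""
        for dirpath, dirnames, filenames in os.walk(root):
            # prune skipped subtrees in place so os.walk does not descend into them
            dirnames[:] = [d for d in dirnames
                           if not any(s in os.path.join(dirpath, d) for s in SKIP)]
            for entry in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                try:
                    os.chown(entry, uid, gid)
                except OSError:
                    pass   # ignore files that vanish or cannot be changed

    if __name__ == "__main__":
        fix_ownership()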
Nov 23 03:06:58 localhost podman[61264]: 2025-11-23 08:06:58.344869598 +0000 UTC m=+0.266984094 container start 376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, release=1761123044, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=) Nov 23 03:06:58 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env 
KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b43218eec4380850a20e0a337fdcf6cf --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume 
/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:06:58 localhost podman[61337]: 2025-11-23 08:06:58.360237284 +0000 UTC m=+0.080764081 container died 55a55e21159b21fe881adb60ea43a39f14be9792343d7ca41434253a7cefba02 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, build-date=2025-11-19T00:12:45Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=ceilometer_init_log, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Nov 23 03:06:58 localhost podman[61260]: 2025-11-23 08:06:58.26740148 +0000 UTC m=+0.186138982 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 23 03:06:58 localhost systemd[1]: Started libpod-conmon-30ce9086fc1f4afee8749ae9e7b8f3e870c78b0b960fd18b27d66accff5b5d79.scope. Nov 23 03:06:58 localhost systemd[1]: Started libcrun container. 
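Editor note: the PODMAN-CONTAINER-DEBUG record above shows the podman run command that tripleo_ansible derives from the config_data recorded on nova_virtlogd_wrapper. A minimal Python sketch of that mapping, reconstructed only from the fields visible in these log records (it is not the actual tripleo_container_manage module code):

# Illustrative only: rebuild a podman run command from a TripleO-style
# config_data dict, using the keys that appear in the records above.
# (config_id / container_name / managed_by / config_data labels are added
# separately by the caller and are omitted here.)
def podman_run_args(name, cfg):
    args = ["podman", "run", "--name", name,
            "--conmon-pidfile", f"/run/{name}.pid",
            "--detach=%s" % cfg.get("detach", True)]
    if "cgroupns" in cfg:
        args += ["--cgroupns=%s" % cfg["cgroupns"]]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    args += ["--log-driver", "k8s-file",
             "--log-opt", f"path=/var/log/containers/stdouts/{name}.log"]
    if "net" in cfg:
        args += ["--network", cfg["net"]]
    if "pid" in cfg:
        args += ["--pid", cfg["pid"]]
    if "pids_limit" in cfg:
        args += ["--pids-limit", str(cfg["pids_limit"])]
    if "privileged" in cfg:
        args += ["--privileged=%s" % cfg["privileged"]]
    for opt in cfg.get("security_opt", []):
        args += ["--security-opt", opt]
    for ulimit in cfg.get("ulimit", []):
        args += ["--ulimit", ulimit]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    args.append(cfg["image"])
    command = cfg.get("command")
    if command:
        args += command.split() if isinstance(command, str) else list(command)
    return args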
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772d18a828f26c879d25d0b8106c8b9bfd3703359b5039935e3e876ecde7403/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772d18a828f26c879d25d0b8106c8b9bfd3703359b5039935e3e876ecde7403/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772d18a828f26c879d25d0b8106c8b9bfd3703359b5039935e3e876ecde7403/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost podman[61260]: 2025-11-23 08:06:58.390516121 +0000 UTC m=+0.309253613 container init 30ce9086fc1f4afee8749ae9e7b8f3e870c78b0b960fd18b27d66accff5b5d79 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:06:58 localhost systemd[61360]: Queued start job for default target Main User Target. Nov 23 03:06:58 localhost systemd[61360]: Created slice User Application Slice. Nov 23 03:06:58 localhost systemd[61360]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Nov 23 03:06:58 localhost systemd[61360]: Started Daily Cleanup of User's Temporary Directories. Nov 23 03:06:58 localhost systemd[61360]: Reached target Paths. Nov 23 03:06:58 localhost systemd[61360]: Reached target Timers. Nov 23 03:06:58 localhost systemd[61360]: Starting D-Bus User Message Bus Socket... 
Nov 23 03:06:58 localhost systemd[61360]: Starting Create User's Volatile Files and Directories... Nov 23 03:06:58 localhost systemd[1]: libpod-30ce9086fc1f4afee8749ae9e7b8f3e870c78b0b960fd18b27d66accff5b5d79.scope: Deactivated successfully. Nov 23 03:06:58 localhost systemd[61360]: Listening on D-Bus User Message Bus Socket. Nov 23 03:06:58 localhost systemd[61360]: Reached target Sockets. Nov 23 03:06:58 localhost podman[61260]: 2025-11-23 08:06:58.449675502 +0000 UTC m=+0.368412984 container start 30ce9086fc1f4afee8749ae9e7b8f3e870c78b0b960fd18b27d66accff5b5d79 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12) Nov 23 03:06:58 localhost podman[61260]: 2025-11-23 08:06:58.451183229 +0000 UTC m=+0.369920721 container attach 30ce9086fc1f4afee8749ae9e7b8f3e870c78b0b960fd18b27d66accff5b5d79 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, url=https://www.redhat.com, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, container_name=nova_statedir_owner, managed_by=tripleo_ansible) Nov 23 03:06:58 localhost podman[61260]: 2025-11-23 08:06:58.453701567 +0000 UTC m=+0.372439079 container died 30ce9086fc1f4afee8749ae9e7b8f3e870c78b0b960fd18b27d66accff5b5d79 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:06:58 localhost systemd[61360]: Finished Create User's Volatile Files and Directories. Nov 23 03:06:58 localhost systemd[61360]: Reached target Basic System. Nov 23 03:06:58 localhost systemd[1]: Started User Manager for UID 0. Nov 23 03:06:58 localhost systemd[1]: Started Session c1 of User root. 
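Editor note: nova_statedir_owner runs /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py once (detach=False, net=none, user=root) and exits almost immediately, as the container died event above shows. The script itself is not present in this log; a rough sketch of what such an ownership pass could look like, assuming it walks /var/lib/nova, skips paths matching NOVA_STATEDIR_OWNERSHIP_SKIP and chowns the rest to the nova user (the real nova_statedir_ownership.py shipped in the image may differ):

# Rough illustration only; NOT the nova_statedir_ownership.py from the image.
import os
import pwd

def fix_statedir_ownership(statedir="/var/lib/nova"):
    skip = os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "")
    debug = os.environ.get("__OS_DEBUG", "false").lower() == "true"
    nova = pwd.getpwnam("nova")          # assumed target owner
    for root, dirs, files in os.walk(statedir):
        if skip and skip in root:
            dirs[:] = []                 # do not descend into skipped trees
            continue
        for entry in [root] + [os.path.join(root, f) for f in files]:
            st = os.lstat(entry)
            if (st.st_uid, st.st_gid) != (nova.pw_uid, nova.pw_gid):
                if debug:
                    print(f"chown {entry} -> nova:nova")
                os.lchown(entry, nova.pw_uid, nova.pw_gid)

if __name__ == "__main__":
    fix_statedir_ownership()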
Nov 23 03:06:58 localhost systemd[1]: Started Session c2 of User root. Nov 23 03:06:58 localhost systemd[61360]: Reached target Main User Target. Nov 23 03:06:58 localhost systemd[61360]: Startup finished in 139ms. Nov 23 03:06:58 localhost podman[61379]: 2025-11-23 08:06:58.461422146 +0000 UTC m=+0.148431525 container died a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2025-11-18T22:49:49Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-rsyslog, tcib_managed=true, container_name=rsyslog, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container) Nov 23 03:06:58 localhost systemd[1]: session-c1.scope: Deactivated successfully. Nov 23 03:06:58 localhost systemd[1]: session-c2.scope: Deactivated successfully. 
Nov 23 03:06:58 localhost podman[61379]: 2025-11-23 08:06:58.554048944 +0000 UTC m=+0.241058303 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:06:58 localhost systemd[1]: libpod-conmon-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd.scope: Deactivated successfully. 
Nov 23 03:06:58 localhost podman[61327]: 2025-11-23 08:06:58.460090795 +0000 UTC m=+0.203215211 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, container_name=collectd, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:06:58 localhost podman[61482]: 2025-11-23 08:06:58.620454518 +0000 UTC m=+0.166135323 container cleanup 30ce9086fc1f4afee8749ae9e7b8f3e870c78b0b960fd18b27d66accff5b5d79 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=nova_statedir_owner, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:06:58 localhost systemd[1]: libpod-conmon-30ce9086fc1f4afee8749ae9e7b8f3e870c78b0b960fd18b27d66accff5b5d79.scope: Deactivated successfully. Nov 23 03:06:58 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Nov 23 03:06:58 localhost podman[61330]: 2025-11-23 08:06:58.661239621 +0000 UTC m=+0.400022283 container cleanup 55a55e21159b21fe881adb60ea43a39f14be9792343d7ca41434253a7cefba02 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_init_log, 
maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible) Nov 23 03:06:58 localhost systemd[1]: libpod-conmon-55a55e21159b21fe881adb60ea43a39f14be9792343d7ca41434253a7cefba02.scope: Deactivated successfully. Nov 23 03:06:58 localhost podman[61327]: 2025-11-23 08:06:58.698102722 +0000 UTC m=+0.441227128 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, container_name=collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:06:58 localhost podman[61327]: unhealthy Nov 23 03:06:58 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:06:58 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Failed with result 'exit-code'. Nov 23 03:06:58 localhost podman[61598]: 2025-11-23 08:06:58.815470085 +0000 UTC m=+0.078883713 container create bb1d23d516a56617282b2c2fb64be37968a6a8c8b5f0fae5a4a643cb6d0f63a3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4) Nov 23 03:06:58 localhost systemd[1]: Started libpod-conmon-bb1d23d516a56617282b2c2fb64be37968a6a8c8b5f0fae5a4a643cb6d0f63a3.scope. Nov 23 03:06:58 localhost systemd[1]: Started libcrun container. 
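Editor note: the collectd healthcheck ('/openstack/healthcheck' per its config_data) just reported unhealthy above, and the transient healthcheck unit exited with status 1. A small sketch for pulling the recorded health status and the last few probe outputs off the host via podman inspect; the field is State.Health on current podman and State.Healthcheck on older releases, hence the fallback:

# Check a podman healthcheck result, e.g. for the collectd container above.
import json
import subprocess

def health_status(container="collectd"):
    out = subprocess.run(["podman", "inspect", container],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown"), health.get("Log", [])

if __name__ == "__main__":
    status, log = health_status()
    print(status)
    for entry in log[-3:]:   # last few probe runs with exit code and output
        print(entry.get("ExitCode"), entry.get("Output", "").strip())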
Nov 23 03:06:58 localhost podman[61598]: 2025-11-23 08:06:58.781177344 +0000 UTC m=+0.044591022 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c7afe789f8282c8bf955cc2a3d04c1e3de5492a4e87c0d5c8697c6c76b2000/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c7afe789f8282c8bf955cc2a3d04c1e3de5492a4e87c0d5c8697c6c76b2000/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c7afe789f8282c8bf955cc2a3d04c1e3de5492a4e87c0d5c8697c6c76b2000/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c7afe789f8282c8bf955cc2a3d04c1e3de5492a4e87c0d5c8697c6c76b2000/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:58 localhost podman[61598]: 2025-11-23 08:06:58.89023862 +0000 UTC m=+0.153652248 container init bb1d23d516a56617282b2c2fb64be37968a6a8c8b5f0fae5a4a643cb6d0f63a3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public) Nov 23 03:06:58 localhost podman[61598]: 2025-11-23 08:06:58.899341381 +0000 UTC m=+0.162755009 container start bb1d23d516a56617282b2c2fb64be37968a6a8c8b5f0fae5a4a643cb6d0f63a3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:06:59 localhost podman[61654]: 2025-11-23 08:06:59.094490672 +0000 UTC m=+0.095977982 container create e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, container_name=nova_virtsecretd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, 
maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container) Nov 23 03:06:59 localhost systemd[1]: var-lib-containers-storage-overlay-91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580-merged.mount: Deactivated successfully. Nov 23 03:06:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd-userdata-shm.mount: Deactivated successfully. Nov 23 03:06:59 localhost podman[61654]: 2025-11-23 08:06:59.048004363 +0000 UTC m=+0.049491703 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:06:59 localhost systemd[1]: Started libpod-conmon-e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84.scope. Nov 23 03:06:59 localhost systemd[1]: Started libcrun container. Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6bca165dc068117408960944f8051008cf6e2ead3af3bab76d84f596641295/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6bca165dc068117408960944f8051008cf6e2ead3af3bab76d84f596641295/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6bca165dc068117408960944f8051008cf6e2ead3af3bab76d84f596641295/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6bca165dc068117408960944f8051008cf6e2ead3af3bab76d84f596641295/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6bca165dc068117408960944f8051008cf6e2ead3af3bab76d84f596641295/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6bca165dc068117408960944f8051008cf6e2ead3af3bab76d84f596641295/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6bca165dc068117408960944f8051008cf6e2ead3af3bab76d84f596641295/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost podman[61654]: 2025-11-23 08:06:59.182049912 +0000 UTC m=+0.183537222 container init e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, container_name=nova_virtsecretd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Nov 23 03:06:59 localhost systemd[1]: tmp-crun.V5gX9w.mount: Deactivated successfully. 
Nov 23 03:06:59 localhost podman[61654]: 2025-11-23 08:06:59.195648213 +0000 UTC m=+0.197135533 container start e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., container_name=nova_virtsecretd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public) Nov 23 03:06:59 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env 
TRIPLEO_CONFIG_HASH=b43218eec4380850a20e0a337fdcf6cf --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume 
/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:06:59 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Nov 23 03:06:59 localhost systemd[1]: Started Session c3 of User root. Nov 23 03:06:59 localhost systemd[1]: session-c3.scope: Deactivated successfully. Nov 23 03:06:59 localhost podman[61796]: 2025-11-23 08:06:59.708182078 +0000 UTC m=+0.095862709 container create a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:06:59 localhost podman[61802]: 2025-11-23 08:06:59.74346611 +0000 UTC m=+0.111380629 container create 5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtnodedevd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.) Nov 23 03:06:59 localhost systemd[1]: Started libpod-conmon-a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.scope. Nov 23 03:06:59 localhost podman[61796]: 2025-11-23 08:06:59.660835732 +0000 UTC m=+0.048516393 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Nov 23 03:06:59 localhost systemd[1]: Started libcrun container. 
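The two container create events above (iscsid and nova_virtnodedevd) follow the same lifecycle podman logs for every TripleO-managed container on this host: image pull, container create, a libpod-conmon-<id>.scope unit, then a libcrun container. All of them carry the label config_id=tripleo_step3, so the step-3 set can be listed after the fact. A minimal sketch, assuming podman is run as root on this host; the names and label come from the log entries above:

    # List every container created by deployment step 3, with its current state.
    podman ps -a --filter label=config_id=tripleo_step3 --format '{{.Names}} {{.Status}}'
    # Or check a single container by name, e.g. the iscsid container created above.
    podman ps -a --filter name=iscsid --format '{{.Names}} {{.Status}}'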
Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575692a885e4e5d5a8b1e76315957cc96af13a896db846450cad3752e5067ba2/merged/etc/target supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575692a885e4e5d5a8b1e76315957cc96af13a896db846450cad3752e5067ba2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost podman[61802]: 2025-11-23 08:06:59.677452977 +0000 UTC m=+0.045367516 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:06:59 localhost systemd[1]: Started libpod-conmon-5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2.scope. Nov 23 03:06:59 localhost systemd[1]: Started libcrun container. Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 23 03:06:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
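The kernel "xfs filesystem being remounted ... supports timestamps until 2038 (0x7fffffff)" lines above are informational, not errors: they appear when a container bind mount lands on an XFS filesystem created without the bigtime feature, whose inode timestamps top out at 2038. A minimal check, assuming the container storage sits on XFS and the host's xfsprogs is new enough to report the field (paths are illustrative, adjust as needed):

    # Find the mount point backing container storage, then look for bigtime=0/1 in its geometry.
    mp=$(df --output=target /var/lib/containers/storage | tail -n 1)
    xfs_info "$mp" | grep -o 'bigtime=[01]'   # field is absent on older xfsprogs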
Nov 23 03:06:59 localhost podman[61796]: 2025-11-23 08:06:59.804841609 +0000 UTC m=+0.192522230 container init a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, container_name=iscsid, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=) Nov 23 03:06:59 localhost podman[61802]: 2025-11-23 08:06:59.807040128 +0000 UTC m=+0.174954657 container init 5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 
'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=nova_virtnodedevd, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com) Nov 23 03:06:59 localhost podman[61802]: 2025-11-23 08:06:59.819803912 +0000 UTC m=+0.187718431 container start 5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, container_name=nova_virtnodedevd, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible) Nov 23 03:06:59 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b43218eec4380850a20e0a337fdcf6cf --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:06:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:06:59 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
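The PODMAN-CONTAINER-DEBUG entry above shows how tripleo_container_manage turns the config_data label into CLI flags: 'net': 'host' becomes --network host, 'pid': 'host' becomes --pid host, 'pids_limit' becomes --pids-limit, each 'ulimit' and 'volumes' entry becomes a repeated --ulimit/--volume, and stdout is captured by the k8s-file log driver at the path named in --log-opt. A quick way to confirm the effective settings on the running container, sketched against podman's docker-compatible inspect fields (field names can vary slightly between podman versions):

    podman inspect nova_virtnodedevd --format '{{.HostConfig.NetworkMode}} {{.HostConfig.PidMode}} {{.HostConfig.PidsLimit}}'
    # The container's stdout/stderr, as wired up by --log-opt path=... above.
    tail -n 20 /var/log/containers/stdouts/nova_virtnodedevd.log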
Nov 23 03:06:59 localhost podman[61796]: 2025-11-23 08:06:59.84362665 +0000 UTC m=+0.231307271 container start a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:06:59 localhost systemd[1]: Started Session c4 of User root. 
Nov 23 03:06:59 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=97cfc313337c76270fcb8497fac0e51e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Nov 23 03:06:59 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Nov 23 03:06:59 localhost systemd[1]: Started Session c5 of User root. Nov 23 03:06:59 localhost systemd[1]: session-c4.scope: Deactivated successfully. 
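In the iscsid podman run command above, the volume suffixes are SELinux and mount options rather than parts of the path: ':ro' mounts read-only, ':z' relabels the source to the shared container context (container_file_t) so the container can use it, and the ':shared,z' used on the libvirt mounts additionally enables two-way mount propagation. A hedged spot check of the relabeled sources named in that command:

    # After the container starts, these host paths should carry the container_file_t label.
    ls -Zd /etc/target /var/lib/iscsi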
Nov 23 03:06:59 localhost podman[61839]: 2025-11-23 08:06:59.942656755 +0000 UTC m=+0.087584182 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, architecture=x86_64, config_id=tripleo_step3, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, build-date=2025-11-18T23:44:13Z) Nov 23 03:06:59 localhost kernel: Loading iSCSI transport class v2.0-870. Nov 23 03:06:59 localhost systemd[1]: session-c5.scope: Deactivated successfully. 
Nov 23 03:06:59 localhost podman[61839]: 2025-11-23 08:06:59.991410875 +0000 UTC m=+0.136338312 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, url=https://www.redhat.com, release=1761123044, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, container_name=iscsid) Nov 23 03:07:00 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
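The health_status and exec_died events above come from the healthcheck defined in config_data ('test': '/openstack/healthcheck'): systemd starts a transient unit that runs "podman healthcheck run <container-id>", the exec_died event marks that check process exiting, and the status reads "starting" until a run succeeds. The same check can be re-run by hand; a sketch assuming the container name from the log (the inspect key is .State.Health on recent podman, .State.Healthcheck on some older releases):

    podman healthcheck run iscsid && echo healthy || echo unhealthy
    podman inspect iscsid --format '{{.State.Health.Status}}'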
Nov 23 03:07:00 localhost podman[61972]: 2025-11-23 08:07:00.431512267 +0000 UTC m=+0.084574248 container create f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_virtstoraged, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:07:00 localhost podman[61972]: 2025-11-23 08:07:00.385902696 +0000 UTC m=+0.038964717 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:07:00 localhost systemd[1]: Started 
libpod-conmon-f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a.scope. Nov 23 03:07:00 localhost systemd[1]: Started libcrun container. Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905224559a7d60475da0e4c3f30d7498ad95ab3c6c514578f0762871cf74312f/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905224559a7d60475da0e4c3f30d7498ad95ab3c6c514578f0762871cf74312f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905224559a7d60475da0e4c3f30d7498ad95ab3c6c514578f0762871cf74312f/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905224559a7d60475da0e4c3f30d7498ad95ab3c6c514578f0762871cf74312f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905224559a7d60475da0e4c3f30d7498ad95ab3c6c514578f0762871cf74312f/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905224559a7d60475da0e4c3f30d7498ad95ab3c6c514578f0762871cf74312f/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905224559a7d60475da0e4c3f30d7498ad95ab3c6c514578f0762871cf74312f/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:00 localhost podman[61972]: 2025-11-23 08:07:00.531163751 +0000 UTC m=+0.184225732 container init f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=nova_virtstoraged, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:07:00 localhost podman[61972]: 2025-11-23 08:07:00.542021927 +0000 UTC m=+0.195083908 container start f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, container_name=nova_virtstoraged, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://www.redhat.com, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt) Nov 23 03:07:00 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b43218eec4380850a20e0a337fdcf6cf --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:07:00 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Nov 23 03:07:00 localhost systemd[1]: Started Session c6 of User root. Nov 23 03:07:00 localhost systemd[1]: session-c6.scope: Deactivated successfully. 
Nov 23 03:07:01 localhost podman[62075]: 2025-11-23 08:07:01.047828784 +0000 UTC m=+0.092477973 container create dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:07:01 localhost systemd[1]: Started libpod-conmon-dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0.scope. 
Nov 23 03:07:01 localhost podman[62075]: 2025-11-23 08:07:01.001415667 +0000 UTC m=+0.046064906 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:07:01 localhost systemd[1]: Started libcrun container. Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost podman[62075]: 2025-11-23 08:07:01.129381239 +0000 UTC m=+0.174030438 container init dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtqemud, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 
'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:07:01 localhost podman[62075]: 2025-11-23 08:07:01.140055379 +0000 UTC m=+0.184704578 container start dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, container_name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.) 
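The container create/init/start events above all carry the TripleO-assigned labels (config_id=tripleo_step3, container_name, managed_by=tripleo_ansible). A minimal sketch, assuming podman is on PATH and that `podman ps --format json` exposes the Names/State/Image fields as in current releases, of listing the containers for one step by label:

```python
#!/usr/bin/env python3
"""List the podman containers created for a given TripleO step by label.

A sketch under assumptions, not part of the deployment tooling: the label value
is taken from the journal entries above.
"""
import json
import subprocess

STEP_LABEL = "config_id=tripleo_step3"  # label value seen in the log entries above


def containers_for_step(label: str):
    # `podman ps --format json` returns a JSON array of container objects.
    out = subprocess.run(
        ["podman", "ps", "--all", "--filter", f"label={label}", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out) if out.strip() else []


if __name__ == "__main__":
    for ctr in containers_for_step(STEP_LABEL):
        names = ",".join(ctr.get("Names", []))
        print(f"{names:25s} {ctr.get('State', '?'):10s} {ctr.get('Image', '')}")
```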
Nov 23 03:07:01 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b43218eec4380850a20e0a337fdcf6cf --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume 
/var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:07:01 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Nov 23 03:07:01 localhost systemd[1]: Started Session c7 of User root. Nov 23 03:07:01 localhost systemd[1]: session-c7.scope: Deactivated successfully. Nov 23 03:07:01 localhost podman[62182]: 2025-11-23 08:07:01.66149699 +0000 UTC m=+0.093754424 container create 488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, 
batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtproxyd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container) Nov 23 03:07:01 localhost systemd[1]: Started libpod-conmon-488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70.scope. Nov 23 03:07:01 localhost podman[62182]: 2025-11-23 08:07:01.60692948 +0000 UTC m=+0.039186944 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:07:01 localhost systemd[1]: Started libcrun container. Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:01 localhost podman[62182]: 2025-11-23 08:07:01.743171198 +0000 UTC m=+0.175428632 container init 488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, container_name=nova_virtproxyd, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:35:22Z, version=17.1.12) Nov 23 03:07:01 localhost podman[62182]: 2025-11-23 08:07:01.752513646 +0000 UTC m=+0.184771080 container start 488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com) Nov 23 03:07:01 localhost python3[61072]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b43218eec4380850a20e0a337fdcf6cf --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:07:01 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Nov 23 03:07:01 localhost systemd[1]: Started Session c8 of User root. Nov 23 03:07:01 localhost systemd[1]: session-c8.scope: Deactivated successfully. 
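The PODMAN-CONTAINER-DEBUG entries record the exact `podman run` invocations, including `--detach` and `--conmon-pidfile /run/<name>.pid`. A minimal spot-check sketch, assuming the container names and pidfile paths from those commands (nova_virtqemud, nova_virtproxyd), that asks podman for the container state and verifies the conmon monitor is still alive:

```python
#!/usr/bin/env python3
"""Check that a TripleO-managed container actually reached the running state."""
import pathlib
import subprocess


def container_status(name: str) -> str:
    # Go-template query against the container's inspect data.
    return subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Status}}", name],
        check=True, capture_output=True, text=True,
    ).stdout.strip()


def conmon_alive(pidfile: str) -> bool:
    # --conmon-pidfile wrote the monitor PID here; /proc/<pid> exists while it runs.
    path = pathlib.Path(pidfile)
    if not path.is_file():
        return False
    pid = path.read_text().strip()
    return pathlib.Path("/proc", pid).is_dir()


if __name__ == "__main__":
    for name in ("nova_virtqemud", "nova_virtproxyd"):
        print(name, container_status(name), conmon_alive(f"/run/{name}.pid"))
```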
Nov 23 03:07:02 localhost python3[62261]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:02 localhost python3[62277]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:02 localhost python3[62293]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:03 localhost python3[62309]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:03 localhost python3[62325]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:03 localhost python3[62341]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:03 localhost python3[62357]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:04 localhost python3[62373]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:04 localhost python3[62389]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:04 localhost python3[62405]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:07:05 localhost python3[62421]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:07:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:07:05 localhost systemd[1]: tmp-crun.LkrUpS.mount: Deactivated successfully. Nov 23 03:07:05 localhost podman[62438]: 2025-11-23 08:07:05.254264638 +0000 UTC m=+0.092858845 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z) Nov 23 03:07:05 localhost python3[62437]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:07:05 localhost podman[62438]: 2025-11-23 08:07:05.470293374 +0000 UTC m=+0.308887601 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Nov 23 03:07:05 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
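The transient unit above runs `/usr/bin/podman healthcheck run <container-id>` on a timer, and the command's exit status is what records the container as healthy. A minimal sketch of the same call by container name, assuming metrics_qdr defines a healthcheck (its config_data sets 'test': '/openstack/healthcheck'):

```python
#!/usr/bin/env python3
"""Run a container healthcheck by hand and interpret the exit code."""
import subprocess
import sys


def healthcheck(container: str) -> int:
    # Exit code 0 = healthy; non-zero = unhealthy or no healthcheck defined.
    proc = subprocess.run(["podman", "healthcheck", "run", container])
    return proc.returncode


if __name__ == "__main__":
    rc = healthcheck("metrics_qdr")
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
    sys.exit(rc)
```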
Nov 23 03:07:05 localhost python3[62481]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:07:05 localhost python3[62497]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:07:06 localhost python3[62513]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:07:06 localhost python3[62529]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:07:06 localhost python3[62545]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:07:06 localhost python3[62561]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:07:07 localhost python3[62622]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885226.9341526-100436-227068215307096/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:08 localhost python3[62651]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885226.9341526-100436-227068215307096/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:08 localhost python3[62680]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885226.9341526-100436-227068215307096/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:09 localhost python3[62709]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885226.9341526-100436-227068215307096/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:10 localhost python3[62738]: ansible-copy Invoked with 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885226.9341526-100436-227068215307096/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:10 localhost python3[62767]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885226.9341526-100436-227068215307096/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:11 localhost python3[62796]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885226.9341526-100436-227068215307096/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:11 localhost python3[62825]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885226.9341526-100436-227068215307096/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:11 localhost systemd[1]: Stopping User Manager for UID 0... Nov 23 03:07:11 localhost systemd[61360]: Activating special unit Exit the Session... Nov 23 03:07:11 localhost systemd[61360]: Stopped target Main User Target. Nov 23 03:07:11 localhost systemd[61360]: Stopped target Basic System. Nov 23 03:07:11 localhost systemd[61360]: Stopped target Paths. Nov 23 03:07:11 localhost systemd[61360]: Stopped target Sockets. Nov 23 03:07:11 localhost systemd[61360]: Stopped target Timers. Nov 23 03:07:11 localhost systemd[61360]: Stopped Daily Cleanup of User's Temporary Directories. Nov 23 03:07:11 localhost systemd[61360]: Closed D-Bus User Message Bus Socket. Nov 23 03:07:11 localhost systemd[61360]: Stopped Create User's Volatile Files and Directories. Nov 23 03:07:11 localhost systemd[61360]: Removed slice User Application Slice. Nov 23 03:07:11 localhost systemd[61360]: Reached target Shutdown. Nov 23 03:07:11 localhost systemd[61360]: Finished Exit the Session. Nov 23 03:07:11 localhost systemd[61360]: Reached target Exit the Session. Nov 23 03:07:11 localhost systemd[1]: user@0.service: Deactivated successfully. Nov 23 03:07:11 localhost systemd[1]: Stopped User Manager for UID 0. Nov 23 03:07:11 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Nov 23 03:07:12 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Nov 23 03:07:12 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Nov 23 03:07:12 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. 
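The ansible-stat tasks above report sha1 checksums (checksum_algorithm=sha1) for existing units, and the ansible-copy tasks write the tripleo_*.service files under /etc/systemd/system (the remaining rsyslog unit follows below). A minimal sketch that reproduces the same checksum for the deployed service files, assuming those paths:

```python
#!/usr/bin/env python3
"""Reproduce the sha1 checksum ansible-stat reports for the deployed unit files."""
import glob
import hashlib


def sha1_of(path: str) -> str:
    h = hashlib.sha1()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


if __name__ == "__main__":
    for unit in sorted(glob.glob("/etc/systemd/system/tripleo_*.service")):
        print(sha1_of(unit), unit)
```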
Nov 23 03:07:12 localhost systemd[1]: Removed slice User Slice of UID 0. Nov 23 03:07:12 localhost python3[62854]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885226.9341526-100436-227068215307096/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:12 localhost python3[62872]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 03:07:12 localhost systemd[1]: Reloading. Nov 23 03:07:12 localhost systemd-sysv-generator[62897]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:12 localhost systemd-rc-local-generator[62892]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:13 localhost python3[62923]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:07:13 localhost systemd[1]: Reloading. Nov 23 03:07:13 localhost systemd-rc-local-generator[62949]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:13 localhost systemd-sysv-generator[62954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:13 localhost systemd[1]: Starting collectd container... Nov 23 03:07:13 localhost systemd[1]: Started collectd container. Nov 23 03:07:14 localhost python3[62991]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:07:14 localhost systemd[1]: Reloading. Nov 23 03:07:14 localhost systemd-sysv-generator[63020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:14 localhost systemd-rc-local-generator[63014]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:14 localhost systemd[1]: Starting iscsid container... Nov 23 03:07:14 localhost systemd[1]: Started iscsid container. 
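Each ansible-systemd task above amounts to a daemon reload plus enabling and restarting the named unit (state=restarted, enabled=True, daemon_reload=True); the same pattern repeats below for the nova_virt* and rsyslog units. A minimal sketch of that sequence, run as root, for the two units restarted here (the real module reloads once per task):

```python
#!/usr/bin/env python3
"""Roughly what the ansible-systemd tasks in the log do on the host."""
import subprocess

UNITS = [
    "tripleo_collectd.service",
    "tripleo_iscsid.service",
]


def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    sh("systemctl", "daemon-reload")      # ansible-systemd daemon_reload=True
    for unit in UNITS:
        sh("systemctl", "enable", unit)   # enabled=True
        sh("systemctl", "restart", unit)  # state=restarted
```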
Nov 23 03:07:15 localhost python3[63057]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:07:15 localhost systemd[1]: Reloading. Nov 23 03:07:15 localhost systemd-sysv-generator[63087]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:15 localhost systemd-rc-local-generator[63084]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:15 localhost systemd[1]: Starting nova_virtlogd_wrapper container... Nov 23 03:07:15 localhost systemd[1]: Started nova_virtlogd_wrapper container. Nov 23 03:07:16 localhost python3[63124]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:07:16 localhost systemd[1]: Reloading. Nov 23 03:07:16 localhost systemd-sysv-generator[63150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:16 localhost systemd-rc-local-generator[63147]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:16 localhost systemd[1]: Starting nova_virtnodedevd container... Nov 23 03:07:17 localhost tripleo-start-podman-container[63163]: Creating additional drop-in dependency for "nova_virtnodedevd" (5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2) Nov 23 03:07:17 localhost systemd[1]: Reloading. Nov 23 03:07:17 localhost systemd-rc-local-generator[63220]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:17 localhost systemd-sysv-generator[63226]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:17 localhost systemd[1]: Started nova_virtnodedevd container. Nov 23 03:07:17 localhost python3[63247]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:07:18 localhost systemd[1]: Reloading. Nov 23 03:07:19 localhost systemd-rc-local-generator[63277]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:19 localhost systemd-sysv-generator[63281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:19 localhost systemd[1]: Starting nova_virtproxyd container... Nov 23 03:07:19 localhost tripleo-start-podman-container[63288]: Creating additional drop-in dependency for "nova_virtproxyd" (488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70) Nov 23 03:07:19 localhost systemd[1]: Reloading. Nov 23 03:07:19 localhost systemd-rc-local-generator[63344]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:19 localhost systemd-sysv-generator[63349]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:19 localhost systemd[1]: Started nova_virtproxyd container. Nov 23 03:07:20 localhost python3[63373]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:07:20 localhost systemd[1]: Reloading. Nov 23 03:07:20 localhost systemd-sysv-generator[63403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:20 localhost systemd-rc-local-generator[63397]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:20 localhost systemd[1]: Starting nova_virtqemud container... Nov 23 03:07:20 localhost tripleo-start-podman-container[63413]: Creating additional drop-in dependency for "nova_virtqemud" (dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0) Nov 23 03:07:20 localhost systemd[1]: Reloading. Nov 23 03:07:21 localhost systemd-sysv-generator[63473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:21 localhost systemd-rc-local-generator[63470]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:21 localhost systemd[1]: Started nova_virtqemud container. Nov 23 03:07:21 localhost python3[63498]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:07:21 localhost systemd[1]: Reloading. 
Nov 23 03:07:22 localhost systemd-rc-local-generator[63522]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:22 localhost systemd-sysv-generator[63528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:22 localhost systemd[1]: Starting nova_virtsecretd container... Nov 23 03:07:22 localhost tripleo-start-podman-container[63537]: Creating additional drop-in dependency for "nova_virtsecretd" (e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84) Nov 23 03:07:22 localhost systemd[1]: Reloading. Nov 23 03:07:22 localhost systemd-sysv-generator[63601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:22 localhost systemd-rc-local-generator[63597]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:22 localhost systemd[1]: Started nova_virtsecretd container. Nov 23 03:07:23 localhost python3[63623]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:07:23 localhost systemd[1]: Reloading. Nov 23 03:07:23 localhost systemd-rc-local-generator[63651]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:23 localhost systemd-sysv-generator[63656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:23 localhost systemd[1]: Starting nova_virtstoraged container... Nov 23 03:07:24 localhost tripleo-start-podman-container[63663]: Creating additional drop-in dependency for "nova_virtstoraged" (f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a) Nov 23 03:07:24 localhost systemd[1]: Reloading. Nov 23 03:07:24 localhost systemd-sysv-generator[63724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:24 localhost systemd-rc-local-generator[63721]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:24 localhost systemd[1]: Started nova_virtstoraged container. 
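Once tripleo-start-podman-container has created its drop-in dependencies, each tripleo_nova_virt* service should report active. A minimal verification sketch with `systemctl is-active`, using the unit names restarted above:

```python
#!/usr/bin/env python3
"""Confirm the container-backed units started above are active."""
import subprocess

UNITS = [
    "tripleo_nova_virtlogd_wrapper.service",
    "tripleo_nova_virtnodedevd.service",
    "tripleo_nova_virtproxyd.service",
    "tripleo_nova_virtqemud.service",
    "tripleo_nova_virtsecretd.service",
    "tripleo_nova_virtstoraged.service",
]

if __name__ == "__main__":
    for unit in UNITS:
        state = subprocess.run(
            ["systemctl", "is-active", unit],
            capture_output=True, text=True,
        ).stdout.strip()
        print(f"{unit:45s} {state}")
```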
Nov 23 03:07:25 localhost python3[63748]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:07:25 localhost systemd[1]: Reloading. Nov 23 03:07:25 localhost systemd-rc-local-generator[63777]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:07:25 localhost systemd-sysv-generator[63781]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:07:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:07:25 localhost systemd[1]: Starting rsyslog container... Nov 23 03:07:25 localhost systemd[1]: tmp-crun.CZula4.mount: Deactivated successfully. Nov 23 03:07:25 localhost systemd[1]: Started libcrun container. Nov 23 03:07:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:25 localhost podman[63788]: 2025-11-23 08:07:25.64499416 +0000 UTC m=+0.138817527 container init a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, version=17.1.12, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:07:25 localhost podman[63788]: 2025-11-23 08:07:25.655424713 +0000 UTC m=+0.149248080 container start a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1) Nov 23 03:07:25 localhost podman[63788]: rsyslog Nov 23 03:07:25 localhost systemd[1]: Started rsyslog container. Nov 23 03:07:25 localhost systemd[1]: libpod-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd.scope: Deactivated successfully. 
Nov 23 03:07:25 localhost podman[63823]: 2025-11-23 08:07:25.838597943 +0000 UTC m=+0.058115110 container died a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, release=1761123044, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z) Nov 23 03:07:25 localhost podman[63823]: 2025-11-23 08:07:25.866117895 +0000 UTC m=+0.085635022 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, com.redhat.component=openstack-rsyslog-container, name=rhosp17/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:07:25 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:07:25 localhost podman[63840]: 2025-11-23 08:07:25.970864827 +0000 UTC m=+0.066791818 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, release=1761123044, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, version=17.1.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-rsyslog-container) Nov 23 03:07:25 localhost podman[63840]: rsyslog Nov 23 03:07:25 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Nov 23 03:07:26 localhost python3[63867]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:26 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1. Nov 23 03:07:26 localhost systemd[1]: Stopped rsyslog container. Nov 23 03:07:26 localhost systemd[1]: Starting rsyslog container... Nov 23 03:07:26 localhost systemd[1]: Started libcrun container. Nov 23 03:07:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:26 localhost podman[63868]: 2025-11-23 08:07:26.435397316 +0000 UTC m=+0.122492992 container init a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., container_name=rsyslog, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4) Nov 23 03:07:26 localhost podman[63868]: 2025-11-23 08:07:26.445181219 +0000 UTC m=+0.132276895 container start a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, 
version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog) Nov 23 03:07:26 localhost podman[63868]: rsyslog Nov 23 03:07:26 localhost systemd[1]: Started rsyslog container. Nov 23 03:07:26 localhost systemd[1]: libpod-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd.scope: Deactivated successfully. Nov 23 03:07:26 localhost podman[63890]: 2025-11-23 08:07:26.610410524 +0000 UTC m=+0.055264032 container died a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog) Nov 23 03:07:26 localhost systemd[1]: tmp-crun.JpjhC9.mount: Deactivated successfully. Nov 23 03:07:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd-userdata-shm.mount: Deactivated successfully. 
Nov 23 03:07:26 localhost podman[63890]: 2025-11-23 08:07:26.641383862 +0000 UTC m=+0.086237340 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T22:49:49Z, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, container_name=rsyslog, name=rhosp17/openstack-rsyslog) Nov 23 03:07:26 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:07:26 localhost podman[63935]: 2025-11-23 08:07:26.728272952 +0000 UTC m=+0.062605080 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git) Nov 23 03:07:26 localhost podman[63935]: rsyslog Nov 23 03:07:26 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Nov 23 03:07:27 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2. Nov 23 03:07:27 localhost systemd[1]: Stopped rsyslog container. Nov 23 03:07:27 localhost systemd[1]: Starting rsyslog container... Nov 23 03:07:27 localhost systemd[1]: Started libcrun container. 
Nov 23 03:07:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:27 localhost podman[63990]: 2025-11-23 08:07:27.185404921 +0000 UTC m=+0.113816714 container init a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:07:27 localhost podman[63990]: 2025-11-23 08:07:27.194101041 +0000 UTC m=+0.122512894 container start a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T22:49:49Z) Nov 23 03:07:27 localhost podman[63990]: rsyslog Nov 23 03:07:27 localhost systemd[1]: Started rsyslog container. Nov 23 03:07:27 localhost systemd[1]: libpod-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd.scope: Deactivated successfully. 
Nov 23 03:07:27 localhost podman[64028]: 2025-11-23 08:07:27.341905056 +0000 UTC m=+0.044367455 container died a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_step3, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:07:27 localhost podman[64028]: 2025-11-23 08:07:27.370784139 +0000 UTC m=+0.073246498 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:07:27 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:07:27 localhost podman[64057]: 2025-11-23 08:07:27.464938044 +0000 UTC m=+0.057415659 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:49Z, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-rsyslog, release=1761123044, batch=17.1_20251118.1, container_name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=) Nov 23 03:07:27 localhost podman[64057]: rsyslog Nov 23 03:07:27 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Nov 23 03:07:27 localhost systemd[1]: var-lib-containers-storage-overlay-91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580-merged.mount: Deactivated successfully. Nov 23 03:07:27 localhost python3[64085]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005532584 step=3 update_config_hash_only=False Nov 23 03:07:27 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3. Nov 23 03:07:27 localhost systemd[1]: Stopped rsyslog container. Nov 23 03:07:27 localhost systemd[1]: Starting rsyslog container... Nov 23 03:07:27 localhost systemd[1]: Started libcrun container. Nov 23 03:07:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:27 localhost podman[64086]: 2025-11-23 08:07:27.931900668 +0000 UTC m=+0.114287518 container init a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T22:49:49Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Nov 23 03:07:27 localhost podman[64086]: 2025-11-23 08:07:27.940895737 +0000 UTC m=+0.123282587 container start a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc.) Nov 23 03:07:27 localhost podman[64086]: rsyslog Nov 23 03:07:27 localhost systemd[1]: Started rsyslog container. Nov 23 03:07:28 localhost systemd[1]: libpod-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd.scope: Deactivated successfully. Nov 23 03:07:28 localhost podman[64106]: 2025-11-23 08:07:28.109148175 +0000 UTC m=+0.059719770 container died a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Nov 23 03:07:28 localhost podman[64106]: 2025-11-23 08:07:28.130364941 +0000 UTC m=+0.080936506 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:49Z, container_name=rsyslog, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:07:28 localhost podman[64134]: 2025-11-23 08:07:28.217193179 +0000 UTC m=+0.061429883 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:49:49Z, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog) Nov 23 03:07:28 localhost podman[64134]: rsyslog Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Nov 23 03:07:28 localhost python3[64132]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. Nov 23 03:07:28 localhost systemd[1]: Stopped rsyslog container. Nov 23 03:07:28 localhost systemd[1]: Starting rsyslog container... Nov 23 03:07:28 localhost systemd[1]: Started libcrun container. 
Nov 23 03:07:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 23 03:07:28 localhost podman[64161]: 2025-11-23 08:07:28.610776082 +0000 UTC m=+0.126725644 container init a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:49Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3) Nov 23 03:07:28 localhost podman[64161]: 2025-11-23 08:07:28.6194559 +0000 UTC m=+0.135405462 container start a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, vcs-type=git) Nov 23 03:07:28 localhost podman[64161]: rsyslog Nov 23 03:07:28 localhost systemd[1]: Started rsyslog container. Nov 23 03:07:28 localhost python3[64162]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Nov 23 03:07:28 localhost systemd[1]: libpod-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd.scope: Deactivated successfully. Nov 23 03:07:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:07:28 localhost podman[64185]: 2025-11-23 08:07:28.827863691 +0000 UTC m=+0.072746323 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3) Nov 23 03:07:28 localhost podman[64185]: 2025-11-23 08:07:28.840193173 +0000 UTC m=+0.085075835 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, container_name=collectd, config_id=tripleo_step3, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack 
Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:07:28 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:07:28 localhost podman[64184]: 2025-11-23 08:07:28.863584977 +0000 UTC m=+0.111608436 container died a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, container_name=rsyslog, maintainer=OpenStack TripleO Team, distribution-scope=public) Nov 23 03:07:28 localhost podman[64184]: 2025-11-23 08:07:28.889384315 +0000 UTC m=+0.137407734 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, 
summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, container_name=rsyslog, io.openshift.expose-services=, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z) Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:07:28 localhost podman[64215]: 2025-11-23 08:07:28.974250513 +0000 UTC m=+0.059700770 container cleanup a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '531adc347d750bec89c43b39996bf2b8'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2025-11-18T22:49:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, release=1761123044, url=https://www.redhat.com) Nov 23 03:07:28 localhost podman[64215]: rsyslog Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Nov 23 03:07:29 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. Nov 23 03:07:29 localhost systemd[1]: Stopped rsyslog container. Nov 23 03:07:29 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Nov 23 03:07:29 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Nov 23 03:07:29 localhost systemd[1]: Failed to start rsyslog container. Nov 23 03:07:29 localhost systemd[1]: var-lib-containers-storage-overlay-91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580-merged.mount: Deactivated successfully. Nov 23 03:07:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5df7e7f7cd6f0f5e16236888c1741fdc0015458551ba9d905395dba29fdabcd-userdata-shm.mount: Deactivated successfully. Nov 23 03:07:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:07:30 localhost systemd[1]: tmp-crun.ssjgQ2.mount: Deactivated successfully. 
Nov 23 03:07:30 localhost podman[64227]: 2025-11-23 08:07:30.897260186 +0000 UTC m=+0.087496600 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container) Nov 23 03:07:30 localhost podman[64227]: 2025-11-23 08:07:30.930779954 +0000 UTC m=+0.121016308 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:07:30 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:07:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:07:35 localhost podman[64246]: 2025-11-23 08:07:35.899140621 +0000 UTC m=+0.084911569 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd) Nov 23 03:07:36 localhost podman[64246]: 2025-11-23 08:07:36.112136025 +0000 UTC m=+0.297906923 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container) Nov 23 03:07:36 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:07:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:07:59 localhost podman[64352]: 2025-11-23 08:07:59.898899078 +0000 UTC m=+0.085291552 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, build-date=2025-11-18T22:51:28Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git) Nov 23 03:07:59 localhost podman[64352]: 2025-11-23 08:07:59.908118543 +0000 UTC m=+0.094510957 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3) Nov 23 03:07:59 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:08:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:08:01 localhost podman[64373]: 2025-11-23 08:08:01.902517656 +0000 UTC m=+0.084617350 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 23 03:08:01 localhost podman[64373]: 2025-11-23 08:08:01.915385954 +0000 UTC m=+0.097485618 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container) Nov 23 03:08:01 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:08:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:08:06 localhost podman[64392]: 2025-11-23 08:08:06.895969731 +0000 UTC m=+0.082945138 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible) Nov 23 03:08:07 localhost podman[64392]: 2025-11-23 08:08:07.11142326 +0000 UTC m=+0.298398617 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=metrics_qdr, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container) Nov 23 03:08:07 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:08:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:08:30 localhost podman[64421]: 2025-11-23 08:08:30.895939093 +0000 UTC m=+0.082357870 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, version=17.1.12, container_name=collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Nov 23 03:08:30 localhost podman[64421]: 2025-11-23 08:08:30.929842573 +0000 UTC m=+0.116261320 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 
'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, vcs-type=git, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, tcib_managed=true, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4) Nov 23 03:08:30 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:08:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:08:32 localhost systemd[1]: tmp-crun.FIfARB.mount: Deactivated successfully. 
Nov 23 03:08:32 localhost podman[64441]: 2025-11-23 08:08:32.888679075 +0000 UTC m=+0.072995000 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, architecture=x86_64, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, distribution-scope=public) Nov 23 03:08:32 localhost podman[64441]: 2025-11-23 08:08:32.897415426 +0000 UTC m=+0.081731351 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:08:32 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:08:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:08:37 localhost systemd[1]: tmp-crun.oxLKoJ.mount: Deactivated successfully. 
Nov 23 03:08:37 localhost podman[64460]: 2025-11-23 08:08:37.909883198 +0000 UTC m=+0.095633822 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z) Nov 23 03:08:38 localhost podman[64460]: 2025-11-23 08:08:38.10442848 +0000 UTC m=+0.290179064 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, url=https://www.redhat.com, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:08:38 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:09:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:09:01 localhost podman[64565]: 2025-11-23 08:09:01.904552396 +0000 UTC m=+0.089524412 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, distribution-scope=public, architecture=x86_64) Nov 23 03:09:01 localhost podman[64565]: 2025-11-23 08:09:01.913462163 +0000 UTC m=+0.098434149 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 23 03:09:01 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:09:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:09:03 localhost podman[64585]: 2025-11-23 08:09:03.891048835 +0000 UTC m=+0.078156330 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:09:03 localhost podman[64585]: 2025-11-23 08:09:03.898894637 +0000 UTC m=+0.086002172 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, container_name=iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044) Nov 23 03:09:03 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:09:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:09:09 localhost systemd[1]: tmp-crun.Ff4Tre.mount: Deactivated successfully. 
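Each health event also embeds the container's TripleO configuration as a config_data={...} payload printed as a Python-style literal (environment, healthcheck test, net/privileged flags, volume mounts). When that payload is wanted as structured data rather than one long label string, a sketch like the following can recover it; the brace-scanning helper and the trimmed sample string are assumptions for illustration, while the field names match the iscsid entries above.

```python
import ast

def extract_config_data(line: str):
    """Recover the config_data={...} payload embedded in these podman journal
    lines as a Python dict. The payload is printed as a Python literal
    (single quotes, True/False), so ast.literal_eval can parse it once the
    balanced {...} span has been cut out of the line."""
    start = line.find("config_data=")
    if start == -1:
        return None
    i = line.index("{", start)
    depth = 0
    for j, ch in enumerate(line[i:], start=i):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[i:j + 1])
    return None

# Trimmed-down payload in the same shape as the iscsid entries above:
sample = ("... container exec_died a36875cf2511 "
          "(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, "
          "config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
          "'healthcheck': {'test': '/openstack/healthcheck'}, 'net': 'host', "
          "'privileged': True, 'volumes': ['/dev:/dev', '/run:/run']}, "
          "managed_by=tripleo_ansible)")
cfg = extract_config_data(sample)
print(cfg["healthcheck"]["test"], cfg["volumes"])
```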
Nov 23 03:09:09 localhost podman[64604]: 2025-11-23 08:09:09.307113921 +0000 UTC m=+0.089904404 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:09:09 localhost podman[64604]: 2025-11-23 08:09:09.494501801 +0000 UTC m=+0.277292274 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com) Nov 23 03:09:09 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:09:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:09:32 localhost podman[64633]: 2025-11-23 08:09:32.886387416 +0000 UTC m=+0.078029896 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., container_name=collectd, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1) Nov 23 03:09:32 localhost podman[64633]: 2025-11-23 08:09:32.925464645 +0000 UTC m=+0.117107065 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:09:32 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:09:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:09:34 localhost podman[64654]: 2025-11-23 08:09:34.897594845 +0000 UTC m=+0.082235376 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=iscsid, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Nov 23 03:09:34 localhost podman[64654]: 2025-11-23 08:09:34.906420768 +0000 UTC m=+0.091061229 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, distribution-scope=public, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid) Nov 23 03:09:34 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:09:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:09:39 localhost systemd[1]: tmp-crun.SbGmqf.mount: Deactivated successfully. 
Nov 23 03:09:39 localhost podman[64672]: 2025-11-23 08:09:39.915030107 +0000 UTC m=+0.097314332 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 03:09:40 localhost podman[64672]: 2025-11-23 08:09:40.079373243 +0000 UTC m=+0.261657468 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1) Nov 23 03:09:40 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:10:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:10:03 localhost podman[64779]: 2025-11-23 08:10:03.905518976 +0000 UTC m=+0.092151313 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, release=1761123044, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc.) 
Nov 23 03:10:03 localhost podman[64779]: 2025-11-23 08:10:03.942315154 +0000 UTC m=+0.128947491 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, tcib_managed=true) Nov 23 03:10:03 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:10:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:10:05 localhost systemd[1]: tmp-crun.qEzFfP.mount: Deactivated successfully. 
Nov 23 03:10:05 localhost podman[64799]: 2025-11-23 08:10:05.904571208 +0000 UTC m=+0.091163252 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, build-date=2025-11-18T23:44:13Z, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:10:05 localhost podman[64799]: 2025-11-23 08:10:05.914426783 +0000 UTC m=+0.101018857 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, vcs-type=git, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com) Nov 23 03:10:05 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:10:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:10:10 localhost podman[64818]: 2025-11-23 08:10:10.901190216 +0000 UTC m=+0.087624263 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=metrics_qdr, release=1761123044, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 23 03:10:11 localhost podman[64818]: 2025-11-23 08:10:11.116575191 +0000 UTC m=+0.303009238 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, tcib_managed=true, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 03:10:11 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:10:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:10:34 localhost podman[64847]: 2025-11-23 08:10:34.89795999 +0000 UTC m=+0.082197815 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Nov 23 03:10:34 localhost podman[64847]: 2025-11-23 08:10:34.909731304 +0000 UTC m=+0.093969169 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:10:34 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:10:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:10:36 localhost systemd[1]: tmp-crun.adsqaj.mount: Deactivated successfully. 
Nov 23 03:10:36 localhost podman[64866]: 2025-11-23 08:10:36.902326657 +0000 UTC m=+0.093764872 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, url=https://www.redhat.com, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:10:36 localhost podman[64866]: 2025-11-23 08:10:36.910538992 +0000 UTC m=+0.101977177 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid) Nov 23 03:10:36 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:10:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:10:41 localhost podman[64885]: 2025-11-23 08:10:41.902101683 +0000 UTC m=+0.084661981 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, vcs-type=git, container_name=metrics_qdr) Nov 23 03:10:42 localhost podman[64885]: 2025-11-23 08:10:42.089683078 +0000 UTC m=+0.272243306 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:10:42 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:11:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:11:05 localhost systemd[1]: tmp-crun.7cZlWD.mount: Deactivated successfully. 
Nov 23 03:11:05 localhost podman[65040]: 2025-11-23 08:11:05.893654835 +0000 UTC m=+0.083827795 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, build-date=2025-11-18T22:51:28Z, tcib_managed=true, com.redhat.component=openstack-collectd-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:11:05 localhost podman[65040]: 2025-11-23 08:11:05.900699173 +0000 UTC m=+0.090872083 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, container_name=collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:11:05 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:11:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:11:07 localhost systemd[1]: tmp-crun.thl5Kp.mount: Deactivated successfully. 
Nov 23 03:11:07 localhost podman[65060]: 2025-11-23 08:11:07.892397919 +0000 UTC m=+0.082211615 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:11:07 localhost podman[65060]: 2025-11-23 08:11:07.908481796 +0000 UTC m=+0.098295482 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:11:07 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:11:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:11:12 localhost podman[65079]: 2025-11-23 08:11:12.899671666 +0000 UTC m=+0.082459332 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4) Nov 23 03:11:13 localhost podman[65079]: 2025-11-23 08:11:13.116221697 +0000 UTC m=+0.299009363 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true) Nov 23 03:11:13 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
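In the stretch above each container is probed on its own cadence: collectd at 08:10:34 and 08:11:05 UTC, iscsid at 08:10:36 and 08:11:07, metrics_qdr at 08:10:41 and 08:11:12 (the pattern continues below), i.e. roughly 31 seconds apart. A small follow-on to the parser sketched earlier that groups events by container and reports the gap between successive probes; it assumes events are supplied in log order, and the actual healthcheck interval configured for these containers is not recorded in the log:

    from collections import defaultdict
    from datetime import datetime

    def health_intervals(events):
        """events: iterable of (timestamp, name, state) tuples, e.g. from parse_health_events().
        Returns {container name: [seconds between successive health checks]}."""
        stamps = defaultdict(list)
        for ts, name, _state in events:
            # The journal timestamps above carry 8 fractional digits; trim to 6 for strptime's %f.
            whole, frac = ts.split(".")
            stamps[name].append(datetime.strptime(f"{whole}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f"))
        return {
            name: [(b - a).total_seconds() for a, b in zip(ts_list, ts_list[1:])]
            for name, ts_list in stamps.items()
        }

    # For the collectd probes above (08:10:34.898 and 08:11:05.894) this yields a gap of roughly 31 s.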
Nov 23 03:11:24 localhost python3[65155]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:11:24 localhost python3[65200]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885484.2479033-107578-61017508075417/source _original_basename=tmp_ry_ujig follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:11:26 localhost python3[65262]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:11:26 localhost python3[65305]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885485.6863074-107667-65325379929362/source _original_basename=tmphwzsy272 follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:11:26 localhost python3[65367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:11:27 localhost python3[65410]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885486.5721717-107764-3853216694309/source _original_basename=tmpjocmkhgh follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:11:27 localhost python3[65472]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:11:28 localhost python3[65515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885487.4925544-107823-107003325644577/source _original_basename=tmpoybr2y1h follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:11:28 localhost python3[65545]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Nov 23 03:11:28 localhost systemd[1]: Reloading. 
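Each ansible.legacy.copy invocation above logs the destination path, the mode it set, and a SHA-1 checksum of the content it wrote (the preceding stat calls use checksum_algorithm=sha1). A standard-library sketch for re-checking a deployed file against those logged values; the path, checksum and mode in the usage comment are copied from the entries above:

    import hashlib
    import os
    import stat

    def verify_deployed_file(path, expected_sha1, expected_mode):
        """Recompute a file's SHA-1 and permission bits and compare them with the
        checksum= and mode= values logged by ansible.legacy.copy."""
        with open(path, "rb") as f:
            actual_sha1 = hashlib.sha1(f.read()).hexdigest()
        actual_mode = stat.S_IMODE(os.stat(path).st_mode)
        return actual_sha1 == expected_sha1 and actual_mode == int(expected_mode, 8)

    # Values taken from the log above:
    # verify_deployed_file("/usr/libexec/recover_tripleo_nova_virtqemud.sh",
    #                      "922b8aa8342176110bffc2e39abdccc2b39e53a9", "0755")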
Nov 23 03:11:28 localhost systemd-rc-local-generator[65570]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:11:28 localhost systemd-sysv-generator[65574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:11:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:11:29 localhost systemd[1]: Reloading. Nov 23 03:11:29 localhost systemd-sysv-generator[65609]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:11:29 localhost systemd-rc-local-generator[65606]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:11:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:11:29 localhost python3[65635]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:11:29 localhost systemd[1]: Reloading. Nov 23 03:11:29 localhost systemd-rc-local-generator[65660]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:11:29 localhost systemd-sysv-generator[65666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:11:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:11:30 localhost systemd[1]: Reloading. Nov 23 03:11:30 localhost systemd-sysv-generator[65705]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:11:30 localhost systemd-rc-local-generator[65699]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:11:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:11:30 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m. Nov 23 03:11:30 localhost python3[65727]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 03:11:30 localhost systemd[1]: Reloading. Nov 23 03:11:30 localhost systemd-rc-local-generator[65753]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:11:30 localhost systemd-sysv-generator[65758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:11:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:11:31 localhost python3[65811]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:11:32 localhost python3[65854]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885491.2669678-107956-53450907393818/source _original_basename=tmp9yrz0ijb follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:11:32 localhost python3[65884]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:11:32 localhost systemd[1]: Reloading. Nov 23 03:11:32 localhost systemd-sysv-generator[65912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:11:32 localhost systemd-rc-local-generator[65908]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:11:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:11:32 localhost systemd[1]: Reached target tripleo_nova_libvirt.target. Nov 23 03:11:33 localhost python3[65939]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:11:34 localhost sshd[66054]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:11:34 localhost ansible-async_wrapper.py[66112]: Invoked with 743794509084 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885494.3741748-108075-193168537250645/AnsiballZ_command.py _ Nov 23 03:11:34 localhost ansible-async_wrapper.py[66115]: Starting module and watcher Nov 23 03:11:34 localhost ansible-async_wrapper.py[66115]: Start watching 66116 (3600) Nov 23 03:11:34 localhost ansible-async_wrapper.py[66116]: Start module (66116) Nov 23 03:11:34 localhost ansible-async_wrapper.py[66112]: Return async_wrapper task started. Nov 23 03:11:35 localhost python3[66136]: ansible-ansible.legacy.async_status Invoked with jid=743794509084.66112 mode=status _async_dir=/tmp/.ansible_async Nov 23 03:11:36 localhost sshd[66154]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:11:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
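The tasks above drop a recovery script, a service unit and a timer unit into place, enable the timer through the ansible-systemd module and again with an explicit systemctl enable --now, and systemd confirms it as "Check and recover tripleo_nova_virtqemud every 10m". A quick way to confirm the timer from Python after the fact; both systemctl subcommands used here are standard, and the unit name is the one installed above:

    import subprocess

    def timer_status(unit="tripleo_nova_virtqemud_recover.timer"):
        """Return the enablement state of the recovery timer and systemd's scheduling view of it."""
        enabled = subprocess.run(["systemctl", "is-enabled", unit],
                                 capture_output=True, text=True).stdout.strip()
        schedule = subprocess.run(["systemctl", "list-timers", unit, "--no-pager"],
                                  capture_output=True, text=True).stdout
        return enabled, schedule

    # enabled, schedule = timer_status()
    # print(enabled)    # expected: "enabled" after the tasks above
    # print(schedule)   # the NEXT/LEFT columns reflect the ~10 minute cadence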
Nov 23 03:11:36 localhost podman[66157]: 2025-11-23 08:11:36.888311898 +0000 UTC m=+0.084506456 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step3, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, container_name=collectd, release=1761123044, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible) Nov 23 03:11:36 localhost podman[66157]: 2025-11-23 08:11:36.902415584 +0000 UTC m=+0.098610202 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
collectd, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:11:36 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:11:38 localhost puppet-user[66120]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 03:11:38 localhost puppet-user[66120]: (file: /etc/puppet/hiera.yaml) Nov 23 03:11:38 localhost puppet-user[66120]: Warning: Undefined variable '::deploy_config_name'; Nov 23 03:11:38 localhost puppet-user[66120]: (file & line not available) Nov 23 03:11:38 localhost puppet-user[66120]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 03:11:38 localhost puppet-user[66120]: (file & line not available) Nov 23 03:11:38 localhost puppet-user[66120]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Nov 23 03:11:38 localhost puppet-user[66120]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:11:38 localhost puppet-user[66120]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:11:38 localhost puppet-user[66120]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 23 03:11:38 localhost puppet-user[66120]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:11:38 localhost puppet-user[66120]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:11:38 localhost puppet-user[66120]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 23 03:11:38 localhost puppet-user[66120]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:11:38 localhost puppet-user[66120]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:11:38 localhost puppet-user[66120]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 23 03:11:38 localhost puppet-user[66120]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:11:38 localhost puppet-user[66120]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:11:38 localhost puppet-user[66120]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 23 03:11:38 localhost puppet-user[66120]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:11:38 localhost puppet-user[66120]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:11:38 localhost puppet-user[66120]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 23 03:11:38 localhost puppet-user[66120]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:11:38 localhost puppet-user[66120]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:11:38 localhost puppet-user[66120]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Nov 23 03:11:38 localhost puppet-user[66120]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.21 seconds Nov 23 03:11:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:11:38 localhost systemd[1]: tmp-crun.KIZRAW.mount: Deactivated successfully. 
Nov 23 03:11:38 localhost podman[66275]: 2025-11-23 08:11:38.900732986 +0000 UTC m=+0.084309301 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 23 03:11:38 localhost podman[66275]: 2025-11-23 08:11:38.936512583 +0000 UTC m=+0.120088838 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid) Nov 23 03:11:38 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:11:39 localhost ansible-async_wrapper.py[66115]: 66116 still running (3600) Nov 23 03:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:11:43 localhost systemd[1]: tmp-crun.uvrmAe.mount: Deactivated successfully. 
Nov 23 03:11:43 localhost podman[66303]: 2025-11-23 08:11:43.906949491 +0000 UTC m=+0.088877641 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, container_name=metrics_qdr) Nov 23 03:11:44 localhost podman[66303]: 2025-11-23 08:11:44.128422025 +0000 UTC m=+0.310350185 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, vcs-type=git, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, architecture=x86_64) Nov 23 03:11:44 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:11:44 localhost ansible-async_wrapper.py[66115]: 66116 still running (3595) Nov 23 03:11:45 localhost python3[66404]: ansible-ansible.legacy.async_status Invoked with jid=743794509084.66112 mode=status _async_dir=/tmp/.ansible_async Nov 23 03:11:47 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 03:11:47 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 03:11:47 localhost systemd[1]: Reloading. Nov 23 03:11:47 localhost systemd-rc-local-generator[66526]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:11:47 localhost systemd-sysv-generator[66529]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:11:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:11:47 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 03:11:48 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 23 03:11:48 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 03:11:48 localhost systemd[1]: man-db-cache-update.service: Consumed 1.247s CPU time. Nov 23 03:11:48 localhost systemd[1]: run-rdf8592f4238040328fd16739360059b8.service: Deactivated successfully. 
Nov 23 03:11:48 localhost puppet-user[66120]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created Nov 23 03:11:48 localhost puppet-user[66120]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}21f879f0884d6ddd27e1e91a98437d2cecee91859f1fc8c389aa296003f63bc6' Nov 23 03:11:48 localhost puppet-user[66120]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd' Nov 23 03:11:48 localhost puppet-user[66120]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea' Nov 23 03:11:48 localhost puppet-user[66120]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97' Nov 23 03:11:48 localhost puppet-user[66120]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events Nov 23 03:11:49 localhost ansible-async_wrapper.py[66115]: 66116 still running (3590) Nov 23 03:11:50 localhost sshd[67506]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:11:53 localhost puppet-user[66120]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully Nov 23 03:11:54 localhost systemd[1]: Reloading. Nov 23 03:11:54 localhost systemd-rc-local-generator[67595]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:11:54 localhost systemd-sysv-generator[67599]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:11:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:11:54 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon.... Nov 23 03:11:54 localhost snmpd[67609]: Can't find directory of RPM packages Nov 23 03:11:54 localhost snmpd[67609]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB Nov 23 03:11:54 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon.. Nov 23 03:11:54 localhost systemd[1]: Reloading. Nov 23 03:11:54 localhost systemd-rc-local-generator[67632]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:11:54 localhost systemd-sysv-generator[67636]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:11:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:11:54 localhost systemd[1]: Reloading. Nov 23 03:11:54 localhost systemd-rc-local-generator[67673]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 23 03:11:54 localhost systemd-sysv-generator[67676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:11:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:11:54 localhost ansible-async_wrapper.py[66115]: 66116 still running (3585) Nov 23 03:11:55 localhost puppet-user[66120]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running' Nov 23 03:11:55 localhost puppet-user[66120]: Notice: Applied catalog in 16.48 seconds Nov 23 03:11:55 localhost puppet-user[66120]: Application: Nov 23 03:11:55 localhost puppet-user[66120]: Initial environment: production Nov 23 03:11:55 localhost puppet-user[66120]: Converged environment: production Nov 23 03:11:55 localhost puppet-user[66120]: Run mode: user Nov 23 03:11:55 localhost puppet-user[66120]: Changes: Nov 23 03:11:55 localhost puppet-user[66120]: Total: 8 Nov 23 03:11:55 localhost puppet-user[66120]: Events: Nov 23 03:11:55 localhost puppet-user[66120]: Success: 8 Nov 23 03:11:55 localhost puppet-user[66120]: Total: 8 Nov 23 03:11:55 localhost puppet-user[66120]: Resources: Nov 23 03:11:55 localhost puppet-user[66120]: Restarted: 1 Nov 23 03:11:55 localhost puppet-user[66120]: Changed: 8 Nov 23 03:11:55 localhost puppet-user[66120]: Out of sync: 8 Nov 23 03:11:55 localhost puppet-user[66120]: Total: 19 Nov 23 03:11:55 localhost puppet-user[66120]: Time: Nov 23 03:11:55 localhost puppet-user[66120]: Schedule: 0.00 Nov 23 03:11:55 localhost puppet-user[66120]: Augeas: 0.01 Nov 23 03:11:55 localhost puppet-user[66120]: File: 0.09 Nov 23 03:11:55 localhost puppet-user[66120]: Config retrieval: 0.28 Nov 23 03:11:55 localhost puppet-user[66120]: Service: 1.16 Nov 23 03:11:55 localhost puppet-user[66120]: Transaction evaluation: 16.47 Nov 23 03:11:55 localhost puppet-user[66120]: Catalog application: 16.48 Nov 23 03:11:55 localhost puppet-user[66120]: Last run: 1763885515 Nov 23 03:11:55 localhost puppet-user[66120]: Exec: 5.06 Nov 23 03:11:55 localhost puppet-user[66120]: Filebucket: 0.00 Nov 23 03:11:55 localhost puppet-user[66120]: Package: 9.97 Nov 23 03:11:55 localhost puppet-user[66120]: Total: 16.48 Nov 23 03:11:55 localhost puppet-user[66120]: Version: Nov 23 03:11:55 localhost puppet-user[66120]: Config: 1763885498 Nov 23 03:11:55 localhost puppet-user[66120]: Puppet: 7.10.0 Nov 23 03:11:55 localhost ansible-async_wrapper.py[66116]: Module complete (66116) Nov 23 03:11:55 localhost python3[67712]: ansible-ansible.legacy.async_status Invoked with jid=743794509084.66112 mode=status _async_dir=/tmp/.ansible_async Nov 23 03:11:56 localhost python3[67728]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 03:11:56 localhost python3[67744]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 
23 03:11:57 localhost python3[67794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:11:57 localhost python3[67812]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp1_t5qd8g recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 03:11:58 localhost python3[67842]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:11:59 localhost python3[67945]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Nov 23 03:11:59 localhost ansible-async_wrapper.py[66115]: Done in kid B. 
Nov 23 03:12:00 localhost python3[67964]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:01 localhost python3[67996]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:12:01 localhost python3[68046]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:12:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:12:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4944 writes, 22K keys, 4944 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4944 writes, 570 syncs, 8.67 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 46 writes, 60 keys, 46 commit groups, 1.0 writes per commit group, ingest: 0.02 MB, 0.00 MB/s#012Interval WAL: 46 writes, 23 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 03:12:02 localhost python3[68064]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:02 localhost python3[68126]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:12:02 localhost python3[68144]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:03 localhost python3[68206]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:12:03 localhost python3[68224]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:04 localhost python3[68286]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:12:04 localhost python3[68304]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:04 localhost python3[68334]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:12:04 localhost systemd[1]: Reloading. Nov 23 03:12:05 localhost systemd-rc-local-generator[68355]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:12:05 localhost systemd-sysv-generator[68360]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 03:12:05 localhost python3[68420]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:12:06 localhost python3[68438]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:12:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4667 writes, 21K keys, 4667 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4667 writes, 461 syncs, 10.12 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 34 writes, 56 keys, 34 commit groups, 1.0 writes per commit group, ingest: 0.02 MB, 0.00 MB/s#012Interval WAL: 34 writes, 17 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 03:12:06 localhost python3[68500]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:12:06 localhost python3[68518]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:12:07 localhost systemd[1]: tmp-crun.sttgu1.mount: Deactivated successfully. 
Nov 23 03:12:07 localhost podman[68548]: 2025-11-23 08:12:07.308291117 +0000 UTC m=+0.099978755 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, container_name=collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Nov 23 03:12:07 localhost podman[68548]: 2025-11-23 08:12:07.323407684 +0000 UTC m=+0.115095302 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, version=17.1.12, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Nov 23 03:12:07 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:12:07 localhost python3[68549]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:12:07 localhost systemd[1]: Reloading. Nov 23 03:12:07 localhost systemd-rc-local-generator[68595]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:12:07 localhost systemd-sysv-generator[68598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:12:07 localhost systemd[1]: Starting Create netns directory... Nov 23 03:12:07 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 23 03:12:07 localhost systemd[1]: Finished Create netns directory. Nov 23 03:12:08 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
Nov 23 03:12:08 localhost python3[68626]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 23 03:12:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:12:09 localhost systemd[1]: tmp-crun.Yzq7ZZ.mount: Deactivated successfully. Nov 23 03:12:09 localhost podman[68670]: 2025-11-23 08:12:09.928584875 +0000 UTC m=+0.110531591 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z) Nov 23 03:12:09 localhost podman[68670]: 2025-11-23 08:12:09.939191944 +0000 UTC m=+0.121138660 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, container_name=iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12) Nov 23 03:12:09 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:12:10 localhost python3[68704]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Nov 23 03:12:10 localhost podman[68860]: 2025-11-23 08:12:10.572610346 +0000 UTC m=+0.073124544 container create 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:12:10 localhost podman[68870]: 2025-11-23 08:12:10.607283819 +0000 UTC m=+0.092931838 container create de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, container_name=nova_libvirt_init_secret, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com) Nov 23 03:12:10 localhost systemd[1]: Started libpod-conmon-131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.scope. Nov 23 03:12:10 localhost systemd[1]: Started libpod-conmon-de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840.scope. Nov 23 03:12:10 localhost podman[68860]: 2025-11-23 08:12:10.53491632 +0000 UTC m=+0.035430538 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Nov 23 03:12:10 localhost systemd[1]: Started libcrun container. 
Nov 23 03:12:10 localhost podman[68921]: 2025-11-23 08:12:10.643526061 +0000 UTC m=+0.072220156 container create c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, name=rhosp17/openstack-cron, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 23 03:12:10 localhost podman[68869]: 2025-11-23 08:12:10.545715494 +0000 UTC m=+0.037106970 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Nov 23 03:12:10 localhost systemd[1]: Started libcrun container. 
Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0121206d5651924911ca1f3a8713f12deef5f18ac2e527773cfa01e24243f70/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec08b5fa040dfb6133696a337502d9c613a89061aa958f367079d48924e617/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec08b5fa040dfb6133696a337502d9c613a89061aa958f367079d48924e617/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ec08b5fa040dfb6133696a337502d9c613a89061aa958f367079d48924e617/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:10 localhost podman[68870]: 2025-11-23 08:12:10.659131743 +0000 UTC m=+0.144779772 container init de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, container_name=nova_libvirt_init_secret, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible) Nov 23 03:12:10 localhost podman[68870]: 2025-11-23 08:12:10.565549198 +0000 UTC m=+0.051197207 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 23 03:12:10 localhost podman[68870]: 2025-11-23 08:12:10.672633311 +0000 UTC m=+0.158281320 container start de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=nova_libvirt_init_secret, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, batch=17.1_20251118.1) Nov 23 03:12:10 localhost podman[68870]: 2025-11-23 08:12:10.672968151 +0000 UTC m=+0.158616160 container attach de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, config_data={'cgroupns': 'host', 'command': 
'/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_libvirt_init_secret, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:12:10 localhost systemd[1]: Started libpod-conmon-c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.scope. Nov 23 03:12:10 localhost systemd[1]: Started libcrun container. Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c555bb6d05f3d1ef69b807da8d7b417226dccb2e4af3d5892e31108d455684e/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:10 localhost podman[68861]: 2025-11-23 08:12:10.696762268 +0000 UTC m=+0.190648791 container create 0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, url=https://www.redhat.com, container_name=configure_cms_options, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1) Nov 23 03:12:10 localhost podman[68921]: 2025-11-23 08:12:10.613212973 +0000 UTC m=+0.041907088 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Nov 23 03:12:10 localhost podman[68869]: 2025-11-23 08:12:10.721268106 +0000 UTC m=+0.212659532 container create 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, container_name=ceilometer_agent_ipmi) Nov 23 03:12:10 localhost systemd[1]: Started libpod-conmon-0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262.scope. Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:12:10 localhost podman[68860]: 2025-11-23 08:12:10.736832298 +0000 UTC m=+0.237346496 container init 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
architecture=x86_64, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:12:10 localhost systemd[1]: Started libpod-conmon-21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.scope. Nov 23 03:12:10 localhost systemd[1]: Started libcrun container. Nov 23 03:12:10 localhost systemd[1]: Started libcrun container. Nov 23 03:12:10 localhost podman[68861]: 2025-11-23 08:12:10.655338636 +0000 UTC m=+0.149225169 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:12:10 localhost podman[68860]: 2025-11-23 08:12:10.759604273 +0000 UTC m=+0.260118481 container start 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, tcib_managed=true, description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:11:48Z) Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a88026e8435ee6e4a9cdaa4ab5e7c8d8b76dc6fc1517ed344c4771e775bf72d/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:10 localhost python3[68704]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=cdd192006d3eee4976a7ad00d48f6c64 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:12:10 localhost podman[68921]: 2025-11-23 08:12:10.773277806 +0000 UTC m=+0.201971891 container init c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. 
Nov 23 03:12:10 localhost podman[68869]: 2025-11-23 08:12:10.78342374 +0000 UTC m=+0.274815186 container init 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4) Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. 
Nov 23 03:12:10 localhost podman[68921]: 2025-11-23 08:12:10.806199335 +0000 UTC m=+0.234893420 container start c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, container_name=logrotate_crond, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron) Nov 23 03:12:10 localhost python3[68704]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Nov 23 03:12:10 localhost systemd[1]: libpod-de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840.scope: Deactivated successfully. Nov 23 03:12:10 localhost podman[68869]: 2025-11-23 08:12:10.858999219 +0000 UTC m=+0.350390645 container start 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Nov 23 03:12:10 localhost python3[68704]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=cdd192006d3eee4976a7ad00d48f6c64 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Nov 23 03:12:10 localhost podman[68861]: 2025-11-23 08:12:10.866552382 +0000 UTC m=+0.360438895 container init 0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 
'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=configure_cms_options, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team) Nov 23 03:12:10 localhost podman[68861]: 2025-11-23 08:12:10.875906152 +0000 UTC m=+0.369792665 container start 0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=configure_cms_options, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:12:10 localhost podman[68861]: 2025-11-23 08:12:10.876109628 +0000 UTC m=+0.369996141 container attach 0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12) Nov 23 03:12:10 localhost podman[68870]: 2025-11-23 08:12:10.896659394 +0000 UTC m=+0.382307423 container died de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, tcib_managed=true, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, container_name=nova_libvirt_init_secret, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12) Nov 23 03:12:10 localhost systemd[1]: tmp-crun.nAStFv.mount: Deactivated successfully. Nov 23 03:12:10 localhost podman[68997]: 2025-11-23 08:12:10.942008817 +0000 UTC m=+0.138377143 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:12:10 localhost ovs-vsctl[69086]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options Nov 23 03:12:11 localhost podman[68997]: 2025-11-23 08:12:11.016910635 +0000 UTC m=+0.213278911 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, container_name=logrotate_crond, io.buildah.version=1.41.4) Nov 23 03:12:11 localhost systemd[1]: libpod-0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262.scope: Deactivated successfully. Nov 23 03:12:11 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:12:11 localhost podman[69006]: 2025-11-23 08:12:11.043812357 +0000 UTC m=+0.233177906 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044) Nov 23 03:12:11 localhost podman[69037]: 2025-11-23 08:12:11.089613455 +0000 UTC m=+0.232972830 container cleanup de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=nova_libvirt_init_secret, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team) Nov 23 03:12:11 localhost systemd[1]: libpod-conmon-de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840.scope: Deactivated successfully. 
Nov 23 03:12:11 localhost python3[68704]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=b43218eec4380850a20e0a337fdcf6cf --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Nov 23 03:12:11 localhost podman[69006]: 2025-11-23 08:12:11.110920174 +0000 UTC m=+0.300285723 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044) Nov 23 03:12:11 localhost podman[69006]: unhealthy Nov 23 03:12:11 localhost podman[68974]: 2025-11-23 08:12:10.865536731 +0000 UTC m=+0.098648174 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:12:11 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:12:11 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Failed with result 'exit-code'. Nov 23 03:12:11 localhost podman[68861]: 2025-11-23 08:12:11.175932147 +0000 UTC m=+0.669818690 container died 0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, container_name=configure_cms_options, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc.) Nov 23 03:12:11 localhost podman[69098]: 2025-11-23 08:12:11.201926211 +0000 UTC m=+0.168463984 container cleanup 0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=configure_cms_options, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public) Nov 23 03:12:11 localhost systemd[1]: libpod-conmon-0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262.scope: Deactivated successfully. Nov 23 03:12:11 localhost python3[68704]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi Nov 23 03:12:11 localhost podman[68974]: 2025-11-23 08:12:11.27815815 +0000 UTC m=+0.511269633 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64) Nov 23 03:12:11 localhost podman[68974]: unhealthy Nov 23 03:12:11 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:12:11 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Failed with result 'exit-code'. 
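For readability, the one-liner that the configure_cms_options container executes (it appears twice above, in the container's config_data label and in the PODMAN-CONTAINER-DEBUG line) can be restated as a commented shell sketch. Nothing is added beyond what the log records; the hiera lookup key and ovs-vsctl calls are exactly those logged, and the quoting of $CMS_OPTS is an editorial safety assumption, not part of the original command.

  #!/bin/bash
  # Look up the OVN CMS options Puppet/hiera holds for this node.
  CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml)
  if [ X"$CMS_OPTS" != X ]; then
      # A value exists: publish it so ovn-controller picks it up.
      # (Quoting added here for safety; the logged command passes it unquoted.)
      ovs-vsctl set open . external_ids:ovn-cms-options="$CMS_OPTS"
  else
      # No value configured: clear any stale option.
      ovs-vsctl remove open . external_ids ovn-cms-options
  fi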
Nov 23 03:12:11 localhost podman[69236]: 2025-11-23 08:12:11.431722282 +0000 UTC m=+0.072837815 container create c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, container_name=nova_migration_target, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:12:11 localhost systemd[1]: Started libpod-conmon-c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.scope. Nov 23 03:12:11 localhost systemd[1]: Started libcrun container. 
Nov 23 03:12:11 localhost podman[69236]: 2025-11-23 08:12:11.399994901 +0000 UTC m=+0.041110464 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 23 03:12:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e303e2a487b3de65e20e02c06253184ba4537ed64f53b2bdbdf3a08756ea60/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:11 localhost podman[69260]: 2025-11-23 08:12:11.512156802 +0000 UTC m=+0.061503885 container create 95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:12:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 03:12:11 localhost podman[69236]: 2025-11-23 08:12:11.538357462 +0000 UTC m=+0.179473025 container init c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=nova_migration_target, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com) Nov 23 03:12:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 03:12:11 localhost podman[69236]: 2025-11-23 08:12:11.567963179 +0000 UTC m=+0.209078732 container start c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:12:11 localhost python3[68704]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b43218eec4380850a20e0a337fdcf6cf --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 23 03:12:11 localhost podman[69260]: 2025-11-23 08:12:11.485340041 +0000 UTC m=+0.034687154 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Nov 23 03:12:11 localhost systemd[1]: Started libpod-conmon-95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb.scope. Nov 23 03:12:11 localhost systemd[1]: Started libcrun container. 
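The nova_migration_target entries show the health-check pattern used throughout this log: the container is created with --healthcheck-command /openstack/healthcheck, and systemd transient units named after the full container ID invoke "podman healthcheck run <id>" on a timer (the earlier "unhealthy" / status=1/FAILURE lines for ceilometer_agent_compute are one such probe failing). A minimal sketch of repeating that probe by hand on this host, using only names and IDs that appear in the log:

  # Run the container's own healthcheck once; it prints "unhealthy" and exits
  # non-zero when the check fails (as seen above) and exits 0 when it passes.
  # This is the same call the "/usr/bin/podman healthcheck run <id>" units make.
  podman healthcheck run nova_migration_target

  # The transient systemd unit is named after the full container ID, so the
  # last probe's result is also visible there:
  systemctl status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service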
Nov 23 03:12:11 localhost podman[69284]: 2025-11-23 08:12:11.665137625 +0000 UTC m=+0.092168913 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:12:11 localhost podman[69260]: 2025-11-23 08:12:11.675971291 +0000 UTC m=+0.225318404 container init 95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=setup_ovs_manager, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Nov 23 03:12:11 localhost podman[69260]: 2025-11-23 08:12:11.74673274 +0000 UTC m=+0.296079813 container start 95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, container_name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1) Nov 23 03:12:11 localhost podman[69260]: 2025-11-23 08:12:11.74801499 +0000 UTC m=+0.297362083 container attach 95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=setup_ovs_manager, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:12:11 localhost systemd[1]: tmp-crun.YPJSgd.mount: Deactivated successfully. Nov 23 03:12:11 localhost systemd[1]: var-lib-containers-storage-overlay-8642444f4e7def654fabd6c894b985d447e27b38f6a08db221eca03ebf9926cd-merged.mount: Deactivated successfully. Nov 23 03:12:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262-userdata-shm.mount: Deactivated successfully. Nov 23 03:12:11 localhost systemd[1]: var-lib-containers-storage-overlay-d9ec08b5fa040dfb6133696a337502d9c613a89061aa958f367079d48924e617-merged.mount: Deactivated successfully. Nov 23 03:12:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840-userdata-shm.mount: Deactivated successfully. Nov 23 03:12:12 localhost podman[69284]: 2025-11-23 08:12:12.005309623 +0000 UTC m=+0.432340871 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:12:12 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:12:12 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Nov 23 03:12:14 localhost ovs-vsctl[69466]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Nov 23 03:12:14 localhost systemd[1]: libpod-95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb.scope: Deactivated successfully. Nov 23 03:12:14 localhost systemd[1]: libpod-95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb.scope: Consumed 2.938s CPU time. Nov 23 03:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:12:14 localhost podman[69472]: 2025-11-23 08:12:14.746533533 +0000 UTC m=+0.057856280 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
io.openshift.expose-services=) Nov 23 03:12:14 localhost podman[69467]: 2025-11-23 08:12:14.783106775 +0000 UTC m=+0.110743347 container died 95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=setup_ovs_manager, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:12:14 localhost systemd[1]: tmp-crun.lsMiJx.mount: Deactivated successfully. 
Nov 23 03:12:14 localhost podman[69467]: 2025-11-23 08:12:14.843226006 +0000 UTC m=+0.170862568 container cleanup 95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=setup_ovs_manager, build-date=2025-11-19T00:14:25Z, tcib_managed=true) Nov 23 03:12:14 localhost systemd[1]: libpod-conmon-95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb.scope: Deactivated successfully. 
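The net effect of setup_ovs_manager is the single ovs-vsctl transaction logged just above: it registers a passive TCP manager on 127.0.0.1:6640 so OVSDB clients on this host (for example the OVN metadata agent started below) can connect. Restated as a standalone command, copied from the ovs-vsctl log line and only re-wrapped across lines:

  # One ovs-vsctl transaction: create a Manager row listening on
  # ptcp:6640:127.0.0.1 and attach it to the Open_vSwitch table's
  # manager_options column ("--" chains the two sub-commands).
  ovs-vsctl --timeout=5 --id=@manager -- \
      create Manager "target=\"ptcp:6640:127.0.0.1\"" -- \
      add Open_vSwitch . manager_options @manager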
Nov 23 03:12:14 localhost python3[68704]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Nov 23 03:12:14 localhost podman[69472]: 2025-11-23 08:12:14.94549035 +0000 UTC m=+0.256813137 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Nov 23 03:12:14 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:12:15 localhost podman[69611]: 2025-11-23 08:12:15.375944922 +0000 UTC m=+0.106458676 container create e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:12:15 localhost podman[69612]: 2025-11-23 08:12:15.39818433 +0000 UTC m=+0.122508272 container create f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, version=17.1.12, vendor=Red Hat, Inc.) 
Nov 23 03:12:15 localhost podman[69611]: 2025-11-23 08:12:15.317499033 +0000 UTC m=+0.048012807 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 23 03:12:15 localhost podman[69612]: 2025-11-23 08:12:15.324889452 +0000 UTC m=+0.049213434 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Nov 23 03:12:15 localhost systemd[1]: Started libpod-conmon-e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.scope. Nov 23 03:12:15 localhost systemd[1]: Started libpod-conmon-f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.scope. Nov 23 03:12:15 localhost systemd[1]: Started libcrun container. Nov 23 03:12:15 localhost systemd[1]: Started libcrun container. Nov 23 03:12:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cb65f3d881d4025c031ad58fd79dbe3fe721b3499b4f6bf264330e3666efe6/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cb65f3d881d4025c031ad58fd79dbe3fe721b3499b4f6bf264330e3666efe6/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cb65f3d881d4025c031ad58fd79dbe3fe721b3499b4f6bf264330e3666efe6/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba8ac6fa9e418015cb2e81f4c2ff1b5d5fce1183c5cd7f7a69873fc39e13080/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba8ac6fa9e418015cb2e81f4c2ff1b5d5fce1183c5cd7f7a69873fc39e13080/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba8ac6fa9e418015cb2e81f4c2ff1b5d5fce1183c5cd7f7a69873fc39e13080/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:12:15 localhost podman[69612]: 2025-11-23 08:12:15.484724528 +0000 UTC m=+0.209048540 container init f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4) Nov 23 03:12:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:12:15 localhost podman[69612]: 2025-11-23 08:12:15.524625123 +0000 UTC m=+0.248949055 container start f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 23 03:12:15 localhost python3[68704]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=aab643b40a0a602c64733b2a96099834 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Nov 23 03:12:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. 
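The PODMAN-CONTAINER-DEBUG entry above records the exact podman run invocation tripleo_ansible issued for ovn_metadata_agent. A quick cross-check of the resulting container against that command line, using only the container name from the log (a diagnostic sketch, not part of the deployment itself):

    # Is the container up, and under which image?
    podman ps --filter name=ovn_metadata_agent --format '{{.Names}} {{.Image}} {{.Status}}'
    # Compare the mounted volumes with the --volume flags logged above
    podman inspect ovn_metadata_agent --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'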
Nov 23 03:12:15 localhost podman[69611]: 2025-11-23 08:12:15.546342205 +0000 UTC m=+0.276855949 container init e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, release=1761123044, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:12:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. 
Nov 23 03:12:15 localhost podman[69611]: 2025-11-23 08:12:15.581220094 +0000 UTC m=+0.311733838 container start e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, batch=17.1_20251118.1, container_name=ovn_controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Nov 23 03:12:15 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Nov 23 03:12:15 localhost python3[68704]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 23 03:12:15 localhost systemd[1]: Created slice User Slice of UID 0. Nov 23 03:12:15 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Nov 23 03:12:15 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Nov 23 03:12:15 localhost systemd[1]: Starting User Manager for UID 0... 
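For ovn_controller the logged healthcheck test is '/openstack/healthcheck 6642', apparently a check against port 6642, the usual OVN southbound port. The same check can be exercised by hand; the container name and script path are taken from the log, the rest is a sketch:

    # Invoke the container's own healthcheck script directly
    podman exec ovn_controller /openstack/healthcheck 6642; echo "healthcheck exit code: $?"
    # Or let podman drive it exactly as the transient systemd unit does
    podman healthcheck run ovn_controller && echo healthy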
Nov 23 03:12:15 localhost podman[69676]: 2025-11-23 08:12:15.719635278 +0000 UTC m=+0.131188141 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:12:15 localhost podman[69654]: 2025-11-23 08:12:15.623208033 +0000 UTC m=+0.092798832 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1) Nov 23 03:12:15 localhost systemd[1]: var-lib-containers-storage-overlay-30826345d318533efcd6d35f8914ab0003e05b6751a7a5bc7b1bbeb3898fc84c-merged.mount: Deactivated successfully. Nov 23 03:12:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95bc854e7dd2c0199caf99a32bcf18805cd101eb4423bd2ebb437260acbe7dbb-userdata-shm.mount: Deactivated successfully. 
Nov 23 03:12:15 localhost podman[69654]: 2025-11-23 08:12:15.75331673 +0000 UTC m=+0.222907439 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) 
Nov 23 03:12:15 localhost podman[69654]: unhealthy Nov 23 03:12:15 localhost podman[69676]: 2025-11-23 08:12:15.760503122 +0000 UTC m=+0.172055965 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12) Nov 23 03:12:15 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:12:15 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:12:15 localhost podman[69676]: unhealthy Nov 23 03:12:15 localhost systemd[69698]: Queued start job for default target Main User Target. Nov 23 03:12:15 localhost systemd[69698]: Created slice User Application Slice. Nov 23 03:12:15 localhost systemd[69698]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Nov 23 03:12:15 localhost systemd[69698]: Started Daily Cleanup of User's Temporary Directories. Nov 23 03:12:15 localhost systemd[69698]: Reached target Paths. Nov 23 03:12:15 localhost systemd[69698]: Reached target Timers. Nov 23 03:12:15 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:12:15 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:12:15 localhost systemd[69698]: Starting D-Bus User Message Bus Socket... 
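Both healthcheck runs above report health_status=starting and then "unhealthy", and the transient f03c…/e8a4… healthcheck services exit with status 1, most likely because they fired within a fraction of a second of "container start", before the agents had finished initializing. A re-check after the services settle (container names and the stdout log path are the ones recorded earlier in this log; the commands are a diagnostic sketch):

    # Re-run the healthchecks once the agents have had time to come up
    podman healthcheck run ovn_metadata_agent && echo ovn_metadata_agent healthy
    podman healthcheck run ovn_controller && echo ovn_controller healthy
    # Inspect the agents' stdout if they stay unhealthy
    podman logs --tail 50 ovn_metadata_agent
    tail -n 50 /var/log/containers/stdouts/ovn_controller.log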
Nov 23 03:12:15 localhost systemd[69698]: Starting Create User's Volatile Files and Directories... Nov 23 03:12:15 localhost systemd[69698]: Listening on D-Bus User Message Bus Socket. Nov 23 03:12:15 localhost systemd[69698]: Finished Create User's Volatile Files and Directories. Nov 23 03:12:15 localhost systemd[69698]: Reached target Sockets. Nov 23 03:12:15 localhost systemd[69698]: Reached target Basic System. Nov 23 03:12:15 localhost systemd[69698]: Reached target Main User Target. Nov 23 03:12:15 localhost systemd[69698]: Startup finished in 140ms. Nov 23 03:12:15 localhost systemd[1]: Started User Manager for UID 0. Nov 23 03:12:15 localhost systemd[1]: Started Session c9 of User root. Nov 23 03:12:15 localhost systemd[1]: session-c9.scope: Deactivated successfully. Nov 23 03:12:15 localhost kernel: device br-int entered promiscuous mode Nov 23 03:12:15 localhost NetworkManager[5966]: [1763885535.9344] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Nov 23 03:12:15 localhost systemd-udevd[69766]: Network interface NamePolicy= disabled on kernel command line. Nov 23 03:12:16 localhost python3[69786]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:16 localhost python3[69802]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:16 localhost python3[69818]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:16 localhost kernel: device genev_sys_6081 entered promiscuous mode Nov 23 03:12:16 localhost NetworkManager[5966]: [1763885536.9921] device (genev_sys_6081): carrier: link connected Nov 23 03:12:16 localhost NetworkManager[5966]: [1763885536.9928] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12) Nov 23 03:12:17 localhost python3[69834]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:17 localhost python3[69855]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:17 localhost python3[69873]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:17 localhost python3[69889]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:12:18 localhost python3[69907]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:12:18 localhost python3[69925]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:12:18 localhost python3[69941]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:12:18 localhost python3[69957]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:12:19 localhost python3[69973]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:12:19 localhost python3[70034]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.2927735-109634-280796486329543/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:20 localhost python3[70063]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.2927735-109634-280796486329543/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:20 localhost python3[70092]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.2927735-109634-280796486329543/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER 
validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:21 localhost python3[70121]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.2927735-109634-280796486329543/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:21 localhost python3[70150]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.2927735-109634-280796486329543/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:22 localhost python3[70179]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.2927735-109634-280796486329543/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:22 localhost python3[70195]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 03:12:22 localhost systemd[1]: Reloading. Nov 23 03:12:22 localhost systemd-sysv-generator[70221]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:22 localhost systemd-rc-local-generator[70215]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:12:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:12:23 localhost python3[70246]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:12:23 localhost systemd[1]: Reloading. Nov 23 03:12:24 localhost systemd-sysv-generator[70279]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:24 localhost systemd-rc-local-generator[70275]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:12:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:12:24 localhost systemd[1]: Starting ceilometer_agent_compute container... 
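The ansible-copy and ansible-systemd invocations above install the tripleo_*.service wrapper units and then enable and restart each one after a daemon reload. Roughly the same sequence by hand, using only unit names that appear in this log (a sketch of what the module calls amount to, not the deployment tooling itself):

    # daemon_reload=True
    systemctl daemon-reload
    # state=restarted enabled=True for each wrapper unit
    systemctl enable tripleo_ceilometer_agent_compute.service
    systemctl restart tripleo_ceilometer_agent_compute.service
    systemctl is-active tripleo_ceilometer_agent_compute.service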
Nov 23 03:12:24 localhost tripleo-start-podman-container[70286]: Creating additional drop-in dependency for "ceilometer_agent_compute" (131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182) Nov 23 03:12:24 localhost systemd[1]: Reloading. Nov 23 03:12:24 localhost systemd-rc-local-generator[70342]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:12:24 localhost systemd-sysv-generator[70347]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:12:24 localhost systemd[1]: Started ceilometer_agent_compute container. Nov 23 03:12:25 localhost python3[70371]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:12:25 localhost systemd[1]: Reloading. Nov 23 03:12:25 localhost systemd-sysv-generator[70399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:25 localhost systemd-rc-local-generator[70395]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:12:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:12:26 localhost systemd[1]: Stopping User Manager for UID 0... Nov 23 03:12:26 localhost systemd[69698]: Activating special unit Exit the Session... Nov 23 03:12:26 localhost systemd[69698]: Stopped target Main User Target. Nov 23 03:12:26 localhost systemd[69698]: Stopped target Basic System. Nov 23 03:12:26 localhost systemd[69698]: Stopped target Paths. Nov 23 03:12:26 localhost systemd[69698]: Stopped target Sockets. Nov 23 03:12:26 localhost systemd[69698]: Stopped target Timers. Nov 23 03:12:26 localhost systemd[69698]: Stopped Daily Cleanup of User's Temporary Directories. Nov 23 03:12:26 localhost systemd[69698]: Closed D-Bus User Message Bus Socket. Nov 23 03:12:26 localhost systemd[69698]: Stopped Create User's Volatile Files and Directories. Nov 23 03:12:26 localhost systemd[69698]: Removed slice User Application Slice. Nov 23 03:12:26 localhost systemd[69698]: Reached target Shutdown. Nov 23 03:12:26 localhost systemd[69698]: Finished Exit the Session. Nov 23 03:12:26 localhost systemd[69698]: Reached target Exit the Session. Nov 23 03:12:26 localhost systemd[1]: user@0.service: Deactivated successfully. Nov 23 03:12:26 localhost systemd[1]: Stopped User Manager for UID 0. Nov 23 03:12:26 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Nov 23 03:12:26 localhost systemd[1]: Starting ceilometer_agent_ipmi container... Nov 23 03:12:26 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Nov 23 03:12:26 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Nov 23 03:12:26 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Nov 23 03:12:26 localhost systemd[1]: Removed slice User Slice of UID 0. 
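tripleo-start-podman-container reports creating an "additional drop-in dependency" for ceilometer_agent_compute, which is why another systemd reload follows immediately. To see the wrapper unit together with any generated drop-ins (general systemd tooling, not taken from this log):

    # Show the unit file plus every drop-in that extends it
    systemctl cat tripleo_ceilometer_agent_compute.service
    # List all units extended by drop-ins
    systemd-delta --type=extended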
Nov 23 03:12:26 localhost systemd[1]: Started ceilometer_agent_ipmi container. Nov 23 03:12:26 localhost python3[70440]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:12:26 localhost systemd[1]: Reloading. Nov 23 03:12:26 localhost systemd-sysv-generator[70470]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:26 localhost systemd-rc-local-generator[70465]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:12:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:12:27 localhost systemd[1]: Starting logrotate_crond container... Nov 23 03:12:27 localhost systemd[1]: Started logrotate_crond container. Nov 23 03:12:27 localhost python3[70507]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:12:28 localhost systemd[1]: Reloading. Nov 23 03:12:28 localhost systemd-sysv-generator[70540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:28 localhost systemd-rc-local-generator[70536]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:12:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:12:28 localhost systemd[1]: Starting nova_migration_target container... Nov 23 03:12:28 localhost systemd[1]: Started nova_migration_target container. Nov 23 03:12:29 localhost python3[70575]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:12:30 localhost systemd[1]: Reloading. Nov 23 03:12:30 localhost systemd-sysv-generator[70608]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:30 localhost systemd-rc-local-generator[70602]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:12:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:12:30 localhost systemd[1]: Starting ovn_controller container... Nov 23 03:12:30 localhost tripleo-start-podman-container[70615]: Creating additional drop-in dependency for "ovn_controller" (e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736) Nov 23 03:12:31 localhost systemd[1]: Reloading. Nov 23 03:12:31 localhost systemd-rc-local-generator[70676]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 23 03:12:31 localhost systemd-sysv-generator[70679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:12:31 localhost systemd[1]: Started ovn_controller container. Nov 23 03:12:31 localhost python3[70700]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:12:33 localhost systemd[1]: Reloading. Nov 23 03:12:33 localhost systemd-rc-local-generator[70724]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:12:33 localhost systemd-sysv-generator[70730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:12:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:12:33 localhost systemd[1]: Starting ovn_metadata_agent container... Nov 23 03:12:33 localhost systemd[1]: Started ovn_metadata_agent container. Nov 23 03:12:33 localhost python3[70783]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:35 localhost python3[70905]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005532584 step=4 update_config_hash_only=False Nov 23 03:12:36 localhost python3[70921]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:12:36 localhost python3[70937]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Nov 23 03:12:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
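The MemoryLimit= warning reappears on every "Reloading." pass because the vendor insights-client-boot.service unit is re-parsed each time; MemoryMax= is the documented replacement. A quick way to confirm where the deprecated directive lives (the unit path is the one named in the warning; whether to override it is left open):

    # Locate the deprecated directive flagged by systemd
    grep -n 'MemoryLimit=' /usr/lib/systemd/system/insights-client-boot.service
    # View the unit with any existing drop-ins before deciding on an override
    systemctl cat insights-client-boot.service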
Nov 23 03:12:37 localhost podman[70938]: 2025-11-23 08:12:37.904029047 +0000 UTC m=+0.090990917 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, url=https://www.redhat.com) Nov 23 03:12:37 localhost podman[70938]: 2025-11-23 08:12:37.915520062 +0000 UTC m=+0.102481902 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, 
container_name=collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 23 03:12:37 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:12:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:12:40 localhost podman[70960]: 2025-11-23 08:12:40.904883401 +0000 UTC m=+0.089450039 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044) Nov 23 03:12:40 localhost podman[70960]: 2025-11-23 08:12:40.940472793 +0000 UTC m=+0.125039441 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, container_name=iscsid, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 03:12:40 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:12:41 localhost systemd[1]: tmp-crun.xMvZ8O.mount: Deactivated successfully. 
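The recurring "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pairs are podman's transient per-container healthcheck service and timer units, named after the container ID. Two ways to look at the same state from the host (general podman/systemd commands; the ID prefix is the collectd container from this log):

    # The transient timer podman registered for the collectd container's healthcheck
    systemctl list-timers --all | grep 82704bc9
    # Which containers podman currently considers unhealthy
    podman ps --filter health=unhealthy --format '{{.Names}} {{.Status}}'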
Nov 23 03:12:41 localhost podman[70981]: 2025-11-23 08:12:41.9314724 +0000 UTC m=+0.110791669 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:12:41 localhost podman[70979]: 2025-11-23 08:12:41.889284824 +0000 UTC m=+0.081525283 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, url=https://www.redhat.com, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044) Nov 23 03:12:41 localhost podman[70980]: 2025-11-23 08:12:41.959398694 +0000 UTC m=+0.146274267 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1) Nov 23 03:12:41 localhost podman[70979]: 2025-11-23 08:12:41.979246648 +0000 UTC m=+0.171487057 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:12:41 localhost podman[70980]: 2025-11-23 08:12:41.99222871 +0000 UTC 
m=+0.179104293 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z) Nov 23 03:12:41 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:12:42 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:12:42 localhost podman[70981]: 2025-11-23 08:12:42.062814974 +0000 UTC m=+0.242134243 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Nov 23 03:12:42 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:12:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 03:12:42 localhost podman[71050]: 2025-11-23 08:12:42.191901999 +0000 UTC m=+0.085638361 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12) Nov 23 03:12:42 localhost podman[71050]: 2025-11-23 08:12:42.569961339 +0000 UTC m=+0.463697761 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4) Nov 23 03:12:42 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:12:45 localhost podman[71076]: 2025-11-23 08:12:45.888368503 +0000 UTC m=+0.074720894 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, release=1761123044, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:12:45 localhost podman[71077]: 2025-11-23 08:12:45.939149204 +0000 UTC m=+0.120181090 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, build-date=2025-11-19T00:14:25Z, version=17.1.12, config_id=tripleo_step4, container_name=ovn_metadata_agent, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4) Nov 23 03:12:46 localhost podman[71075]: 2025-11-23 08:12:46.012660149 +0000 UTC m=+0.198167254 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git) Nov 23 03:12:46 localhost podman[71076]: 2025-11-23 08:12:46.019446789 +0000 UTC m=+0.205799240 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ovn_controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 23 03:12:46 localhost podman[71077]: 2025-11-23 08:12:46.029465029 +0000 UTC m=+0.210496945 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:12:46 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:12:46 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:12:46 localhost podman[71075]: 2025-11-23 08:12:46.202512034 +0000 UTC m=+0.388019099 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, vcs-type=git, build-date=2025-11-18T22:49:46Z) Nov 23 03:12:46 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:12:46 localhost systemd[1]: tmp-crun.Hh5tow.mount: Deactivated successfully. 
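The pattern repeating above is the TripleO container health-check loop: systemd starts a transient unit that wraps /usr/bin/podman healthcheck run <container-id>, podman records a health_status event (healthy or starting) followed by an exec_died event for the probe process, and the transient unit then reports "Deactivated successfully" until its timer fires again. A minimal sketch for reading the same status on demand, assuming podman is reachable from the caller (these containers run as root, so sudo may be needed) and using container names taken from the events above; the State.Health vs State.Healthcheck key name differs between podman releases, so the sketch accepts either:

```python
#!/usr/bin/env python3
"""Minimal sketch: poll the health status that the transient
`podman healthcheck run <id>` units in the journal are refreshing."""
import json
import subprocess


def health_status(container: str) -> str:
    # `podman inspect` prints a JSON array with one object per container.
    out = subprocess.run(
        ["podman", "inspect", container],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0].get("State", {})
    # Recent podman releases expose "Health"; older ones used "Healthcheck".
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "none")


if __name__ == "__main__":
    # Container names as they appear in the health_status events above.
    for name in ("iscsid", "ceilometer_agent_compute", "ovn_controller", "metrics_qdr"):
        print(f"{name}: {health_status(name)}")
```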
Nov 23 03:12:54 localhost snmpd[67609]: empty variable list in _query Nov 23 03:12:54 localhost snmpd[67609]: empty variable list in _query Nov 23 03:12:57 localhost podman[71339]: Nov 23 03:12:57 localhost podman[71339]: 2025-11-23 08:12:57.826668699 +0000 UTC m=+0.065631681 container create 37660b063c7f1375d0ff8de96a7626fc663f74081255f53a2223f3a35e07cfd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_volhard, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, release=553, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux ) Nov 23 03:12:57 localhost systemd[1]: Started libpod-conmon-37660b063c7f1375d0ff8de96a7626fc663f74081255f53a2223f3a35e07cfd9.scope. Nov 23 03:12:57 localhost systemd[1]: Started libcrun container. Nov 23 03:12:57 localhost podman[71339]: 2025-11-23 08:12:57.89066923 +0000 UTC m=+0.129632172 container init 37660b063c7f1375d0ff8de96a7626fc663f74081255f53a2223f3a35e07cfd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_volhard, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc.) 
Nov 23 03:12:57 localhost podman[71339]: 2025-11-23 08:12:57.792699918 +0000 UTC m=+0.031662880 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 03:12:57 localhost podman[71339]: 2025-11-23 08:12:57.900691021 +0000 UTC m=+0.139653993 container start 37660b063c7f1375d0ff8de96a7626fc663f74081255f53a2223f3a35e07cfd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_volhard, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, version=7, name=rhceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-type=git) Nov 23 03:12:57 localhost podman[71339]: 2025-11-23 08:12:57.90100583 +0000 UTC m=+0.139968772 container attach 37660b063c7f1375d0ff8de96a7626fc663f74081255f53a2223f3a35e07cfd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_volhard, version=7, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container) Nov 23 03:12:57 localhost awesome_volhard[71354]: 167 167 Nov 23 03:12:57 localhost systemd[1]: libpod-37660b063c7f1375d0ff8de96a7626fc663f74081255f53a2223f3a35e07cfd9.scope: Deactivated successfully. 
Nov 23 03:12:57 localhost podman[71339]: 2025-11-23 08:12:57.904347213 +0000 UTC m=+0.143310175 container died 37660b063c7f1375d0ff8de96a7626fc663f74081255f53a2223f3a35e07cfd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_volhard, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 03:12:58 localhost podman[71359]: 2025-11-23 08:12:58.002510021 +0000 UTC m=+0.086346313 container remove 37660b063c7f1375d0ff8de96a7626fc663f74081255f53a2223f3a35e07cfd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_volhard, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, release=553, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7) Nov 23 03:12:58 localhost systemd[1]: libpod-conmon-37660b063c7f1375d0ff8de96a7626fc663f74081255f53a2223f3a35e07cfd9.scope: Deactivated successfully. 
Nov 23 03:12:58 localhost podman[71382]: Nov 23 03:12:58 localhost podman[71382]: 2025-11-23 08:12:58.245610975 +0000 UTC m=+0.081837474 container create ad88e0aac5da41e40bd0f241b3960a9be671b45aa33cd77eb5878833ee880574 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_burnell, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, release=553, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 03:12:58 localhost systemd[1]: Started libpod-conmon-ad88e0aac5da41e40bd0f241b3960a9be671b45aa33cd77eb5878833ee880574.scope. Nov 23 03:12:58 localhost systemd[1]: Started libcrun container. Nov 23 03:12:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c260319c036dc43388e036de1276999a6a1882273d53a2b2e53f522dad74d1/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c260319c036dc43388e036de1276999a6a1882273d53a2b2e53f522dad74d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c260319c036dc43388e036de1276999a6a1882273d53a2b2e53f522dad74d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 03:12:58 localhost podman[71382]: 2025-11-23 08:12:58.215561254 +0000 UTC m=+0.051787813 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 03:12:58 localhost podman[71382]: 2025-11-23 08:12:58.315482987 +0000 UTC m=+0.151709486 container init ad88e0aac5da41e40bd0f241b3960a9be671b45aa33cd77eb5878833ee880574 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_burnell, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, 
io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph) Nov 23 03:12:58 localhost podman[71382]: 2025-11-23 08:12:58.326037584 +0000 UTC m=+0.162264093 container start ad88e0aac5da41e40bd0f241b3960a9be671b45aa33cd77eb5878833ee880574 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_burnell, distribution-scope=public, ceph=True, vendor=Red Hat, Inc., version=7, name=rhceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-type=git) Nov 23 03:12:58 localhost podman[71382]: 2025-11-23 08:12:58.326298472 +0000 UTC m=+0.162525021 container attach ad88e0aac5da41e40bd0f241b3960a9be671b45aa33cd77eb5878833ee880574 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_burnell, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-type=git, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64) Nov 23 03:12:58 localhost systemd[1]: var-lib-containers-storage-overlay-46a2fa082c8503bca4239801578c320db8412206801c5d22234d64ebf7200551-merged.mount: Deactivated successfully. 
Nov 23 03:12:59 localhost happy_burnell[71398]: [
Nov 23 03:12:59 localhost happy_burnell[71398]: {
Nov 23 03:12:59 localhost happy_burnell[71398]: "available": false,
Nov 23 03:12:59 localhost happy_burnell[71398]: "ceph_device": false,
Nov 23 03:12:59 localhost happy_burnell[71398]: "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 03:12:59 localhost happy_burnell[71398]: "lsm_data": {},
Nov 23 03:12:59 localhost happy_burnell[71398]: "lvs": [],
Nov 23 03:12:59 localhost happy_burnell[71398]: "path": "/dev/sr0",
Nov 23 03:12:59 localhost happy_burnell[71398]: "rejected_reasons": [
Nov 23 03:12:59 localhost happy_burnell[71398]: "Insufficient space (<5GB)",
Nov 23 03:12:59 localhost happy_burnell[71398]: "Has a FileSystem"
Nov 23 03:12:59 localhost happy_burnell[71398]: ],
Nov 23 03:12:59 localhost happy_burnell[71398]: "sys_api": {
Nov 23 03:12:59 localhost happy_burnell[71398]: "actuators": null,
Nov 23 03:12:59 localhost happy_burnell[71398]: "device_nodes": "sr0",
Nov 23 03:12:59 localhost happy_burnell[71398]: "human_readable_size": "482.00 KB",
Nov 23 03:12:59 localhost happy_burnell[71398]: "id_bus": "ata",
Nov 23 03:12:59 localhost happy_burnell[71398]: "model": "QEMU DVD-ROM",
Nov 23 03:12:59 localhost happy_burnell[71398]: "nr_requests": "2",
Nov 23 03:12:59 localhost happy_burnell[71398]: "partitions": {},
Nov 23 03:12:59 localhost happy_burnell[71398]: "path": "/dev/sr0",
Nov 23 03:12:59 localhost happy_burnell[71398]: "removable": "1",
Nov 23 03:12:59 localhost happy_burnell[71398]: "rev": "2.5+",
Nov 23 03:12:59 localhost happy_burnell[71398]: "ro": "0",
Nov 23 03:12:59 localhost happy_burnell[71398]: "rotational": "1",
Nov 23 03:12:59 localhost happy_burnell[71398]: "sas_address": "",
Nov 23 03:12:59 localhost happy_burnell[71398]: "sas_device_handle": "",
Nov 23 03:12:59 localhost happy_burnell[71398]: "scheduler_mode": "mq-deadline",
Nov 23 03:12:59 localhost happy_burnell[71398]: "sectors": 0,
Nov 23 03:12:59 localhost happy_burnell[71398]: "sectorsize": "2048",
Nov 23 03:12:59 localhost happy_burnell[71398]: "size": 493568.0,
Nov 23 03:12:59 localhost happy_burnell[71398]: "support_discard": "0",
Nov 23 03:12:59 localhost happy_burnell[71398]: "type": "disk",
Nov 23 03:12:59 localhost happy_burnell[71398]: "vendor": "QEMU"
Nov 23 03:12:59 localhost happy_burnell[71398]: }
Nov 23 03:12:59 localhost happy_burnell[71398]: }
Nov 23 03:12:59 localhost happy_burnell[71398]: ]
Nov 23 03:12:59 localhost systemd[1]: libpod-ad88e0aac5da41e40bd0f241b3960a9be671b45aa33cd77eb5878833ee880574.scope: Deactivated successfully.
Nov 23 03:12:59 localhost systemd[1]: libpod-ad88e0aac5da41e40bd0f241b3960a9be671b45aa33cd77eb5878833ee880574.scope: Consumed 1.002s CPU time.
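The JSON printed by the short-lived rhceph container (happy_burnell) looks like ceph-volume inventory output: a list of per-device reports in which /dev/sr0 is marked unavailable for OSD use ("Insufficient space (<5GB)", "Has a FileSystem"). A minimal sketch for summarising such a report, assuming only the keys visible above (path, available, rejected_reasons, sys_api):

```python
#!/usr/bin/env python3
"""Minimal sketch: summarise a ceph-volume style inventory report,
i.e. a JSON array of device objects like the one the happy_burnell
container logged above."""
import json
import sys


def summarise(report: list) -> None:
    for dev in report:
        path = dev.get("path", "?")
        if dev.get("available"):
            size = dev.get("sys_api", {}).get("human_readable_size", "unknown size")
            print(f"{path}: usable as an OSD device ({size})")
        else:
            reasons = ", ".join(dev.get("rejected_reasons", [])) or "no reason given"
            print(f"{path}: rejected ({reasons})")


if __name__ == "__main__":
    # Feed it the JSON captured from the container output, e.g.
    #   python3 summarise_inventory.py < inventory.json
    summarise(json.load(sys.stdin))
```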
Nov 23 03:12:59 localhost podman[71382]: 2025-11-23 08:12:59.301829181 +0000 UTC m=+1.138055690 container died ad88e0aac5da41e40bd0f241b3960a9be671b45aa33cd77eb5878833ee880574 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_burnell, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main) Nov 23 03:12:59 localhost systemd[1]: var-lib-containers-storage-overlay-e6c260319c036dc43388e036de1276999a6a1882273d53a2b2e53f522dad74d1-merged.mount: Deactivated successfully. Nov 23 03:12:59 localhost podman[73159]: 2025-11-23 08:12:59.391240198 +0000 UTC m=+0.077855991 container remove ad88e0aac5da41e40bd0f241b3960a9be671b45aa33cd77eb5878833ee880574 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_burnell, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, architecture=x86_64) Nov 23 03:12:59 localhost systemd[1]: libpod-conmon-ad88e0aac5da41e40bd0f241b3960a9be671b45aa33cd77eb5878833ee880574.scope: Deactivated successfully. Nov 23 03:13:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:13:08 localhost systemd[1]: tmp-crun.RXVLJk.mount: Deactivated successfully. 
Nov 23 03:13:08 localhost podman[73186]: 2025-11-23 08:13:08.928822891 +0000 UTC m=+0.109148558 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_id=tripleo_step3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container) Nov 23 03:13:08 localhost podman[73186]: 2025-11-23 08:13:08.939181412 +0000 UTC m=+0.119507059 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Nov 23 03:13:08 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:13:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:13:11 localhost podman[73206]: 2025-11-23 08:13:11.888213083 +0000 UTC m=+0.073742263 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-type=git, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:13:11 localhost podman[73206]: 2025-11-23 08:13:11.920057599 +0000 UTC m=+0.105586779 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1761123044, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Nov 23 03:13:11 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:13:12 localhost systemd[1]: tmp-crun.7BsCQ4.mount: Deactivated successfully. 
Nov 23 03:13:12 localhost podman[73226]: 2025-11-23 08:13:12.907050792 +0000 UTC m=+0.092119591 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible) Nov 23 03:13:12 localhost podman[73226]: 2025-11-23 08:13:12.940019842 +0000 UTC m=+0.125088661 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:13:12 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:13:12 localhost podman[73225]: 2025-11-23 08:13:12.967488633 +0000 UTC m=+0.155452092 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, distribution-scope=public, container_name=ceilometer_agent_compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:13:13 localhost podman[73228]: 2025-11-23 08:13:13.007937755 +0000 UTC m=+0.187700080 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, 
name=rhosp17/openstack-cron, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:13:13 localhost podman[73225]: 2025-11-23 08:13:13.023710882 +0000 UTC m=+0.211674321 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:13:13 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:13:13 localhost podman[73228]: 2025-11-23 08:13:13.045574549 +0000 UTC m=+0.225336834 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, container_name=logrotate_crond, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1) Nov 23 03:13:13 
localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:13:13 localhost podman[73227]: 2025-11-23 08:13:13.102224932 +0000 UTC m=+0.283848655 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4) Nov 23 03:13:13 localhost podman[73227]: 2025-11-23 08:13:13.464946588 +0000 UTC m=+0.646570261 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git) Nov 23 03:13:13 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:13:16 localhost podman[73323]: 2025-11-23 08:13:16.901751573 +0000 UTC m=+0.088166189 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:13:16 localhost systemd[1]: tmp-crun.xkdegs.mount: Deactivated successfully. 
Nov 23 03:13:16 localhost podman[73324]: 2025-11-23 08:13:16.976565789 +0000 UTC m=+0.158178236 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, distribution-scope=public, tcib_managed=true, container_name=ovn_controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Nov 23 03:13:17 localhost podman[73324]: 2025-11-23 08:13:17.026443833 +0000 UTC m=+0.208056300 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, config_id=tripleo_step4, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, version=17.1.12) Nov 23 03:13:17 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:13:17 localhost podman[73325]: 2025-11-23 08:13:17.115515339 +0000 UTC m=+0.295266419 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 23 03:13:17 localhost podman[73323]: 2025-11-23 08:13:17.14820085 +0000 UTC m=+0.334615456 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., container_name=metrics_qdr, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:13:17 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:13:17 localhost podman[73325]: 2025-11-23 08:13:17.160729898 +0000 UTC m=+0.340480998 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:13:17 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:13:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:13:39 localhost podman[73393]: 2025-11-23 08:13:39.897077234 +0000 UTC m=+0.083270758 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:13:39 localhost podman[73393]: 2025-11-23 08:13:39.905934988 +0000 UTC m=+0.092128472 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044) Nov 23 03:13:39 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:13:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:13:42 localhost podman[73414]: 2025-11-23 08:13:42.892226773 +0000 UTC m=+0.078885222 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, version=17.1.12) Nov 23 03:13:42 localhost podman[73414]: 2025-11-23 08:13:42.90085855 +0000 UTC m=+0.087516999 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, distribution-scope=public, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, 
build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Nov 23 03:13:42 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:13:43 localhost podman[73434]: 2025-11-23 08:13:43.907676228 +0000 UTC m=+0.088476080 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi) Nov 23 03:13:43 localhost podman[73434]: 2025-11-23 08:13:43.936171469 +0000 UTC m=+0.116971301 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=) Nov 23 03:13:43 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:13:43 localhost podman[73433]: 2025-11-23 08:13:43.950842303 +0000 UTC m=+0.135072631 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:13:43 localhost podman[73433]: 2025-11-23 08:13:43.999466177 +0000 UTC m=+0.183696565 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:13:44 localhost podman[73435]: 2025-11-23 08:13:44.010815019 +0000 UTC m=+0.190763015 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4) Nov 23 03:13:44 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:13:44 localhost podman[73436]: 2025-11-23 08:13:44.06063002 +0000 UTC m=+0.236469488 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, distribution-scope=public) Nov 23 03:13:44 localhost podman[73436]: 2025-11-23 08:13:44.094267301 +0000 UTC m=+0.270106779 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, url=https://www.redhat.com) Nov 23 03:13:44 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:13:44 localhost podman[73435]: 2025-11-23 08:13:44.384372109 +0000 UTC m=+0.564320085 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
container_name=nova_migration_target, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:13:44 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:13:47 localhost podman[73526]: 2025-11-23 08:13:47.899602368 +0000 UTC m=+0.083820467 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 23 03:13:47 localhost systemd[1]: tmp-crun.Er5Zun.mount: Deactivated successfully. Nov 23 03:13:47 localhost podman[73527]: 2025-11-23 08:13:47.952136925 +0000 UTC m=+0.132193027 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Nov 23 03:13:48 localhost podman[73528]: 2025-11-23 08:13:48.000729992 +0000 UTC m=+0.178064140 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, vcs-type=git, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 23 03:13:48 localhost podman[73527]: 2025-11-23 08:13:48.023414857 +0000 UTC m=+0.203470949 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, release=1761123044, architecture=x86_64, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc.) Nov 23 03:13:48 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:13:48 localhost podman[73528]: 2025-11-23 08:13:48.068430024 +0000 UTC m=+0.245764152 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 23 03:13:48 localhost podman[73526]: 2025-11-23 08:13:48.071541249 +0000 UTC m=+0.255759338 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:13:48 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:13:48 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:14:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:14:10 localhost systemd[1]: tmp-crun.d198Lv.mount: Deactivated successfully. 
Nov 23 03:14:10 localhost podman[73675]: 2025-11-23 08:14:10.90895414 +0000 UTC m=+0.090266613 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=collectd, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-collectd, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) 
Nov 23 03:14:10 localhost podman[73675]: 2025-11-23 08:14:10.945807748 +0000 UTC m=+0.127120211 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible) Nov 23 03:14:10 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:14:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:14:13 localhost podman[73696]: 2025-11-23 08:14:13.893825688 +0000 UTC m=+0.080792353 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, container_name=iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 23 03:14:13 localhost podman[73696]: 2025-11-23 08:14:13.931519001 +0000 UTC m=+0.118485656 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, version=17.1.12, container_name=iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4) Nov 23 03:14:13 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:14:14 localhost podman[73718]: 2025-11-23 08:14:14.905724045 +0000 UTC m=+0.079093641 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=logrotate_crond, release=1761123044, vcs-type=git, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc.) 
Nov 23 03:14:14 localhost podman[73718]: 2025-11-23 08:14:14.918334671 +0000 UTC m=+0.091704287 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4) Nov 23 03:14:14 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:14:14 localhost podman[73717]: 2025-11-23 08:14:14.957352425 +0000 UTC m=+0.130683270 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, container_name=nova_migration_target) Nov 23 03:14:15 localhost podman[73715]: 2025-11-23 08:14:15.071022634 +0000 UTC m=+0.248970541 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=ceilometer_agent_compute, config_id=tripleo_step4, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, build-date=2025-11-19T00:11:48Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:14:15 localhost podman[73716]: 2025-11-23 08:14:15.033729243 +0000 UTC m=+0.208772451 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, 
name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 23 03:14:15 localhost podman[73715]: 2025-11-23 08:14:15.101382213 +0000 UTC m=+0.279330120 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, architecture=x86_64, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 23 03:14:15 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:14:15 localhost podman[73716]: 2025-11-23 08:14:15.117524076 +0000 UTC m=+0.292567284 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:14:15 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:14:15 localhost podman[73717]: 2025-11-23 08:14:15.342360448 +0000 UTC m=+0.515691343 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Nov 23 03:14:15 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:14:15 localhost systemd[1]: tmp-crun.QZiszm.mount: Deactivated successfully. Nov 23 03:14:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:14:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:14:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:14:18 localhost podman[73808]: 2025-11-23 08:14:18.896243817 +0000 UTC m=+0.082630269 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step1, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 23 03:14:18 localhost podman[73810]: 2025-11-23 08:14:18.95021866 +0000 UTC m=+0.130495725 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 23 03:14:18 localhost podman[73810]: 2025-11-23 08:14:18.997397523 +0000 UTC m=+0.177674618 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, distribution-scope=public) Nov 23 03:14:19 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:14:19 localhost podman[73809]: 2025-11-23 08:14:19.015379204 +0000 UTC m=+0.198173186 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:14:19 localhost podman[73809]: 2025-11-23 08:14:19.062429684 +0000 UTC m=+0.245223636 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:14:19 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:14:19 localhost podman[73808]: 2025-11-23 08:14:19.082386915 +0000 UTC m=+0.268773357 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git) Nov 23 03:14:19 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:14:19 localhost systemd[1]: tmp-crun.RoiTtz.mount: Deactivated successfully. Nov 23 03:14:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:14:41 localhost podman[73882]: 2025-11-23 08:14:41.895476761 +0000 UTC m=+0.082742793 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3) Nov 23 03:14:41 localhost podman[73882]: 2025-11-23 08:14:41.933461994 +0000 UTC m=+0.120728056 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Nov 23 03:14:41 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:14:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:14:44 localhost podman[73902]: 2025-11-23 08:14:44.887449805 +0000 UTC m=+0.074497871 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container) Nov 23 03:14:44 localhost podman[73902]: 2025-11-23 08:14:44.899284807 +0000 UTC m=+0.086332883 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Nov 23 03:14:44 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:14:45 localhost podman[73922]: 2025-11-23 08:14:45.903073276 +0000 UTC m=+0.086546979 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:14:45 localhost systemd[1]: tmp-crun.GnPNXz.mount: Deactivated successfully. 
Nov 23 03:14:45 localhost podman[73923]: 2025-11-23 08:14:45.961459173 +0000 UTC m=+0.144148103 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 23 03:14:46 localhost podman[73922]: 2025-11-23 08:14:46.012791394 +0000 UTC m=+0.196265127 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com) Nov 23 03:14:46 localhost podman[73923]: 2025-11-23 08:14:46.022561952 +0000 UTC m=+0.205250902 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:14:46 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:14:46 localhost podman[73924]: 2025-11-23 08:14:46.026325468 +0000 UTC m=+0.200817117 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Nov 23 03:14:46 localhost podman[73926]: 2025-11-23 08:14:46.076405201 +0000 UTC m=+0.248258779 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Nov 23 03:14:46 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:14:46 localhost podman[73926]: 2025-11-23 08:14:46.156329766 +0000 UTC m=+0.328183344 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, vcs-type=git, architecture=x86_64, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Nov 23 03:14:46 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:14:46 localhost podman[73924]: 2025-11-23 08:14:46.392493114 +0000 UTC m=+0.566984733 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target) Nov 23 03:14:46 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:14:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:14:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:14:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:14:49 localhost systemd[1]: tmp-crun.cFYbvP.mount: Deactivated successfully. 
Nov 23 03:14:49 localhost podman[74020]: 2025-11-23 08:14:49.900685777 +0000 UTC m=+0.089201800 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:14:49 localhost systemd[1]: tmp-crun.7u1UfT.mount: Deactivated successfully. 
Nov 23 03:14:49 localhost podman[74022]: 2025-11-23 08:14:49.960225709 +0000 UTC m=+0.144165142 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 23 03:14:50 localhost podman[74021]: 2025-11-23 08:14:50.008501216 +0000 UTC m=+0.194102130 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:14:50 localhost podman[74022]: 2025-11-23 08:14:50.061754396 +0000 UTC m=+0.245693879 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:14:50 localhost podman[74021]: 2025-11-23 08:14:50.064569032 +0000 UTC m=+0.250169926 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:14:50 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:14:50 localhost podman[74020]: 2025-11-23 08:14:50.113406307 +0000 UTC m=+0.301922280 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, container_name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:14:50 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:14:50 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:15:01 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:15:01 localhost recover_tripleo_nova_virtqemud[74098]: 62093 Nov 23 03:15:01 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:15:01 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 23 03:15:02 localhost podman[74199]: 2025-11-23 08:15:02.843284312 +0000 UTC m=+0.094696259 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux ) Nov 23 03:15:02 localhost podman[74199]: 2025-11-23 08:15:02.947479791 +0000 UTC m=+0.198891728 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.expose-services=, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 03:15:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:15:12 localhost podman[74342]: 2025-11-23 08:15:12.902621221 +0000 UTC m=+0.085718105 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, release=1761123044, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:15:12 localhost podman[74342]: 2025-11-23 08:15:12.9182902 +0000 UTC m=+0.101387074 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.buildah.version=1.41.4) Nov 23 03:15:12 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:15:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:15:15 localhost systemd[1]: tmp-crun.hGF8du.mount: Deactivated successfully. 
Nov 23 03:15:15 localhost podman[74410]: 2025-11-23 08:15:15.404661511 +0000 UTC m=+0.084926190 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, version=17.1.12, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Nov 23 03:15:15 localhost podman[74410]: 2025-11-23 08:15:15.41836618 +0000 UTC m=+0.098630849 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, version=17.1.12, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 03:15:15 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:15:15 localhost python3[74409]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:15:15 localhost python3[74472]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885715.1731164-113718-12673332390694/source _original_basename=tmp8ivy78pq follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:15:16 localhost podman[74504]: 2025-11-23 08:15:16.728609748 +0000 UTC m=+0.075358477 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=) Nov 23 03:15:16 localhost python3[74503]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:15:16 localhost podman[74504]: 2025-11-23 08:15:16.787441908 +0000 UTC m=+0.134190577 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, architecture=x86_64, batch=17.1_20251118.1) Nov 23 03:15:16 localhost systemd[1]: tmp-crun.lTT4Wu.mount: Deactivated successfully. Nov 23 03:15:16 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:15:16 localhost podman[74502]: 2025-11-23 08:15:16.835386816 +0000 UTC m=+0.182728814 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:15:16 localhost podman[74511]: 2025-11-23 08:15:16.791133862 +0000 UTC m=+0.130733483 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-cron, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:15:16 localhost podman[74511]: 2025-11-23 08:15:16.876343219 +0000 UTC m=+0.215942800 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, 
com.redhat.component=openstack-cron-container, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Nov 23 03:15:16 localhost podman[74502]: 2025-11-23 08:15:16.888398307 +0000 UTC m=+0.235740345 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:15:16 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:15:16 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:15:16 localhost podman[74505]: 2025-11-23 08:15:16.881508527 +0000 UTC m=+0.222181750 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Nov 23 03:15:17 localhost podman[74505]: 2025-11-23 08:15:17.258363271 +0000 UTC m=+0.599036434 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-19T00:36:58Z, release=1761123044, container_name=nova_migration_target, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git) Nov 23 03:15:17 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:15:18 localhost ansible-async_wrapper.py[74768]: Invoked with 513451918282 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885717.8733544-113850-278149624419331/AnsiballZ_command.py _ Nov 23 03:15:18 localhost ansible-async_wrapper.py[74771]: Starting module and watcher Nov 23 03:15:18 localhost ansible-async_wrapper.py[74771]: Start watching 74772 (3600) Nov 23 03:15:18 localhost ansible-async_wrapper.py[74772]: Start module (74772) Nov 23 03:15:18 localhost ansible-async_wrapper.py[74768]: Return async_wrapper task started. Nov 23 03:15:18 localhost python3[74790]: ansible-ansible.legacy.async_status Invoked with jid=513451918282.74768 mode=status _async_dir=/tmp/.ansible_async Nov 23 03:15:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:15:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:15:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:15:20 localhost podman[74839]: 2025-11-23 08:15:20.921109472 +0000 UTC m=+0.094883595 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:15:21 localhost podman[74841]: 2025-11-23 08:15:21.022973349 +0000 UTC m=+0.191963885 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Nov 23 03:15:21 localhost podman[74840]: 2025-11-23 08:15:20.991755714 +0000 UTC m=+0.163799493 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=ovn_controller, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible) Nov 23 03:15:21 localhost podman[74840]: 2025-11-23 08:15:21.074406094 +0000 UTC m=+0.246449873 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 23 03:15:21 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:15:21 localhost podman[74841]: 2025-11-23 08:15:21.094341154 +0000 UTC m=+0.263331690 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ovn_metadata_agent) Nov 23 03:15:21 localhost podman[74839]: 2025-11-23 08:15:21.104320239 +0000 UTC m=+0.278094332 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=) Nov 23 03:15:21 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:15:21 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:15:22 localhost puppet-user[74792]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 23 03:15:22 localhost puppet-user[74792]: (file: /etc/puppet/hiera.yaml) Nov 23 03:15:22 localhost puppet-user[74792]: Warning: Undefined variable '::deploy_config_name'; Nov 23 03:15:22 localhost puppet-user[74792]: (file & line not available) Nov 23 03:15:22 localhost puppet-user[74792]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 23 03:15:22 localhost puppet-user[74792]: (file & line not available) Nov 23 03:15:22 localhost puppet-user[74792]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Nov 23 03:15:22 localhost puppet-user[74792]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:15:22 localhost puppet-user[74792]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:15:22 localhost puppet-user[74792]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 23 03:15:22 localhost puppet-user[74792]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:15:22 localhost puppet-user[74792]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:15:22 localhost puppet-user[74792]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 23 03:15:22 localhost puppet-user[74792]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:15:22 localhost puppet-user[74792]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:15:22 localhost puppet-user[74792]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 23 03:15:22 localhost puppet-user[74792]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:15:22 localhost puppet-user[74792]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:15:22 localhost puppet-user[74792]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 23 03:15:22 localhost puppet-user[74792]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:15:22 localhost puppet-user[74792]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:15:22 localhost puppet-user[74792]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 23 03:15:22 localhost puppet-user[74792]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 23 03:15:22 localhost puppet-user[74792]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 23 03:15:22 localhost puppet-user[74792]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Nov 23 03:15:22 localhost puppet-user[74792]: Notice: Compiled catalog for np0005532584.localdomain in environment production in 0.20 seconds Nov 23 03:15:22 localhost puppet-user[74792]: Notice: Applied catalog in 0.32 seconds Nov 23 03:15:22 localhost puppet-user[74792]: Application: Nov 23 03:15:22 localhost puppet-user[74792]: Initial environment: production Nov 23 03:15:22 localhost puppet-user[74792]: Converged environment: production Nov 23 03:15:22 localhost puppet-user[74792]: Run mode: user Nov 23 03:15:22 localhost puppet-user[74792]: Changes: Nov 23 03:15:22 localhost puppet-user[74792]: Events: Nov 23 03:15:22 localhost puppet-user[74792]: Resources: Nov 23 03:15:22 localhost puppet-user[74792]: Total: 19 Nov 23 03:15:22 localhost puppet-user[74792]: Time: Nov 23 03:15:22 localhost puppet-user[74792]: Filebucket: 0.00 Nov 23 03:15:22 localhost puppet-user[74792]: Package: 0.00 Nov 23 03:15:22 localhost puppet-user[74792]: Schedule: 0.00 Nov 23 03:15:22 localhost puppet-user[74792]: Augeas: 0.01 Nov 23 03:15:22 localhost puppet-user[74792]: Exec: 0.01 Nov 23 03:15:22 localhost puppet-user[74792]: File: 0.02 Nov 23 03:15:22 localhost puppet-user[74792]: Service: 0.08 Nov 23 03:15:22 localhost puppet-user[74792]: Config retrieval: 0.26 Nov 23 03:15:22 localhost puppet-user[74792]: Transaction evaluation: 0.31 Nov 23 03:15:22 localhost puppet-user[74792]: Catalog application: 0.32 Nov 23 03:15:22 localhost puppet-user[74792]: Last run: 1763885722 Nov 23 03:15:22 localhost puppet-user[74792]: Total: 0.33 Nov 23 03:15:22 localhost puppet-user[74792]: Version: Nov 23 03:15:22 localhost puppet-user[74792]: Config: 1763885722 Nov 23 03:15:22 localhost puppet-user[74792]: Puppet: 7.10.0 Nov 23 03:15:22 localhost ansible-async_wrapper.py[74772]: Module complete (74772) Nov 23 03:15:23 localhost ansible-async_wrapper.py[74771]: Done in kid B. 
Nov 23 03:15:28 localhost python3[75004]: ansible-ansible.legacy.async_status Invoked with jid=513451918282.74768 mode=status _async_dir=/tmp/.ansible_async Nov 23 03:15:29 localhost python3[75020]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 03:15:29 localhost python3[75036]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:15:30 localhost python3[75086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:15:30 localhost python3[75104]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpchjzzuof recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 23 03:15:31 localhost python3[75134]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:15:32 localhost python3[75239]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Nov 23 03:15:32 localhost python3[75258]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:15:33 localhost python3[75290]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:15:34 localhost python3[75340]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:15:34 
localhost python3[75358]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:15:35 localhost python3[75420]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:15:35 localhost python3[75438]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:15:35 localhost python3[75500]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:15:36 localhost python3[75518]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:15:36 localhost python3[75580]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:15:37 localhost python3[75598]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:15:37 localhost python3[75628]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:15:37 localhost systemd[1]: Reloading. Nov 23 03:15:37 localhost systemd-rc-local-generator[75653]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:15:37 localhost systemd-sysv-generator[75658]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:15:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:15:38 localhost python3[75714]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:15:38 localhost python3[75732]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:15:39 localhost python3[75794]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 23 03:15:39 localhost python3[75812]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:15:40 localhost python3[75842]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:15:40 localhost systemd[1]: Reloading. Nov 23 03:15:40 localhost systemd-sysv-generator[75868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:15:40 localhost systemd-rc-local-generator[75865]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:15:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:15:40 localhost systemd[1]: Starting Create netns directory... Nov 23 03:15:40 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 23 03:15:40 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 23 03:15:40 localhost systemd[1]: Finished Create netns directory. 
Nov 23 03:15:41 localhost python3[75900]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 23 03:15:42 localhost python3[75959]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Nov 23 03:15:43 localhost podman[75997]: 2025-11-23 08:15:43.054018402 +0000 UTC m=+0.097845286 container create bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.12, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 23 03:15:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:15:43 localhost podman[75997]: 2025-11-23 08:15:43.00301807 +0000 UTC m=+0.046844994 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 23 03:15:43 localhost systemd[1]: Started libpod-conmon-bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.scope. Nov 23 03:15:43 localhost systemd[1]: Started libcrun container. Nov 23 03:15:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4c91e21a4f422a0f3d35f00c131b332e5afd08cf0cad9281d59f1b3acbd4cf/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 03:15:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4c91e21a4f422a0f3d35f00c131b332e5afd08cf0cad9281d59f1b3acbd4cf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 23 03:15:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4c91e21a4f422a0f3d35f00c131b332e5afd08cf0cad9281d59f1b3acbd4cf/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:15:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4c91e21a4f422a0f3d35f00c131b332e5afd08cf0cad9281d59f1b3acbd4cf/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:15:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4c91e21a4f422a0f3d35f00c131b332e5afd08cf0cad9281d59f1b3acbd4cf/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 23 03:15:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:15:43 localhost podman[75997]: 2025-11-23 08:15:43.156625281 +0000 UTC m=+0.200452165 container init bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step5, container_name=nova_compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:15:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:15:43 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Nov 23 03:15:43 localhost podman[75997]: 2025-11-23 08:15:43.1892631 +0000 UTC m=+0.233089954 container start bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, container_name=nova_compute, distribution-scope=public, config_id=tripleo_step5, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z) Nov 23 03:15:43 localhost python3[75959]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env 
TRIPLEO_CONFIG_HASH=97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 23 
03:15:43 localhost systemd[1]: Created slice User Slice of UID 0. Nov 23 03:15:43 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Nov 23 03:15:43 localhost podman[76012]: 2025-11-23 08:15:43.217239577 +0000 UTC m=+0.096660050 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:15:43 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Nov 23 03:15:43 localhost systemd[1]: Starting User Manager for UID 0... 
Nov 23 03:15:43 localhost podman[76012]: 2025-11-23 08:15:43.267528256 +0000 UTC m=+0.146948729 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:15:43 localhost podman[76030]: 2025-11-23 08:15:43.282331588 +0000 UTC m=+0.082270638 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, distribution-scope=public, config_id=tripleo_step5, version=17.1.12, container_name=nova_compute, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible) Nov 23 03:15:43 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:15:43 localhost podman[76030]: 2025-11-23 08:15:43.337314691 +0000 UTC m=+0.137253801 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, container_name=nova_compute, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step5, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git) Nov 23 03:15:43 localhost podman[76030]: unhealthy Nov 23 03:15:43 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:15:43 localhost systemd[1]: 
bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 03:15:43 localhost systemd[76049]: Queued start job for default target Main User Target. Nov 23 03:15:43 localhost systemd[76049]: Created slice User Application Slice. Nov 23 03:15:43 localhost systemd[76049]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Nov 23 03:15:43 localhost systemd[76049]: Started Daily Cleanup of User's Temporary Directories. Nov 23 03:15:43 localhost systemd[76049]: Reached target Paths. Nov 23 03:15:43 localhost systemd[76049]: Reached target Timers. Nov 23 03:15:43 localhost systemd[76049]: Starting D-Bus User Message Bus Socket... Nov 23 03:15:43 localhost systemd[76049]: Starting Create User's Volatile Files and Directories... Nov 23 03:15:43 localhost systemd[76049]: Listening on D-Bus User Message Bus Socket. Nov 23 03:15:43 localhost systemd[76049]: Finished Create User's Volatile Files and Directories. Nov 23 03:15:43 localhost systemd[76049]: Reached target Sockets. Nov 23 03:15:43 localhost systemd[76049]: Reached target Basic System. Nov 23 03:15:43 localhost systemd[76049]: Reached target Main User Target. Nov 23 03:15:43 localhost systemd[76049]: Startup finished in 143ms. Nov 23 03:15:43 localhost systemd[1]: Started User Manager for UID 0. Nov 23 03:15:43 localhost systemd[1]: Started Session c10 of User root. Nov 23 03:15:43 localhost systemd[1]: session-c10.scope: Deactivated successfully. Nov 23 03:15:43 localhost podman[76140]: 2025-11-23 08:15:43.711949287 +0000 UTC m=+0.090581804 container create a69494522d130f7e7b7298cc9b14e2277101493633e5dc7c00e56de494972548 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/container-config-scripts:/container-config-scripts']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.openshift.expose-services=, url=https://www.redhat.com, container_name=nova_wait_for_compute_service, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:15:43 localhost systemd[1]: Started libpod-conmon-a69494522d130f7e7b7298cc9b14e2277101493633e5dc7c00e56de494972548.scope. Nov 23 03:15:43 localhost systemd[1]: Started libcrun container. Nov 23 03:15:43 localhost podman[76140]: 2025-11-23 08:15:43.667055183 +0000 UTC m=+0.045687730 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 23 03:15:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622ea5b2f6fbe5d9b292df85d50e445712f85ed6230930160a21086a3d12c064/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Nov 23 03:15:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622ea5b2f6fbe5d9b292df85d50e445712f85ed6230930160a21086a3d12c064/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Nov 23 03:15:43 localhost podman[76140]: 2025-11-23 08:15:43.777851393 +0000 UTC m=+0.156483900 container init a69494522d130f7e7b7298cc9b14e2277101493633e5dc7c00e56de494972548 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step5, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, container_name=nova_wait_for_compute_service, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:15:43 localhost podman[76140]: 2025-11-23 08:15:43.787592072 +0000 UTC m=+0.166224599 container start a69494522d130f7e7b7298cc9b14e2277101493633e5dc7c00e56de494972548 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:15:43 localhost podman[76140]: 2025-11-23 08:15:43.787943212 +0000 UTC m=+0.166575759 container attach a69494522d130f7e7b7298cc9b14e2277101493633e5dc7c00e56de494972548 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_wait_for_compute_service, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Nov 23 03:15:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:15:45 localhost podman[76162]: 2025-11-23 08:15:45.89910213 +0000 UTC m=+0.080057401 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 23 03:15:45 localhost podman[76162]: 2025-11-23 08:15:45.908576491 +0000 UTC m=+0.089531732 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container) Nov 23 03:15:45 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:15:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:15:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:15:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:15:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:15:47 localhost podman[76189]: 2025-11-23 08:15:47.909222137 +0000 UTC m=+0.083559718 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, name=rhosp17/openstack-cron, version=17.1.12, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:15:47 localhost podman[76181]: 2025-11-23 08:15:47.961156606 +0000 UTC m=+0.145514014 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4) Nov 23 03:15:47 localhost podman[76189]: 2025-11-23 08:15:47.971228734 +0000 UTC m=+0.145566295 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, tcib_managed=true, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 23 03:15:47 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:15:47 localhost podman[76181]: 2025-11-23 08:15:47.993352702 +0000 UTC m=+0.177710090 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.buildah.version=1.41.4) Nov 23 03:15:48 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:15:48 localhost podman[76182]: 2025-11-23 08:15:48.065452048 +0000 UTC m=+0.245352140 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:15:48 localhost podman[76183]: 2025-11-23 08:15:48.113678384 +0000 UTC m=+0.291895254 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:15:48 localhost podman[76182]: 2025-11-23 08:15:48.14262843 +0000 UTC m=+0.322528512 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64) Nov 23 03:15:48 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:15:48 localhost podman[76183]: 2025-11-23 08:15:48.513256322 +0000 UTC m=+0.691473242 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Nov 23 03:15:48 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:15:48 localhost systemd[1]: tmp-crun.loAzZe.mount: Deactivated successfully. Nov 23 03:15:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:15:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:15:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:15:51 localhost podman[76280]: 2025-11-23 08:15:51.905195887 +0000 UTC m=+0.086988103 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, distribution-scope=public, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:15:51 localhost podman[76280]: 2025-11-23 08:15:51.954680672 +0000 UTC m=+0.136472968 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, version=17.1.12, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, vcs-type=git, batch=17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044) Nov 23 03:15:51 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:15:52 localhost podman[76281]: 2025-11-23 08:15:52.0056114 +0000 UTC m=+0.183919000 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044) Nov 23 03:15:52 localhost podman[76279]: 2025-11-23 08:15:51.958522799 +0000 UTC m=+0.143720939 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.) Nov 23 03:15:52 localhost podman[76281]: 2025-11-23 08:15:52.080335037 +0000 UTC m=+0.258642607 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Nov 23 03:15:52 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:15:52 localhost podman[76279]: 2025-11-23 08:15:52.146303746 +0000 UTC m=+0.331501916 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:15:52 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:15:53 localhost systemd[1]: Stopping User Manager for UID 0... Nov 23 03:15:53 localhost systemd[76049]: Activating special unit Exit the Session... Nov 23 03:15:53 localhost systemd[76049]: Stopped target Main User Target. Nov 23 03:15:53 localhost systemd[76049]: Stopped target Basic System. Nov 23 03:15:53 localhost systemd[76049]: Stopped target Paths. Nov 23 03:15:53 localhost systemd[76049]: Stopped target Sockets. Nov 23 03:15:53 localhost systemd[76049]: Stopped target Timers. Nov 23 03:15:53 localhost systemd[76049]: Stopped Daily Cleanup of User's Temporary Directories. Nov 23 03:15:53 localhost systemd[76049]: Closed D-Bus User Message Bus Socket. Nov 23 03:15:53 localhost systemd[76049]: Stopped Create User's Volatile Files and Directories. 
Nov 23 03:15:53 localhost systemd[76049]: Removed slice User Application Slice. Nov 23 03:15:53 localhost systemd[76049]: Reached target Shutdown. Nov 23 03:15:53 localhost systemd[76049]: Finished Exit the Session. Nov 23 03:15:53 localhost systemd[76049]: Reached target Exit the Session. Nov 23 03:15:53 localhost systemd[1]: user@0.service: Deactivated successfully. Nov 23 03:15:53 localhost systemd[1]: Stopped User Manager for UID 0. Nov 23 03:15:53 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Nov 23 03:15:53 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Nov 23 03:15:53 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Nov 23 03:15:53 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Nov 23 03:15:53 localhost systemd[1]: Removed slice User Slice of UID 0. Nov 23 03:16:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:16:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:16:13 localhost podman[76434]: 2025-11-23 08:16:13.909762748 +0000 UTC m=+0.094410271 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, name=rhosp17/openstack-collectd, version=17.1.12, container_name=collectd, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:16:13 localhost podman[76434]: 2025-11-23 08:16:13.919345682 +0000 UTC m=+0.103993175 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1) Nov 23 03:16:13 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:16:14 localhost podman[76435]: 2025-11-23 08:16:14.010276204 +0000 UTC m=+0.194623427 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Nov 23 03:16:14 localhost podman[76435]: 2025-11-23 08:16:14.067791834 +0000 UTC m=+0.252139087 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container) Nov 23 03:16:14 localhost podman[76435]: unhealthy Nov 23 03:16:14 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:16:14 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 03:16:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:16:16 localhost podman[76476]: 2025-11-23 08:16:16.878815871 +0000 UTC m=+0.066002782 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 23 03:16:16 localhost podman[76476]: 2025-11-23 08:16:16.919365191 +0000 UTC m=+0.106552042 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Nov 23 03:16:16 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:16:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:16:18 localhost podman[76495]: 2025-11-23 08:16:18.890286058 +0000 UTC m=+0.079844955 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute) Nov 23 03:16:18 localhost podman[76497]: 2025-11-23 08:16:18.948785148 +0000 UTC m=+0.131256088 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, container_name=nova_migration_target, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Nov 23 03:16:19 localhost podman[76496]: 2025-11-23 08:16:19.011100075 +0000 UTC m=+0.196452643 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi) Nov 23 03:16:19 localhost podman[76496]: 2025-11-23 08:16:19.044207209 +0000 UTC m=+0.229559747 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:16:19 localhost podman[76503]: 2025-11-23 08:16:19.050740418 +0000 UTC 
m=+0.230397081 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, version=17.1.12, name=rhosp17/openstack-cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, build-date=2025-11-18T22:49:32Z) Nov 23 03:16:19 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:16:19 localhost podman[76503]: 2025-11-23 08:16:19.062356284 +0000 UTC m=+0.242012997 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) Nov 23 03:16:19 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:16:19 localhost podman[76495]: 2025-11-23 08:16:19.077466787 +0000 UTC m=+0.267025684 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, version=17.1.12) Nov 23 03:16:19 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:16:19 localhost podman[76497]: 2025-11-23 08:16:19.351371099 +0000 UTC m=+0.533841999 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:16:19 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:16:19 localhost systemd[1]: tmp-crun.fy0x5U.mount: Deactivated successfully. Nov 23 03:16:19 localhost sshd[76591]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:16:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:16:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:16:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:16:22 localhost systemd[1]: tmp-crun.02RT4S.mount: Deactivated successfully. 
Nov 23 03:16:22 localhost podman[76593]: 2025-11-23 08:16:22.837432264 +0000 UTC m=+0.085982223 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step1, vcs-type=git, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team) Nov 23 03:16:22 localhost systemd[1]: tmp-crun.wsbBHm.mount: Deactivated successfully. 
Nov 23 03:16:22 localhost podman[76594]: 2025-11-23 08:16:22.891438596 +0000 UTC m=+0.136715245 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller) Nov 23 03:16:22 localhost podman[76595]: 2025-11-23 08:16:22.926572372 +0000 UTC m=+0.169433207 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, container_name=ovn_metadata_agent, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc.) Nov 23 03:16:22 localhost podman[76594]: 2025-11-23 08:16:22.962382078 +0000 UTC m=+0.207658727 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, container_name=ovn_controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Nov 23 03:16:22 localhost podman[76595]: 2025-11-23 08:16:22.971352732 +0000 UTC m=+0.214213607 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:16:22 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:16:22 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:16:23 localhost podman[76593]: 2025-11-23 08:16:23.065367159 +0000 UTC m=+0.313917028 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=metrics_qdr, architecture=x86_64, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:16:23 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:16:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:16:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:16:44 localhost podman[76674]: 2025-11-23 08:16:44.907591064 +0000 UTC m=+0.091689166 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, container_name=nova_compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true) Nov 23 03:16:44 localhost podman[76674]: 2025-11-23 08:16:44.962178935 +0000 UTC m=+0.146277097 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5) Nov 23 03:16:44 localhost podman[76674]: unhealthy Nov 23 03:16:44 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:16:44 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. 
Nov 23 03:16:45 localhost podman[76673]: 2025-11-23 08:16:45.03945469 +0000 UTC m=+0.229495845 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, release=1761123044, name=rhosp17/openstack-collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3) Nov 23 03:16:45 localhost podman[76673]: 2025-11-23 08:16:45.077510065 +0000 UTC m=+0.267551210 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, 
io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=collectd, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:16:45 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:16:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:16:47 localhost podman[76715]: 2025-11-23 08:16:47.891308136 +0000 UTC m=+0.078460022 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:16:47 localhost podman[76715]: 2025-11-23 08:16:47.92835579 +0000 UTC m=+0.115507676 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, container_name=iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:16:47 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:16:49 localhost systemd[1]: tmp-crun.I2hE3W.mount: Deactivated successfully. Nov 23 03:16:49 localhost systemd[1]: tmp-crun.rd5xmU.mount: Deactivated successfully. 
Nov 23 03:16:49 localhost podman[76733]: 2025-11-23 08:16:49.903668091 +0000 UTC m=+0.087507790 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Nov 23 03:16:49 localhost podman[76734]: 2025-11-23 08:16:49.958404445 +0000 UTC m=+0.139672295 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, batch=17.1_20251118.1) Nov 23 03:16:50 localhost podman[76735]: 2025-11-23 08:16:50.007776957 +0000 UTC m=+0.186754307 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z) Nov 23 03:16:50 localhost podman[76736]: 2025-11-23 08:16:49.927078747 +0000 UTC m=+0.102752666 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, container_name=logrotate_crond, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 23 03:16:50 localhost podman[76733]: 2025-11-23 08:16:50.036426713 +0000 UTC m=+0.220266452 container exec_died 
131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, distribution-scope=public, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:16:50 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:16:50 localhost podman[76736]: 2025-11-23 08:16:50.063402109 +0000 UTC m=+0.239076088 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:16:50 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:16:50 localhost podman[76734]: 2025-11-23 08:16:50.087469295 +0000 UTC m=+0.268737145 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1761123044) Nov 23 03:16:50 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:16:50 localhost podman[76735]: 2025-11-23 08:16:50.385361282 +0000 UTC m=+0.564338612 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:16:50 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:16:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:16:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:16:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:16:53 localhost podman[76828]: 2025-11-23 08:16:53.893496203 +0000 UTC m=+0.077888925 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 23 03:16:53 localhost podman[76829]: 2025-11-23 08:16:53.947930118 +0000 UTC m=+0.127951026 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044) Nov 23 03:16:54 localhost podman[76827]: 2025-11-23 08:16:53.999723364 +0000 UTC m=+0.186328354 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step1, release=1761123044, 
name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Nov 23 03:16:54 localhost podman[76829]: 2025-11-23 08:16:54.016382593 +0000 UTC m=+0.196403521 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 03:16:54 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:16:54 localhost podman[76828]: 2025-11-23 08:16:54.050405424 +0000 UTC m=+0.234798166 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller) Nov 23 03:16:54 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:16:54 localhost podman[76827]: 2025-11-23 08:16:54.195264148 +0000 UTC m=+0.381869058 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:16:54 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:17:00 localhost systemd[1]: session-27.scope: Deactivated successfully. Nov 23 03:17:00 localhost systemd[1]: session-27.scope: Consumed 2.810s CPU time. Nov 23 03:17:00 localhost systemd-logind[760]: Session 27 logged out. Waiting for processes to exit. Nov 23 03:17:00 localhost systemd-logind[760]: Removed session 27. Nov 23 03:17:01 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:17:01 localhost recover_tripleo_nova_virtqemud[76905]: 62093 Nov 23 03:17:01 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:17:01 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:17:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:17:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:17:15 localhost systemd[1]: tmp-crun.RzaKqk.mount: Deactivated successfully. Nov 23 03:17:15 localhost podman[76982]: 2025-11-23 08:17:15.921586234 +0000 UTC m=+0.099877177 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z) Nov 23 03:17:15 localhost podman[76982]: 2025-11-23 08:17:15.959740312 +0000 UTC m=+0.138031195 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:17:15 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:17:16 localhost podman[76983]: 2025-11-23 08:17:16.01491058 +0000 UTC m=+0.193030278 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:17:16 localhost podman[76983]: 2025-11-23 08:17:16.071910425 +0000 UTC m=+0.250030123 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, managed_by=tripleo_ansible) Nov 23 03:17:16 localhost podman[76983]: unhealthy Nov 23 03:17:16 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:17:16 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 03:17:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:17:18 localhost podman[77026]: 2025-11-23 08:17:18.906642437 +0000 UTC m=+0.089131889 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, release=1761123044, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:17:18 localhost podman[77026]: 2025-11-23 08:17:18.945509907 +0000 UTC m=+0.127999399 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:17:18 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:17:20 localhost podman[77046]: 2025-11-23 08:17:20.908444748 +0000 UTC m=+0.095146863 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=) Nov 23 03:17:20 localhost podman[77046]: 2025-11-23 08:17:20.946372239 +0000 UTC m=+0.133074324 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1) Nov 23 03:17:20 localhost podman[77047]: 2025-11-23 08:17:20.954928241 +0000 UTC m=+0.137779077 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z) Nov 23 03:17:20 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:17:21 localhost systemd[1]: tmp-crun.rf4eu2.mount: Deactivated successfully. Nov 23 03:17:21 localhost podman[77047]: 2025-11-23 08:17:21.017603319 +0000 UTC m=+0.200454145 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4) Nov 23 03:17:21 localhost podman[77048]: 2025-11-23 08:17:21.0179634 +0000 UTC m=+0.197291969 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Nov 23 03:17:21 localhost podman[77049]: 2025-11-23 08:17:21.06467917 +0000 UTC m=+0.240301596 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=logrotate_crond, 
build-date=2025-11-18T22:49:32Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 23 03:17:21 localhost podman[77049]: 2025-11-23 08:17:21.077263655 +0000 UTC m=+0.252886101 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-cron-container) Nov 23 03:17:21 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:17:21 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:17:21 localhost podman[77048]: 2025-11-23 08:17:21.408570504 +0000 UTC m=+0.587899103 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, container_name=nova_migration_target, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:17:21 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:17:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:17:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:17:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:17:24 localhost systemd[1]: tmp-crun.MhT7qf.mount: Deactivated successfully. Nov 23 03:17:24 localhost podman[77139]: 2025-11-23 08:17:24.919799379 +0000 UTC m=+0.099865917 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-ovn-controller, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:17:24 localhost podman[77139]: 2025-11-23 08:17:24.948813688 +0000 UTC m=+0.128880226 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ovn_controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:17:24 localhost systemd[1]: tmp-crun.D7Sf5b.mount: Deactivated successfully. Nov 23 03:17:24 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:17:24 localhost podman[77138]: 2025-11-23 08:17:24.967330244 +0000 UTC m=+0.150398063 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr) Nov 23 03:17:25 localhost podman[77140]: 2025-11-23 08:17:25.017164679 +0000 UTC m=+0.194236305 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 23 03:17:25 localhost podman[77140]: 2025-11-23 08:17:25.093399332 +0000 UTC m=+0.270470928 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 23 03:17:25 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:17:25 localhost podman[77138]: 2025-11-23 08:17:25.159426613 +0000 UTC m=+0.342494432 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:17:25 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:17:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:17:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:17:46 localhost podman[77214]: 2025-11-23 08:17:46.899464857 +0000 UTC m=+0.081875177 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:17:46 localhost podman[77214]: 2025-11-23 08:17:46.911367971 +0000 UTC m=+0.093778271 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:17:46 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:17:46 localhost systemd[1]: tmp-crun.lY0Cjc.mount: Deactivated successfully. Nov 23 03:17:46 localhost podman[77215]: 2025-11-23 08:17:46.956621276 +0000 UTC m=+0.137260481 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=) Nov 23 03:17:47 localhost podman[77215]: 2025-11-23 08:17:47.001378026 +0000 UTC m=+0.182017231 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public) Nov 23 03:17:47 localhost podman[77215]: unhealthy Nov 23 03:17:47 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:17:47 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 03:17:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:17:49 localhost podman[77256]: 2025-11-23 08:17:49.901685225 +0000 UTC m=+0.082795715 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, distribution-scope=public, managed_by=tripleo_ansible, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 23 03:17:49 localhost podman[77256]: 2025-11-23 08:17:49.909769632 +0000 UTC m=+0.090880122 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, tcib_managed=true, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 03:17:49 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:17:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:17:51 localhost systemd[1]: tmp-crun.ULUk8o.mount: Deactivated successfully. 
Nov 23 03:17:51 localhost podman[77277]: 2025-11-23 08:17:51.915540065 +0000 UTC m=+0.097806174 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, architecture=x86_64, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:17:51 localhost podman[77277]: 2025-11-23 08:17:51.949392361 +0000 UTC m=+0.131658460 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Nov 23 03:17:51 localhost systemd[1]: tmp-crun.mKpiwa.mount: Deactivated successfully. 
Nov 23 03:17:51 localhost podman[77279]: 2025-11-23 08:17:51.968721183 +0000 UTC m=+0.144943447 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, version=17.1.12, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:17:51 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:17:52 localhost podman[77278]: 2025-11-23 08:17:52.0163441 +0000 UTC m=+0.193761500 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public) Nov 23 03:17:52 localhost podman[77278]: 2025-11-23 08:17:52.063614427 +0000 UTC m=+0.241031797 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, 
name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Nov 23 03:17:52 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:17:52 localhost podman[77280]: 2025-11-23 08:17:52.077594895 +0000 UTC m=+0.247642661 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, vcs-type=git, com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4) Nov 23 03:17:52 localhost podman[77280]: 2025-11-23 08:17:52.110542422 +0000 UTC m=+0.280590208 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public) Nov 23 03:17:52 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:17:52 localhost podman[77279]: 2025-11-23 08:17:52.334543218 +0000 UTC m=+0.510765472 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=) Nov 23 03:17:52 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:17:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:17:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:17:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:17:55 localhost podman[77371]: 2025-11-23 08:17:55.912353011 +0000 UTC m=+0.093910505 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:17:55 localhost podman[77371]: 2025-11-23 08:17:55.941255296 +0000 UTC m=+0.122812710 
container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, vcs-type=git, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ovn_controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:17:55 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:17:55 localhost podman[77370]: 2025-11-23 08:17:55.960862276 +0000 UTC m=+0.145924327 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:17:56 localhost podman[77372]: 2025-11-23 08:17:56.014823007 +0000 UTC m=+0.190486270 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:17:56 localhost podman[77372]: 2025-11-23 08:17:56.08551986 +0000 UTC m=+0.261183113 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible) Nov 23 03:17:56 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:17:56 localhost podman[77370]: 2025-11-23 08:17:56.191018459 +0000 UTC m=+0.376080460 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Nov 23 03:17:56 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:18:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:18:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:18:17 localhost podman[77526]: 2025-11-23 08:18:17.894323532 +0000 UTC m=+0.079727703 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute) Nov 23 03:18:17 localhost podman[77525]: 2025-11-23 08:18:17.866859003 +0000 UTC m=+0.054875055 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step3, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd) Nov 23 03:18:17 localhost podman[77526]: 2025-11-23 08:18:17.93636926 +0000 UTC m=+0.121773501 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Nov 23 03:18:17 localhost podman[77526]: unhealthy Nov 23 03:18:17 localhost 
systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:18:17 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 03:18:17 localhost podman[77525]: 2025-11-23 08:18:17.951268241 +0000 UTC m=+0.139284343 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, container_name=collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, version=17.1.12, release=1761123044, name=rhosp17/openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-collectd-container) Nov 23 03:18:17 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:18:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:18:20 localhost podman[77567]: 2025-11-23 08:18:20.90104117 +0000 UTC m=+0.091086905 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T23:44:13Z) Nov 23 03:18:20 localhost podman[77567]: 2025-11-23 08:18:20.915415844 +0000 UTC m=+0.105461579 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, tcib_managed=true, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team) Nov 23 03:18:20 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:18:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:18:22 localhost systemd[1]: tmp-crun.GpeVLb.mount: Deactivated successfully. 
Nov 23 03:18:22 localhost podman[77586]: 2025-11-23 08:18:22.899427246 +0000 UTC m=+0.083014935 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true) Nov 23 03:18:22 localhost systemd[1]: tmp-crun.cuUTqe.mount: Deactivated successfully. 
Nov 23 03:18:22 localhost podman[77587]: 2025-11-23 08:18:22.976104374 +0000 UTC m=+0.153274564 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 23 03:18:23 localhost podman[77586]: 2025-11-23 08:18:23.002136069 +0000 UTC m=+0.185723758 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044) Nov 23 03:18:23 localhost podman[77587]: 2025-11-23 08:18:23.012350994 +0000 UTC m=+0.189521224 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, release=1761123044) Nov 23 03:18:23 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:18:23 localhost podman[77588]: 2025-11-23 08:18:23.026548562 +0000 UTC m=+0.201674509 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4) Nov 23 03:18:23 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:18:23 localhost podman[77594]: 2025-11-23 08:18:22.97758354 +0000 UTC m=+0.150245131 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-cron-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Nov 23 03:18:23 localhost podman[77594]: 2025-11-23 08:18:23.10743852 +0000 UTC m=+0.280100101 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:18:23 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:18:23 localhost podman[77588]: 2025-11-23 08:18:23.384513817 +0000 UTC m=+0.559639764 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, distribution-scope=public, vcs-type=git, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, vendor=Red Hat, Inc.) Nov 23 03:18:23 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:18:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:18:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:18:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:18:26 localhost systemd[1]: tmp-crun.58rARd.mount: Deactivated successfully. 
Nov 23 03:18:26 localhost podman[77678]: 2025-11-23 08:18:26.903126014 +0000 UTC m=+0.092545460 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 03:18:26 localhost podman[77679]: 2025-11-23 08:18:26.955204012 +0000 UTC m=+0.139728236 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, 
tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:18:27 localhost podman[77679]: 2025-11-23 08:18:27.000419029 +0000 UTC m=+0.184943233 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ovn-controller, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible) Nov 23 03:18:27 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated 
successfully. Nov 23 03:18:27 localhost podman[77680]: 2025-11-23 08:18:27.027829725 +0000 UTC m=+0.208689826 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team) Nov 23 03:18:27 localhost podman[77678]: 2025-11-23 08:18:27.099139177 +0000 UTC m=+0.288558613 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 23 03:18:27 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:18:27 localhost podman[77680]: 2025-11-23 08:18:27.150368649 +0000 UTC m=+0.331228770 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1) Nov 23 03:18:27 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:18:27 localhost systemd[1]: tmp-crun.S51s5M.mount: Deactivated successfully. Nov 23 03:18:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:18:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:18:48 localhost podman[77820]: 2025-11-23 08:18:48.903179937 +0000 UTC m=+0.089885257 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:18:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Nov 23 03:18:48 localhost podman[77821]: 2025-11-23 08:18:48.939169179 +0000 UTC m=+0.126210219 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, url=https://www.redhat.com) Nov 23 03:18:48 localhost recover_tripleo_nova_virtqemud[77855]: 62093 Nov 23 03:18:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:18:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
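The podman health_status events above carry both the container name and the check result inside the label list (for example container_name=nova_compute, health_status=unhealthy). A minimal Python sketch for pulling those two fields out of journal text like this, assuming only the container_name= and health_status= labels visible in these lines; it is a reading aid, not a podman or TripleO interface:

import re
import sys

# Field names (container_name=, health_status=) are copied from the podman
# event lines above; naive comma splitting is good enough for these labels.
EVENT = re.compile(r'container health_status [0-9a-f]{12,64}')
NAME = re.compile(r'container_name=([^,)]+)')
STATUS = re.compile(r'health_status=([^,)]+)')

def unhealthy(lines):
    for line in lines:
        if not EVENT.search(line):
            continue
        name = NAME.search(line)
        status = STATUS.search(line)
        if status and status.group(1) != 'healthy':
            yield (name.group(1) if name else '?', status.group(1))

if __name__ == '__main__':
    src = open(sys.argv[1]) if len(sys.argv) > 1 else sys.stdin
    for ctr, state in unhealthy(src):
        print(f'{ctr}: {state}')

Run against a saved journal extract it would report, for the entry above, "nova_compute: unhealthy" while staying silent for the healthy collectd and iscsid checks.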
Nov 23 03:18:48 localhost podman[77820]: 2025-11-23 08:18:48.967448812 +0000 UTC m=+0.154154182 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:18:48 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:18:49 localhost podman[77821]: 2025-11-23 08:18:49.018384975 +0000 UTC m=+0.205426005 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:18:49 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:18:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:18:51 localhost podman[77890]: 2025-11-23 08:18:51.903826677 +0000 UTC m=+0.085423219 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, container_name=iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 03:18:51 localhost podman[77890]: 2025-11-23 08:18:51.912295418 +0000 UTC m=+0.093892010 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, 
batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, vcs-type=git) Nov 23 03:18:51 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:18:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:18:53 localhost systemd[1]: tmp-crun.edwFs5.mount: Deactivated successfully. 
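Each "Started /usr/bin/podman healthcheck run <id>" line is a transient systemd unit driving the container's configured check (the /openstack/healthcheck test named in config_data). A small sketch of that step, assuming only that podman healthcheck run exits zero when the check passes and non-zero otherwise:

import subprocess
import sys

# Minimal sketch of what the transient healthcheck units above do: invoke the
# container's configured healthcheck command and map the exit code to a
# healthy/unhealthy label like the one recorded in the podman events.
def run_healthcheck(container: str) -> str:
    proc = subprocess.run(['podman', 'healthcheck', 'run', container],
                          capture_output=True, text=True)
    return 'healthy' if proc.returncode == 0 else 'unhealthy'

if __name__ == '__main__':
    for ctr in sys.argv[1:]:
        print(ctr, run_healthcheck(ctr))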
Nov 23 03:18:53 localhost podman[77911]: 2025-11-23 08:18:53.957323386 +0000 UTC m=+0.117815540 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.expose-services=) Nov 23 03:18:53 localhost podman[77910]: 2025-11-23 08:18:53.917396743 +0000 UTC m=+0.083312303 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4) Nov 23 03:18:53 localhost podman[77909]: 2025-11-23 08:18:53.975089035 +0000 UTC m=+0.143681188 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12) Nov 23 03:18:54 localhost podman[77912]: 2025-11-23 08:18:54.025089999 +0000 UTC m=+0.184086826 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z) Nov 23 03:18:54 localhost podman[77912]: 2025-11-23 08:18:54.03257391 +0000 UTC m=+0.191570747 container exec_died 
c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, tcib_managed=true, container_name=logrotate_crond, version=17.1.12, name=rhosp17/openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container) Nov 23 03:18:54 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:18:54 localhost podman[77910]: 2025-11-23 08:18:54.054715224 +0000 UTC m=+0.220630754 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 23 03:18:54 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:18:54 localhost podman[77909]: 2025-11-23 08:18:54.075536757 +0000 UTC m=+0.244128920 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Nov 23 03:18:54 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:18:54 localhost podman[77911]: 2025-11-23 08:18:54.289436603 +0000 UTC m=+0.449928807 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:18:54 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:18:55 localhost systemd[1]: libpod-a69494522d130f7e7b7298cc9b14e2277101493633e5dc7c00e56de494972548.scope: Deactivated successfully. 
Nov 23 03:18:55 localhost podman[78008]: 2025-11-23 08:18:55.968292681 +0000 UTC m=+0.054850514 container died a69494522d130f7e7b7298cc9b14e2277101493633e5dc7c00e56de494972548 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, container_name=nova_wait_for_compute_service, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:18:55 localhost systemd[1]: tmp-crun.iXNSpB.mount: Deactivated successfully. Nov 23 03:18:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a69494522d130f7e7b7298cc9b14e2277101493633e5dc7c00e56de494972548-userdata-shm.mount: Deactivated successfully. Nov 23 03:18:56 localhost systemd[1]: var-lib-containers-storage-overlay-622ea5b2f6fbe5d9b292df85d50e445712f85ed6230930160a21086a3d12c064-merged.mount: Deactivated successfully. 
Nov 23 03:18:56 localhost podman[78008]: 2025-11-23 08:18:56.011263879 +0000 UTC m=+0.097821662 container cleanup a69494522d130f7e7b7298cc9b14e2277101493633e5dc7c00e56de494972548 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc.) Nov 23 03:18:56 localhost systemd[1]: libpod-conmon-a69494522d130f7e7b7298cc9b14e2277101493633e5dc7c00e56de494972548.scope: Deactivated successfully. 
Nov 23 03:18:56 localhost python3[75959]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=b43218eec4380850a20e0a337fdcf6cf --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 23 03:18:56 localhost python3[78063]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:18:56 localhost python3[78079]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 23 03:18:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:18:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. 
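The PODMAN-CONTAINER-DEBUG entry above shows tripleo_container_manage expanding config_data into the podman run command it executes: environment entries become --env, volumes become --volume, net becomes --network, user becomes --user, and the image is appended last. A simplified sketch of that mapping; the helper below is illustrative only and covers just the keys visible in this entry, not the full Ansible module:

# Sketch of the config_data -> podman run mapping visible in the debug line
# above. Labels, log driver and conmon pidfile handling are omitted.
def podman_run_args(name: str, cfg: dict) -> list:
    cmd = ['podman', 'run', '--name', name,
           f"--detach={cfg.get('detach', True)}"]
    for key, value in cfg.get('environment', {}).items():
        cmd += ['--env', f'{key}={value}']
    if 'net' in cfg:
        cmd += ['--network', cfg['net']]
    if 'user' in cfg:
        cmd += ['--user', cfg['user']]
    for volume in cfg.get('volumes', []):
        cmd += ['--volume', volume]
    cmd.append(cfg['image'])
    return cmd

if __name__ == '__main__':
    cfg = {
        'detach': False,
        'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
        'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1',
        'net': 'host',
        'user': 'nova',
        'volumes': ['/etc/hosts:/etc/hosts:ro'],
    }
    print(' '.join(podman_run_args('nova_wait_for_compute_service', cfg)))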
Nov 23 03:18:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:18:57 localhost systemd[1]: tmp-crun.z1zd15.mount: Deactivated successfully. Nov 23 03:18:57 localhost podman[78141]: 2025-11-23 08:18:57.509121747 +0000 UTC m=+0.079011961 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12) Nov 23 03:18:57 localhost podman[78142]: 2025-11-23 08:18:57.519327452 +0000 UTC m=+0.083091017 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 23 03:18:57 localhost podman[78142]: 2025-11-23 08:18:57.545371307 +0000 UTC m=+0.109134932 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container) Nov 23 03:18:57 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:18:57 localhost python3[78140]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885936.9748197-118470-231726331101293/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:18:57 localhost podman[78145]: 2025-11-23 08:18:57.636301405 +0000 UTC m=+0.195284042 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Nov 23 03:18:57 localhost podman[78145]: 2025-11-23 08:18:57.704564683 +0000 UTC m=+0.263547300 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 03:18:57 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
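Annotation: the config_data labels recorded on these health_status/exec_died events (ovn_controller, ovn_metadata_agent) carry the container definition that tripleo_ansible hands to podman: image, net, privileged, user, volumes, healthcheck. As a rough illustration only, and not the actual tripleo_ansible/paunch code, the sketch below shows how such a dict could be mapped onto podman run flags; the CONFIG_DATA constant and podman_args helper are hypothetical names, and the values are copied from the ovn_controller entry above.

```python
# Minimal sketch (assumption: NOT the real tripleo_ansible/paunch code) showing how a
# config_data dict like the ovn_controller one above could map onto podman run flags.
CONFIG_DATA = {
    'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1',
    'net': 'host',
    'privileged': True,
    'user': 'root',
    'healthcheck': {'test': '/openstack/healthcheck 6642'},
    'volumes': [
        '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro',
        '/lib/modules:/lib/modules:ro',
        '/run:/run',
    ],
}

def podman_args(name: str, cfg: dict) -> list[str]:
    """Translate a subset of config_data keys into 'podman run' arguments."""
    args = ['podman', 'run', '--detach', '--name', name]
    if cfg.get('net'):
        args += ['--network', cfg['net']]
    if cfg.get('privileged'):
        args.append('--privileged')
    if cfg.get('user'):
        args += ['--user', cfg['user']]
    if cfg.get('healthcheck'):
        args += ['--health-cmd', cfg['healthcheck']['test']]
    for vol in cfg.get('volumes', []):
        args += ['--volume', vol]
    args.append(cfg['image'])
    return args

if __name__ == '__main__':
    print(' '.join(podman_args('ovn_controller', CONFIG_DATA)))
```

In the deployment shown here the restart policy and ordering are handled by the generated systemd units rather than by podman itself, which is why 'restart' and 'start_order' are left out of the sketch.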
Nov 23 03:18:57 localhost podman[78141]: 2025-11-23 08:18:57.73649136 +0000 UTC m=+0.306381574 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_id=tripleo_step1, architecture=x86_64, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:18:57 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:18:57 localhost python3[78232]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 03:18:57 localhost systemd[1]: Reloading. Nov 23 03:18:58 localhost systemd-sysv-generator[78266]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:18:58 localhost systemd-rc-local-generator[78260]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:18:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Nov 23 03:18:59 localhost python3[78287]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:18:59 localhost systemd[1]: Reloading. Nov 23 03:18:59 localhost systemd-rc-local-generator[78315]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:18:59 localhost systemd-sysv-generator[78320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:18:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:18:59 localhost systemd[1]: Starting nova_compute container... Nov 23 03:18:59 localhost tripleo-start-podman-container[78327]: Creating additional drop-in dependency for "nova_compute" (bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5) Nov 23 03:18:59 localhost systemd[1]: Reloading. Nov 23 03:18:59 localhost systemd-sysv-generator[78391]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:18:59 localhost systemd-rc-local-generator[78386]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:18:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:18:59 localhost systemd[1]: Started nova_compute container. 
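Annotation: each daemon-reload above repeats the same warning that insights-client-boot.service still uses the deprecated MemoryLimit= directive and should use MemoryMax= instead. A small, hypothetical helper for finding unit files that still carry the old directive is sketched below; the directory list is an assumption (the standard vendor and admin unit paths) and find_deprecated_memorylimit is an illustrative name.

```python
# Hypothetical helper: report unit files that still use the deprecated MemoryLimit=
# directive, for which systemd suggests MemoryMax= (as in the warning above).
from pathlib import Path

UNIT_DIRS = [Path('/usr/lib/systemd/system'), Path('/etc/systemd/system')]  # assumption

def find_deprecated_memorylimit():
    hits = []
    for unit_dir in UNIT_DIRS:
        if not unit_dir.is_dir():
            continue
        for unit in unit_dir.glob('*.service'):
            try:
                text = unit.read_text(errors='replace')
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), start=1):
                if line.strip().startswith('MemoryLimit='):
                    hits.append((unit, lineno, line.strip()))
    return hits

if __name__ == '__main__':
    for unit, lineno, line in find_deprecated_memorylimit():
        print(f'{unit}:{lineno}: {line}  ->  use MemoryMax= instead')
```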
Nov 23 03:19:00 localhost python3[78427]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:19:01 localhost python3[78548]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005532584 step=5 update_config_hash_only=False Nov 23 03:19:02 localhost python3[78564]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 03:19:02 localhost python3[78580]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Nov 23 03:19:07 localhost sshd[78581]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:19:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:19:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
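Annotation: ansible-container_config_data is invoked here with config_path=/var/lib/tripleo-config/container-puppet-config/step_5, config_pattern=container-puppet-*.json and empty config_overrides. The sketch below is a simplified illustration of that kind of gathering step, not the actual Ansible module; gather_container_configs is a hypothetical name and the merge behaviour is an assumption.

```python
# Simplified illustration (NOT the actual ansible-container_config_data module):
# collect per-container JSON config files matching a pattern, then apply overrides.
import json
from pathlib import Path

def gather_container_configs(config_path: str, config_pattern: str, overrides: dict) -> dict:
    configs = {}
    for cfg_file in sorted(Path(config_path).glob(config_pattern)):
        with cfg_file.open() as handle:
            configs[cfg_file.stem] = json.load(handle)
    # Assumption: overrides win over whatever the files declared.
    for name, data in overrides.items():
        configs.setdefault(name, {}).update(data)
    return configs

if __name__ == '__main__':
    result = gather_container_configs(
        '/var/lib/tripleo-config/container-puppet-config/step_5',  # from the log
        'container-puppet-*.json',                                 # from the log
        {},                                                        # config_overrides={}
    )
    print(json.dumps(result, indent=2))
```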
Nov 23 03:19:19 localhost podman[78660]: 2025-11-23 08:19:19.909294528 +0000 UTC m=+0.095554062 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:19:19 localhost podman[78661]: 2025-11-23 08:19:19.95791198 +0000 UTC m=+0.144165334 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.expose-services=, release=1761123044, container_name=nova_compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4) Nov 23 03:19:19 localhost podman[78660]: 2025-11-23 08:19:19.978915268 +0000 UTC m=+0.165174762 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, version=17.1.12, config_id=tripleo_step3, name=rhosp17/openstack-collectd, 
release=1761123044, tcib_managed=true, com.redhat.component=openstack-collectd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:19:19 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
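Annotation: each of these transient '/usr/bin/podman healthcheck run <id>' units reports health_status=healthy (collectd and nova_compute above) and then exec_died when the check process exits. To read the recorded health state of one of these containers after the fact, something along the lines of the sketch below works; container_health is a hypothetical helper, and because the JSON field has varied between podman releases the code checks both spellings.

```python
# Query the last recorded health status of a container via 'podman inspect'.
# Note: depending on the podman version the State field is 'Health' or 'Healthcheck'.
import json
import subprocess

def container_health(name_or_id: str) -> str:
    out = subprocess.run(
        ['podman', 'inspect', name_or_id],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0].get('State', {})
    health = state.get('Health') or state.get('Healthcheck') or {}
    return health.get('Status', 'unknown')

if __name__ == '__main__':
    for container in ('collectd', 'nova_compute'):
        print(container, container_health(container))
```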
Nov 23 03:19:20 localhost podman[78661]: 2025-11-23 08:19:20.015954383 +0000 UTC m=+0.202207667 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:19:20 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:19:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
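Annotation: the nova_compute healthcheck is recorded as '/openstack/healthcheck 5672', i.e. the shared healthcheck script takes a port argument (5672 is the usual AMQP/RabbitMQ port), while ovn_controller passes 6642 (the OVN southbound DB port). The script's contents are not part of this log; purely as a loose illustration of a port-style liveness probe, a minimal sketch could look like this, with probe() being a hypothetical stand-in.

```python
# Loose illustration of a port-argument health probe. This is NOT the real
# /openstack/healthcheck script, whose contents are not shown in this log.
# Exit 0 if the TCP port accepts a connection, 1 otherwise, which is the
# convention podman healthchecks rely on.
import socket
import sys

def probe(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == '__main__':
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 5672  # e.g. the AMQP port above
    sys.exit(0 if probe('127.0.0.1', port) else 1)
```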
Nov 23 03:19:22 localhost podman[78704]: 2025-11-23 08:19:22.903427807 +0000 UTC m=+0.093159628 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 03:19:22 localhost podman[78704]: 2025-11-23 08:19:22.935980842 +0000 UTC m=+0.125712613 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, container_name=iscsid) Nov 23 03:19:22 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:19:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:19:24 localhost systemd[1]: tmp-crun.E7GWjg.mount: Deactivated successfully. 
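Annotation: four more healthcheck timers fire almost simultaneously here (ceilometer_agent_compute, ceilometer_agent_ipmi, nova_migration_target, logrotate_crond). For a one-shot overview of every container on the host instead of reading individual journal events, a short sketch using 'podman ps --format json' is shown below; the field names follow current podman output and should be treated as an assumption if your version differs.

```python
# One-shot overview of container state/health on the host via 'podman ps --format json'.
# Assumption: 'Names' is a list and 'State'/'Status' are strings in this podman version.
import json
import subprocess

def health_overview():
    out = subprocess.run(
        ['podman', 'ps', '--all', '--format', 'json'],
        check=True, capture_output=True, text=True,
    ).stdout
    for ctr in json.loads(out):
        names = ','.join(ctr.get('Names', []))
        print(f"{names:30s} state={ctr.get('State', '?'):10s} status={ctr.get('Status', '?')}")

if __name__ == '__main__':
    health_overview()
```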
Nov 23 03:19:24 localhost podman[78723]: 2025-11-23 08:19:24.904700063 +0000 UTC m=+0.089570687 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, build-date=2025-11-19T00:11:48Z, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, vcs-type=git) Nov 23 03:19:24 localhost podman[78723]: 2025-11-23 08:19:24.965407917 +0000 UTC m=+0.150278551 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, 
config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., tcib_managed=true, container_name=ceilometer_agent_compute) Nov 23 03:19:24 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:19:25 localhost podman[78724]: 2025-11-23 08:19:25.00370157 +0000 UTC m=+0.185369875 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true) Nov 23 03:19:25 localhost podman[78726]: 2025-11-23 08:19:24.966477241 +0000 UTC m=+0.141510232 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, 
container_name=logrotate_crond, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:19:25 localhost podman[78726]: 2025-11-23 08:19:25.046056558 +0000 UTC m=+0.221089599 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, release=1761123044, com.redhat.component=openstack-cron-container, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack 
Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:19:25 localhost podman[78725]: 2025-11-23 08:19:25.053727496 +0000 UTC m=+0.229929492 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Nov 23 03:19:25 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:19:25 localhost podman[78724]: 2025-11-23 08:19:25.145643004 +0000 UTC m=+0.327311309 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, vcs-type=git, distribution-scope=public, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:19:25 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:19:25 localhost podman[78725]: 2025-11-23 08:19:25.452681536 +0000 UTC m=+0.628883622 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vcs-type=git, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z) Nov 23 03:19:25 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:19:27 localhost systemd[1]: tmp-crun.wZcbKk.mount: Deactivated successfully. 
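Annotation: the same health_status/exec_died cycle now repeats for metrics_qdr, ovn_controller and ovn_metadata_agent. To pull just those podman events back out of the journal for a given time window, the JSON output of journalctl can be filtered as sketched below; this assumes the events are logged with SYSLOG_IDENTIFIER=podman, as the 'podman[PID]:' prefix in this log suggests, and podman_health_events is an illustrative name.

```python
# Sketch: extract podman health_status / exec_died events from the journal as JSON.
# Assumption: the events carry SYSLOG_IDENTIFIER=podman; adjust the filter if not.
import json
import subprocess

def podman_health_events(since: str = '-1h'):
    out = subprocess.run(
        ['journalctl', '--identifier', 'podman', '--since', since, '--output', 'json'],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        entry = json.loads(line)
        msg = entry.get('MESSAGE', '')
        if 'health_status' in msg or 'exec_died' in msg:
            yield msg

if __name__ == '__main__':
    for message in podman_health_events():
        print(message)
```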
Nov 23 03:19:27 localhost podman[78819]: 2025-11-23 08:19:27.913401152 +0000 UTC m=+0.090042572 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com) Nov 23 03:19:27 localhost podman[78818]: 2025-11-23 08:19:27.961944061 +0000 UTC m=+0.141899843 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller) Nov 23 03:19:27 localhost podman[78819]: 2025-11-23 08:19:27.988448239 +0000 UTC m=+0.165089659 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:19:28 localhost podman[78817]: 2025-11-23 08:19:27.999136809 +0000 UTC m=+0.182730223 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:19:28 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:19:28 localhost podman[78818]: 2025-11-23 08:19:28.016526496 +0000 UTC m=+0.196482248 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container) Nov 23 03:19:28 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:19:28 localhost podman[78817]: 2025-11-23 08:19:28.192841521 +0000 UTC m=+0.376434965 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:19:28 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:19:28 localhost systemd[1]: tmp-crun.LTDZ6f.mount: Deactivated successfully. Nov 23 03:19:32 localhost sshd[78894]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:19:32 localhost systemd-logind[760]: New session 33 of user zuul. Nov 23 03:19:32 localhost systemd[1]: Started Session 33 of User zuul. 
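Note on the health-check cycle recorded above: each transient "/usr/bin/podman healthcheck run <id>" unit started by systemd runs the container's configured test (here '/openstack/healthcheck'), podman then emits a health_status event followed by exec_died for the check process, and systemd reports the unit "Deactivated successfully". A minimal sketch, assuming podman is on PATH and the caller has the same privileges as those units, of re-running the checks by hand for container names seen in this log:

    #!/usr/bin/env python3
    # Editorial sketch: re-run the same checks the transient units above trigger.
    # "podman healthcheck run NAME" executes the container's configured healthcheck
    # command and exits 0 when it passes, non-zero otherwise.
    import subprocess

    # Container names taken from the health_status events in this log.
    CONTAINERS = ["ovn_controller", "ovn_metadata_agent", "metrics_qdr"]

    def check(name: str) -> str:
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        return "healthy" if result.returncode == 0 else "unhealthy"

    if __name__ == "__main__":
        for name in CONTAINERS:
            print(f"{name}: {check(name)}")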
Nov 23 03:19:33 localhost python3[79003]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 03:19:40 localhost python3[79267]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Nov 23 03:19:47 localhost python3[79405]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Nov 23 03:19:47 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Nov 23 03:19:47 localhost systemd-journald[47422]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. Nov 23 03:19:47 localhost systemd-journald[47422]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 23 03:19:47 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 03:19:47 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 03:19:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:19:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
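The ansible-ansible.builtin.iptables invocation above inserts an ACCEPT rule for TCP traffic from 38.102.83.114 arriving on eth0 into the filter/INPUT chain, tagged with the comment "allow ssh access for zuul executor"; the journal rotation that follows is driven by the field hash table fill level (270 of 333 items, roughly 81.1%). A rough reconstruction of the inserted rule, built only from the logged parameters (the module's exact invocation may differ):

    #!/usr/bin/env python3
    # Editorial sketch: approximate iptables command for the rule recorded above.
    # Parameter values are copied from the ansible.builtin.iptables invocation;
    # running this requires root, like the original task.
    import subprocess

    rule = [
        "iptables", "-t", "filter",    # table=filter
        "-I", "INPUT",                 # action=insert, chain=INPUT
        "-i", "eth0",                  # in_interface=eth0
        "-p", "tcp",                   # protocol=tcp
        "-s", "38.102.83.114",         # source
        "-j", "ACCEPT",                # jump=ACCEPT
        "-m", "comment", "--comment", "allow ssh access for zuul executor",
    ]
    subprocess.run(rule, check=True)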
Nov 23 03:19:50 localhost podman[79430]: 2025-11-23 08:19:50.918164775 +0000 UTC m=+0.098859405 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute) Nov 23 03:19:50 localhost podman[79430]: 2025-11-23 08:19:50.949059829 +0000 UTC m=+0.129754499 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, 
version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Nov 23 03:19:50 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:19:50 localhost podman[79429]: 2025-11-23 08:19:50.972309577 +0000 UTC m=+0.152304626 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc.) 
Nov 23 03:19:51 localhost podman[79429]: 2025-11-23 08:19:51.009243137 +0000 UTC m=+0.189238186 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:19:51 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:19:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:19:53 localhost podman[79474]: 2025-11-23 08:19:53.900045405 +0000 UTC m=+0.085150441 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:19:53 localhost podman[79474]: 2025-11-23 08:19:53.938280966 +0000 UTC m=+0.123385972 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, vcs-type=git, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 23 03:19:53 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:19:55 localhost podman[79494]: 2025-11-23 08:19:55.916316544 +0000 UTC m=+0.094466208 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, version=17.1.12, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z) Nov 23 03:19:55 localhost podman[79494]: 2025-11-23 08:19:55.950382826 +0000 UTC m=+0.128532470 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.12, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:19:55 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:19:56 localhost systemd[1]: tmp-crun.Y46aCx.mount: Deactivated successfully. 
Nov 23 03:19:56 localhost podman[79495]: 2025-11-23 08:19:56.027071784 +0000 UTC m=+0.199254664 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:19:56 localhost podman[79495]: 2025-11-23 08:19:56.065318455 +0000 UTC m=+0.237501335 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:19:56 localhost podman[79496]: 2025-11-23 08:19:56.02886862 +0000 UTC m=+0.199367358 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, build-date=2025-11-19T00:36:58Z, version=17.1.12, vcs-type=git, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target) Nov 23 03:19:56 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:19:56 localhost podman[79500]: 2025-11-23 08:19:56.085648723 +0000 UTC m=+0.253440747 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, name=rhosp17/openstack-cron, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=logrotate_crond, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:19:56 localhost podman[79500]: 2025-11-23 08:19:56.122493541 +0000 UTC m=+0.290285555 container exec_died 
c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 23 03:19:56 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
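The config_data payloads attached to the podman events above are Python dict literals (quoted keys, True/False booleans, nested lists), so a snippet copied out of the log can be inspected offline with ast.literal_eval. A minimal sketch, using an abbreviated excerpt of the metrics_qdr config_data from this log:

    #!/usr/bin/env python3
    # Editorial sketch: parse a config_data snippet copied out of the log.
    # The string below is an abbreviated excerpt of the metrics_qdr config_data.
    import ast

    config_data = (
        "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
        "'healthcheck': {'test': '/openstack/healthcheck'}, "
        "'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', "
        "'net': 'host', 'privileged': False, 'user': 'qdrouterd', "
        "'volumes': ['/etc/hosts:/etc/hosts:ro', '/dev/log:/dev/log']}"
    )

    cfg = ast.literal_eval(config_data)   # literals only, no code execution
    print("image:           ", cfg["image"])
    print("healthcheck test:", cfg["healthcheck"]["test"])
    print("read-only mounts:", [v for v in cfg["volumes"] if v.endswith(":ro")])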
Nov 23 03:19:56 localhost podman[79496]: 2025-11-23 08:19:56.391011073 +0000 UTC m=+0.561509811 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Nov 23 03:19:56 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:19:58 localhost podman[79588]: 2025-11-23 08:19:58.900079282 +0000 UTC m=+0.082404946 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:19:58 localhost podman[79588]: 2025-11-23 08:19:58.948492497 +0000 UTC m=+0.130818151 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true) Nov 23 03:19:58 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:19:58 localhost podman[79586]: 2025-11-23 08:19:58.964843492 +0000 UTC m=+0.151270262 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vendor=Red Hat, Inc., config_id=tripleo_step1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=) Nov 23 03:19:59 localhost podman[79587]: 2025-11-23 08:19:59.02078453 +0000 UTC m=+0.204526307 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, container_name=ovn_controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:19:59 localhost podman[79587]: 2025-11-23 08:19:59.078377048 +0000 UTC m=+0.262118815 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller) Nov 23 03:19:59 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:19:59 localhost podman[79586]: 2025-11-23 08:19:59.162610779 +0000 UTC m=+0.349037489 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team) Nov 23 03:19:59 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:20:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:20:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:20:21 localhost podman[79742]: 2025-11-23 08:20:21.904194163 +0000 UTC m=+0.086575914 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) 
Nov 23 03:20:21 localhost podman[79741]: 2025-11-23 08:20:21.959197432 +0000 UTC m=+0.142641427 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Nov 23 03:20:21 localhost podman[79741]: 2025-11-23 08:20:21.973378189 +0000 UTC m=+0.156822264 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4) Nov 23 03:20:21 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:20:22 localhost podman[79742]: 2025-11-23 08:20:22.012979362 +0000 UTC m=+0.195361123 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:20:22 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:20:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:20:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:20:24 localhost recover_tripleo_nova_virtqemud[79785]: 62093 Nov 23 03:20:24 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:20:24 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:20:24 localhost podman[79783]: 2025-11-23 08:20:24.895470184 +0000 UTC m=+0.083141868 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:20:24 localhost podman[79783]: 2025-11-23 08:20:24.934408746 +0000 UTC m=+0.122080430 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, 
release=1761123044, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=) Nov 23 03:20:24 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:20:26 localhost podman[79805]: 2025-11-23 08:20:26.897946317 +0000 UTC m=+0.083487820 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Nov 23 03:20:26 localhost podman[79807]: 2025-11-23 08:20:26.953176493 +0000 UTC m=+0.133296408 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4) Nov 23 03:20:26 localhost podman[79805]: 2025-11-23 08:20:26.956458424 +0000 UTC m=+0.141999967 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible) Nov 23 03:20:26 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:20:27 localhost podman[79806]: 2025-11-23 08:20:27.003663892 +0000 UTC m=+0.185024725 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=) Nov 23 03:20:27 localhost podman[79808]: 2025-11-23 08:20:27.069172995 +0000 UTC m=+0.246060861 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-cron, container_name=logrotate_crond, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:20:27 localhost podman[79808]: 2025-11-23 08:20:27.079574636 +0000 UTC m=+0.256462502 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-cron, tcib_managed=true, container_name=logrotate_crond, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Nov 23 03:20:27 localhost podman[79806]: 2025-11-23 08:20:27.091563226 +0000 UTC m=+0.272924059 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 23 03:20:27 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:20:27 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:20:27 localhost podman[79807]: 2025-11-23 08:20:27.322399976 +0000 UTC m=+0.502519891 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 
17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Nov 23 03:20:27 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:20:29 localhost podman[79902]: 2025-11-23 08:20:29.904426457 +0000 UTC m=+0.085208822 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, version=17.1.12) Nov 23 03:20:29 localhost podman[79904]: 2025-11-23 08:20:29.955750863 +0000 UTC m=+0.130804431 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.openshift.expose-services=, container_name=ovn_metadata_agent, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044) Nov 23 03:20:30 localhost podman[79904]: 2025-11-23 08:20:30.003498887 +0000 UTC m=+0.178552505 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Nov 23 03:20:30 localhost podman[79903]: 2025-11-23 08:20:30.018809619 +0000 UTC m=+0.196464298 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, architecture=x86_64) Nov 23 03:20:30 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:20:30 localhost podman[79903]: 2025-11-23 08:20:30.066379089 +0000 UTC m=+0.244033738 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible) Nov 23 03:20:30 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
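
The entries above repeat a single pattern: systemd starts a transient unit that wraps /usr/bin/podman healthcheck run <container-id>, podman records a health_status event followed by exec_died when the check process exits, and the unit is then reported as deactivated. A minimal sketch of running the same check by hand is shown below; it assumes podman is on PATH and that the named containers exist on this host, and it treats exit code 0 from podman healthcheck run as healthy and any other code as unhealthy (the specific non-zero values are not relied on).

import subprocess

def healthcheck(container: str) -> str:
    # Runs the container's configured healthcheck test (the images in the
    # entries above define '/openstack/healthcheck') and reads the exit status.
    result = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True, text=True,
    )
    return "healthy" if result.returncode == 0 else "unhealthy"

if __name__ == "__main__":
    # Container names taken from the log entries above.
    for name in ("metrics_qdr", "ovn_controller", "ovn_metadata_agent"):
        print(f"{name}: {healthcheck(name)}")
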
Nov 23 03:20:30 localhost podman[79902]: 2025-11-23 08:20:30.109300254 +0000 UTC m=+0.290082559 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Nov 23 03:20:30 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:20:47 localhost systemd[1]: session-33.scope: Deactivated successfully. Nov 23 03:20:47 localhost systemd[1]: session-33.scope: Consumed 5.809s CPU time. Nov 23 03:20:47 localhost systemd-logind[760]: Session 33 logged out. Waiting for processes to exit. Nov 23 03:20:47 localhost systemd-logind[760]: Removed session 33. Nov 23 03:20:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:20:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:20:52 localhost podman[80024]: 2025-11-23 08:20:52.913179795 +0000 UTC m=+0.098308108 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.expose-services=, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible) Nov 23 03:20:52 localhost podman[80024]: 2025-11-23 08:20:52.921237533 +0000 UTC m=+0.106365776 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-collectd, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=) Nov 23 03:20:52 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
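
Each health_status event embeds the result in its label list, and in these entries the first three labels are image, name and health_status. Assuming that ordering holds, the lines can be reduced to a per-container status map with a short script like the one below (the sample line is an abridged copy of the collectd entry above; how the lines are obtained, for example from journalctl output or a saved log file, is left open).

import re

# Matches podman health_status events as they appear in the entries above,
# assuming the label list starts with image=, name=, health_status=.
EVENT = re.compile(
    r"podman\[\d+\]:.*container health_status \S+ "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
    r"health_status=(?P<status>[^,)]+)"
)

def summarize(lines):
    """Return {container_name: last seen health_status}."""
    latest = {}
    for line in lines:
        m = EVENT.search(line)
        if m:
            latest[m.group("name")] = m.group("status")
    return latest

if __name__ == "__main__":
    sample = ("Nov 23 03:20:52 localhost podman[80024]: 2025-11-23 "
              "08:20:52.913179795 +0000 UTC m=+0.098308108 container "
              "health_status 82704bc9... (image=registry.redhat.io/"
              "rhosp-rhel9/openstack-collectd:17.1, name=collectd, "
              "health_status=healthy, ...)")
    print(summarize([sample]))  # {'collectd': 'healthy'}
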
Nov 23 03:20:52 localhost podman[80025]: 2025-11-23 08:20:52.970248287 +0000 UTC m=+0.152300654 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, container_name=nova_compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:20:53 localhost podman[80025]: 2025-11-23 08:20:53.023332537 +0000 UTC m=+0.205384894 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, 
io.buildah.version=1.41.4, container_name=nova_compute, config_id=tripleo_step5, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute) Nov 23 03:20:53 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:20:55 localhost sshd[80071]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:20:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:20:55 localhost systemd-logind[760]: New session 34 of user zuul. Nov 23 03:20:55 localhost systemd[1]: Started Session 34 of User zuul. 
Nov 23 03:20:55 localhost podman[80073]: 2025-11-23 08:20:55.344191902 +0000 UTC m=+0.084554832 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Nov 23 03:20:55 localhost podman[80073]: 2025-11-23 08:20:55.384593621 +0000 UTC m=+0.124956501 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, release=1761123044, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, 
architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) Nov 23 03:20:55 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:20:55 localhost python3[80111]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
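
The config_data label on each event is the container definition that tripleo_ansible handed to podman, printed as a Python dict literal: image, network and privilege settings, the healthcheck test, and the volume list with mount options. Because it is a plain literal, it can be lifted out of a line and inspected directly; the sketch below does that for an abridged copy of the iscsid config from the entry above (only a subset of the volumes is kept, and ast.literal_eval is assumed to be acceptable for this trusted, locally generated text).

import ast

# Abridged config_data from the iscsid entry above; the full label value
# is a valid Python dict literal, so it parses once lifted out of the line.
config_data = ast.literal_eval(
    "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
    "'healthcheck': {'test': '/openstack/healthcheck'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', "
    "'net': 'host', 'privileged': True, 'restart': 'always', "
    "'volumes': ['/etc/hosts:/etc/hosts:ro', '/dev:/dev', '/run:/run', "
    "'/var/lib/iscsi:/var/lib/iscsi:z']}"
)

print("image:      ", config_data["image"])
print("healthcheck:", config_data["healthcheck"]["test"])
for vol in config_data["volumes"]:
    # host path, container path, optional mount option (ro, z, shared, ...)
    parts = vol.split(":")
    option = parts[2] if len(parts) > 2 else "rw"
    print(f"  {parts[0]} -> {parts[1]} ({option})")
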
Nov 23 03:20:57 localhost podman[80113]: 2025-11-23 08:20:57.904448931 +0000 UTC m=+0.089754092 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:20:57 localhost podman[80116]: 2025-11-23 08:20:57.967962753 +0000 UTC m=+0.144642298 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:20:57 localhost podman[80116]: 2025-11-23 08:20:57.978173119 +0000 UTC m=+0.154852654 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, config_id=tripleo_step4, release=1761123044, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 23 03:20:57 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:20:57 localhost podman[80113]: 2025-11-23 08:20:57.9953693 +0000 UTC m=+0.180674461 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, architecture=x86_64, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:20:58 localhost systemd[1]: 
131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:20:58 localhost podman[80115]: 2025-11-23 08:20:58.06273553 +0000 UTC m=+0.240182478 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute) Nov 23 03:20:58 localhost podman[80114]: 2025-11-23 08:20:57.931117906 +0000 UTC m=+0.111580748 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12) Nov 23 03:20:58 localhost podman[80114]: 2025-11-23 08:20:58.115359726 +0000 UTC m=+0.295822558 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible) Nov 23 03:20:58 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:20:58 localhost podman[80115]: 2025-11-23 08:20:58.454423497 +0000 UTC m=+0.631870465 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, url=https://www.redhat.com) Nov 23 03:20:58 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:20:58 localhost systemd[1]: tmp-crun.GxVP5J.mount: Deactivated successfully. Nov 23 03:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:21:00 localhost systemd[1]: tmp-crun.otUF1l.mount: Deactivated successfully. Nov 23 03:21:00 localhost podman[80202]: 2025-11-23 08:21:00.909739315 +0000 UTC m=+0.094414237 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4) Nov 23 03:21:00 localhost podman[80202]: 2025-11-23 08:21:00.935562372 +0000 UTC m=+0.120237304 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, architecture=x86_64, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, container_name=ovn_controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:21:00 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:21:00 localhost podman[80201]: 2025-11-23 08:21:00.952780794 +0000 UTC m=+0.137239679 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, url=https://www.redhat.com) Nov 23 03:21:01 localhost podman[80203]: 2025-11-23 08:21:01.011396824 +0000 UTC m=+0.190501194 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 03:21:01 localhost podman[80203]: 2025-11-23 08:21:01.055311051 +0000 UTC m=+0.234415431 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, managed_by=tripleo_ansible) Nov 23 03:21:01 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:21:01 localhost podman[80201]: 2025-11-23 08:21:01.147318722 +0000 UTC m=+0.331777647 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:21:01 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:21:21 localhost python3[80370]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 23 03:21:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:21:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
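The ansible-ansible.legacy.dnf entry above records the complete argument set the module was invoked with; only name=['sos'] and state=latest differ from the defaults it echoes. A minimal sketch, assuming nothing beyond the logged text itself, for splitting such an "Invoked with" line back into a key/value mapping when reviewing these entries (the regular expression and helper name are illustrative, not part of any tooling referenced in this log):

import ast
import re

def parse_invoked_with(entry: str) -> dict:
    """Map the arguments recorded after 'Invoked with' back to Python values."""
    args_text = entry.split("Invoked with", 1)[1]
    pairs = re.findall(r"(\w+)=(\[[^\]]*\]|\S+)", args_text)
    parsed = {}
    for key, raw in pairs:
        try:
            parsed[key] = ast.literal_eval(raw)   # handles ['sos'], True, None, 30
        except (ValueError, SyntaxError):
            parsed[key] = raw                     # bare words such as latest or auto
    return parsed

example = ("ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest "
           "allow_downgrade=False lock_timeout=30 use_backend=auto conf_file=None")
print(parse_invoked_with(example))
# {'name': ['sos'], 'state': 'latest', 'allow_downgrade': False, 'lock_timeout': 30, 'use_backend': 'auto', 'conf_file': None}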
Nov 23 03:21:23 localhost podman[80373]: 2025-11-23 08:21:23.917181981 +0000 UTC m=+0.096102589 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, container_name=nova_compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:21:23 localhost podman[80373]: 2025-11-23 08:21:23.948329223 +0000 UTC m=+0.127249861 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, container_name=nova_compute, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step5) Nov 23 03:21:23 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
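Each event in these healthcheck cycles carries a config_data label: the container definition rendered as a Python-style dict, whose 'healthcheck' key declares the test command behind the health_status=healthy results logged above. A trimmed copy of the nova_compute entry, with values taken verbatim from the event above, just to show that structure:

# Trimmed from the nova_compute config_data above; the environment, ulimit and
# volumes entries are omitted here for brevity.
nova_compute = {
    'depends_on': ['tripleo_nova_libvirt.target'],
    'healthcheck': {'test': '/openstack/healthcheck 5672'},
    'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1',
    'ipc': 'host',
    'net': 'host',
    'privileged': True,
    'restart': 'always',
    'start_order': 3,
    'user': 'nova',
}

# The transient "podman healthcheck run" units above execute the container's
# configured healthcheck; a zero exit from this test is what gets reported as
# health_status=healthy.
print(nova_compute['healthcheck']['test'])   # /openstack/healthcheck 5672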
Nov 23 03:21:23 localhost podman[80372]: 2025-11-23 08:21:23.95925581 +0000 UTC m=+0.138663554 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Nov 23 03:21:23 localhost podman[80372]: 2025-11-23 08:21:23.996463979 +0000 UTC m=+0.175871693 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:21:24 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:21:25 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 03:21:25 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 03:21:25 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 03:21:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:21:25 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 23 03:21:25 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 03:21:25 localhost systemd[1]: run-r73044534b52f43f99d3c117b9c6a29d5.service: Deactivated successfully. Nov 23 03:21:25 localhost systemd[1]: run-r94567544b965402788777330e8081a74.service: Deactivated successfully. 
Nov 23 03:21:25 localhost podman[80564]: 2025-11-23 08:21:25.791549728 +0000 UTC m=+0.086615166 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public) Nov 23 03:21:25 localhost podman[80564]: 2025-11-23 08:21:25.801686751 +0000 UTC m=+0.096752169 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Nov 23 03:21:25 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:21:28 localhost podman[80584]: 2025-11-23 08:21:28.910570193 +0000 UTC m=+0.089219274 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, config_id=tripleo_step4) Nov 23 03:21:28 localhost podman[80586]: 2025-11-23 08:21:28.969350568 +0000 UTC m=+0.145303537 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, 
tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git) Nov 23 03:21:29 localhost podman[80585]: 2025-11-23 08:21:29.014012448 +0000 UTC m=+0.191428002 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12) Nov 23 03:21:29 localhost podman[80587]: 2025-11-23 08:21:29.073414703 +0000 UTC m=+0.243322494 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 23 03:21:29 localhost podman[80584]: 2025-11-23 08:21:29.092877853 +0000 UTC m=+0.271526904 container 
exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:11:48Z) Nov 23 03:21:29 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:21:29 localhost podman[80587]: 2025-11-23 08:21:29.110355304 +0000 UTC m=+0.280263045 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Nov 23 03:21:29 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:21:29 localhost podman[80585]: 2025-11-23 08:21:29.144676943 +0000 UTC m=+0.322092507 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:21:29 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:21:29 localhost podman[80586]: 2025-11-23 08:21:29.341313116 +0000 UTC m=+0.517266125 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20251118.1) Nov 23 03:21:29 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:21:31 localhost systemd[1]: tmp-crun.wlxjcv.mount: Deactivated successfully. 
Nov 23 03:21:31 localhost podman[80681]: 2025-11-23 08:21:31.902805893 +0000 UTC m=+0.089983060 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, container_name=ovn_controller, architecture=x86_64, vcs-type=git) Nov 23 03:21:31 localhost systemd[1]: tmp-crun.rn5fcQ.mount: Deactivated successfully. 
Nov 23 03:21:31 localhost podman[80681]: 2025-11-23 08:21:31.95450444 +0000 UTC m=+0.141681587 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, distribution-scope=public, release=1761123044, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 23 03:21:31 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:21:32 localhost podman[80680]: 2025-11-23 08:21:31.95642773 +0000 UTC m=+0.144406031 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:21:32 localhost podman[80682]: 2025-11-23 08:21:32.013685718 +0000 UTC m=+0.195049255 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent) Nov 23 03:21:32 localhost podman[80682]: 2025-11-23 08:21:32.083840584 +0000 UTC m=+0.265204121 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z) Nov 23 03:21:32 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:21:32 localhost podman[80680]: 2025-11-23 08:21:32.150132682 +0000 UTC m=+0.338111003 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12) Nov 23 03:21:32 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:21:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:21:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:21:54 localhost systemd[1]: tmp-crun.IflLNt.mount: Deactivated successfully. 
Nov 23 03:21:54 localhost podman[80802]: 2025-11-23 08:21:54.910703493 +0000 UTC m=+0.100323479 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step3, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:21:54 localhost podman[80803]: 2025-11-23 08:21:54.962591175 +0000 UTC m=+0.147317351 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Nov 23 03:21:54 localhost podman[80802]: 2025-11-23 08:21:54.97343782 +0000 UTC m=+0.163057876 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, 
name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 23 03:21:54 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:21:54 localhost podman[80803]: 2025-11-23 08:21:54.994359856 +0000 UTC m=+0.179086002 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Nov 23 03:21:55 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:21:55 localhost systemd[1]: tmp-crun.49MAy8.mount: Deactivated successfully. 
Nov 23 03:21:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:21:56 localhost podman[80845]: 2025-11-23 08:21:56.012626464 +0000 UTC m=+0.080769656 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, container_name=iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 23 03:21:56 localhost podman[80845]: 2025-11-23 08:21:56.046294013 +0000 UTC m=+0.114437195 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step3, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1) Nov 23 03:21:56 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:21:59 localhost podman[80872]: 2025-11-23 08:21:59.909055249 +0000 UTC m=+0.084212172 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, vcs-type=git, release=1761123044, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:21:59 localhost podman[80872]: 2025-11-23 08:21:59.939699605 +0000 UTC m=+0.114856498 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, container_name=logrotate_crond, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12) Nov 23 03:21:59 localhost podman[80865]: 2025-11-23 08:21:59.946970389 +0000 UTC m=+0.128629453 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 23 03:21:59 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:21:59 localhost podman[80865]: 2025-11-23 08:21:59.969864256 +0000 UTC m=+0.151523360 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, container_name=ceilometer_agent_ipmi, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:21:59 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:22:00 localhost podman[80866]: 2025-11-23 08:22:00.011384609 +0000 UTC m=+0.188557454 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=nova_migration_target, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:22:00 localhost podman[80864]: 2025-11-23 08:22:00.07587968 +0000 UTC m=+0.261856947 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, architecture=x86_64) Nov 23 03:22:00 localhost podman[80864]: 2025-11-23 08:22:00.128946629 +0000 UTC m=+0.314923916 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4) Nov 23 03:22:00 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:22:00 localhost podman[80866]: 2025-11-23 08:22:00.420336528 +0000 UTC m=+0.597509423 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Nov 23 03:22:00 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:22:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:22:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4944 writes, 22K keys, 4944 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4944 writes, 570 syncs, 8.67 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 03:22:02 localhost python3[80976]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 03:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:22:02 localhost podman[80979]: 2025-11-23 08:22:02.907779169 +0000 UTC m=+0.092124006 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=) Nov 23 03:22:02 localhost podman[80981]: 2025-11-23 08:22:02.968439913 +0000 UTC m=+0.146179365 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 03:22:03 localhost podman[80981]: 2025-11-23 08:22:03.019755218 +0000 UTC m=+0.197494740 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:22:03 localhost podman[80980]: 2025-11-23 08:22:03.019727807 +0000 UTC m=+0.199091750 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller) Nov 23 03:22:03 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:22:03 localhost podman[80980]: 2025-11-23 08:22:03.101835022 +0000 UTC m=+0.281198985 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Nov 23 03:22:03 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:22:03 localhost podman[80979]: 2025-11-23 08:22:03.165434557 +0000 UTC m=+0.349779384 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com) Nov 23 03:22:03 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:22:05 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 03:22:05 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
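The health_status / exec_died pairs above come from transient systemd units that wrap "/usr/bin/podman healthcheck run" for each container. A minimal Python sketch of how the current health state of those containers could be read back out of podman; the container names are taken from the log, and the State.Health vs. State.Healthcheck key name varies with the podman version, so both are tried (this is an illustrative assumption, not taken from the log itself).

    """Minimal sketch: query the current health state of containers seen in the log."""
    import json
    import subprocess

    CONTAINERS = ["metrics_qdr", "ovn_controller", "ovn_metadata_agent"]  # names from the log above

    def health_state(name):
        # `podman inspect` prints a JSON array with one object per container.
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(out)[0].get("State", {})
        # Newer podman exposes the Docker-compatible State.Health block; older
        # builds used State.Healthcheck -- accept whichever key is present.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    if __name__ == "__main__":
        for name in CONTAINERS:
            print(name, "->", health_state(name))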
Nov 23 03:22:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:22:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4667 writes, 21K keys, 4667 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4667 writes, 461 syncs, 10.12 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 03:22:16 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:22:16 localhost recover_tripleo_nova_virtqemud[81257]: 62093 Nov 23 03:22:16 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:22:16 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:22:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:22:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:22:25 localhost systemd[1]: tmp-crun.Pe84rl.mount: Deactivated successfully. Nov 23 03:22:25 localhost podman[81320]: 2025-11-23 08:22:25.953736516 +0000 UTC m=+0.138148834 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, config_id=tripleo_step3, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:22:25 localhost podman[81321]: 2025-11-23 08:22:25.920479262 +0000 UTC m=+0.104500747 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Nov 23 03:22:25 localhost podman[81320]: 2025-11-23 08:22:25.997395381 +0000 UTC m=+0.181807699 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Nov 23 03:22:26 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
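The ceph-osd rocksdb "DUMPING STATS" entry above arrives as a single line because the syslog pipeline octal-escapes control characters: #012 is a line feed and #011 a tab. A minimal sketch that expands those escapes so the stats block reads as the multi-line report it originally was; the SAMPLE string is abbreviated from the log entry.

    """Minimal sketch: expand '#NNN' octal escapes (e.g. #012 = LF) in a log message."""
    import re

    SAMPLE = ("rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **"
              "#012Uptime(secs): 2400.1 total, 600.0 interval"
              "#012Cumulative writes: 4667 writes, 21K keys, 4667 commit groups")

    def unescape_octal(text):
        # '#' followed by exactly three octal digits encodes one control character.
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), text)

    if __name__ == "__main__":
        print(unescape_octal(SAMPLE))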
Nov 23 03:22:26 localhost podman[81321]: 2025-11-23 08:22:26.054794805 +0000 UTC m=+0.238816290 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:22:26 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:22:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:22:26 localhost podman[81366]: 2025-11-23 08:22:26.179696565 +0000 UTC m=+0.082270816 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, name=rhosp17/openstack-iscsid, release=1761123044, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=) Nov 23 03:22:26 localhost podman[81366]: 2025-11-23 08:22:26.189051847 +0000 UTC m=+0.091626128 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:22:26 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
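Each podman event above carries its attributes as a flat key=value list in parentheses; most values are plain tokens, but config_data embeds a dict whose value contains commas and braces, so naive comma-splitting breaks. A minimal sketch that extracts only simple keys (container_name, health_status, config_id); the LINE constant is an abbreviated copy of the iscsid event above, with the container ID shortened.

    """Minimal sketch: pull simple key=value attributes out of a podman event line."""
    import re

    LINE = ("container health_status a36875cf2511... (image=registry.redhat.io/"
            "rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, "
            "config_id=tripleo_step3, container_name=iscsid)")

    def simple_attr(line, key):
        # Match 'key=value' where the value runs up to the next comma or parenthesis;
        # keys whose values embed commas (config_data) are deliberately not handled.
        m = re.search(rf"\b{re.escape(key)}=([^,()]*)", line)
        return m.group(1).strip() if m else None

    if __name__ == "__main__":
        for key in ("container_name", "health_status", "config_id"):
            print(key, "=", simple_attr(LINE, key))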
Nov 23 03:22:30 localhost podman[81385]: 2025-11-23 08:22:30.913390671 +0000 UTC m=+0.098326736 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, release=1761123044, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:22:30 localhost podman[81389]: 2025-11-23 08:22:30.962524898 +0000 UTC m=+0.140229088 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:22:30 localhost podman[81385]: 2025-11-23 08:22:30.967553874 +0000 UTC m=+0.152489919 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:22:30 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:22:30 localhost podman[81389]: 2025-11-23 08:22:30.997749881 +0000 UTC m=+0.175454091 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 
03:22:31 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:22:31 localhost podman[81386]: 2025-11-23 08:22:31.02632218 +0000 UTC m=+0.208714186 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:22:31 localhost podman[81387]: 2025-11-23 08:22:31.07397493 +0000 UTC m=+0.251203376 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, 
tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Nov 23 03:22:31 localhost podman[81386]: 2025-11-23 08:22:31.081502544 +0000 UTC m=+0.263894510 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com) Nov 23 03:22:31 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:22:31 localhost podman[81387]: 2025-11-23 08:22:31.466193197 +0000 UTC m=+0.643421693 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
url=https://www.redhat.com, container_name=nova_migration_target) Nov 23 03:22:31 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:22:31 localhost systemd[1]: tmp-crun.K5nkBm.mount: Deactivated successfully. Nov 23 03:22:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:22:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:22:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:22:33 localhost podman[81477]: 2025-11-23 08:22:33.888320821 +0000 UTC m=+0.077558431 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git) Nov 23 03:22:33 localhost systemd[1]: tmp-crun.GKISSu.mount: Deactivated successfully. 
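For the containers that appear twice in this excerpt (metrics_qdr, ovn_controller, ovn_metadata_agent), the health_status events recur roughly every 30 seconds. A minimal sketch, assuming the excerpt is saved to a plain-text file (the path below is hypothetical), that pairs each "container health_status" event's syslog timestamp with its container_name attribute and reports the per-container intervals.

    """Minimal sketch: estimate healthcheck cadence per container from the journal text."""
    import re
    from collections import defaultdict
    from datetime import datetime

    LOG_PATH = "compute-node.log"   # hypothetical file holding the excerpt above
    EVENT = re.compile(r"^(\w{3} +\d+ \d{2}:\d{2}:\d{2}) .*container health_status "
                       r".*\bcontainer_name=([^,()]+)")

    def intervals(path):
        seen = defaultdict(list)
        with open(path) as fh:
            for line in fh:
                m = EVENT.search(line)
                if not m:
                    continue
                # The syslog prefix carries no year; the default (1900) is fine for deltas.
                ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S")
                seen[m.group(2).strip()].append(ts)
        return {name: [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
                for name, ts in seen.items()}

    if __name__ == "__main__":
        for name, deltas in intervals(LOG_PATH).items():
            print(name, deltas)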
Nov 23 03:22:33 localhost podman[81478]: 2025-11-23 08:22:33.961045921 +0000 UTC m=+0.145492582 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 23 03:22:33 localhost podman[81478]: 2025-11-23 08:22:33.990522987 +0000 UTC m=+0.174969698 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:22:34 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:22:34 localhost podman[81479]: 2025-11-23 08:22:34.000314081 +0000 UTC m=+0.181723988 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO 
Team, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:22:34 localhost podman[81477]: 2025-11-23 08:22:34.083403753 +0000 UTC m=+0.272641363 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4) Nov 23 03:22:34 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
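Note: the entries above show the recurring healthcheck cycle for each TripleO container: systemd starts a transient unit that runs /usr/bin/podman healthcheck run <container-id>, podman logs a health_status event (health_status=healthy in every record here) followed by exec_died when the check process exits, and systemd then reports the transient <container-id>.service as deactivated. The check command comes from the healthcheck key in each container's config_data; some services pass a port argument to the script (for example /openstack/healthcheck 6642 for ovn_controller). A minimal sketch of inspecting the same state by hand, assuming standard podman and journalctl tooling on the host and using the metrics_qdr container named in the log:

    podman ps --filter name=metrics_qdr --format '{{.Names}} {{.Status}}'   # STATUS ends in "(healthy)" or "(unhealthy)"
    podman healthcheck run 2f9868e7d365                                     # exit code 0 means the configured check passed
    journalctl -u 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service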
Nov 23 03:22:34 localhost podman[81479]: 2025-11-23 08:22:34.134223181 +0000 UTC m=+0.315633098 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com) Nov 23 03:22:34 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:22:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:22:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:22:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:22:56 localhost podman[81598]: 2025-11-23 08:22:56.899587904 +0000 UTC m=+0.079767878 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 23 03:22:56 localhost podman[81598]: 2025-11-23 08:22:56.934386796 +0000 UTC m=+0.114566770 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044) Nov 23 03:22:56 localhost systemd[1]: tmp-crun.CMDNiG.mount: Deactivated successfully. Nov 23 03:22:56 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:22:56 localhost podman[81597]: 2025-11-23 08:22:56.967543307 +0000 UTC m=+0.151259151 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, build-date=2025-11-18T22:51:28Z) Nov 23 03:22:56 localhost podman[81597]: 2025-11-23 08:22:56.979374974 +0000 UTC m=+0.163090778 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, release=1761123044, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Nov 23 03:22:56 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
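Note: the volume lists in the collectd and iscsid entries above show the kolla configuration convention shared by all of these containers: a per-service JSON file from /var/lib/kolla/config_files/ on the host is mounted as /var/lib/kolla/config_files/config.json inside the container, the puppet-generated configuration tree is mounted read-only at /var/lib/kolla/config_files/src, and KOLLA_CONFIG_STRATEGY=COPY_ALWAYS has the kolla entrypoint copy that source tree into place on every container start (rather than only the first start). The two inputs for collectd can be inspected directly on the host, using the paths taken from the mounts above:

    cat /var/lib/kolla/config_files/collectd.json           # kolla copy instructions for this container
    ls /var/lib/config-data/puppet-generated/collectd/      # puppet-generated service configuration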
Nov 23 03:22:57 localhost podman[81599]: 2025-11-23 08:22:57.063530689 +0000 UTC m=+0.239484933 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, name=rhosp17/openstack-nova-compute, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:22:57 localhost podman[81599]: 2025-11-23 08:22:57.096235085 +0000 UTC m=+0.272189309 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute) Nov 23 03:22:57 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
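Note: each health_status/exec_died record carries the container's full TripleO definition in the config_data label (image, net, ipc, privileged, user, ulimits, volumes, healthcheck). The container itself is created by tripleo_ansible from that definition, so the following is only an approximate hand-written podman equivalent of the nova_compute config_data above, for illustration; the real invocation is generated by the deployment tooling and the volume list here is truncated (the full set is in the config_data label):

    podman run --detach --name nova_compute \
      --network host --ipc host --privileged --user nova --restart always \
      --ulimit nofile=131072 --ulimit memlock=67108864 \
      --health-cmd '/openstack/healthcheck 5672' \
      --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct \
      --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro \
      --volume /var/lib/nova:/var/lib/nova:shared \
      registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1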
Nov 23 03:22:58 localhost python3[81680]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 03:23:01 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 03:23:01 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:23:01 localhost podman[81807]: 2025-11-23 08:23:01.910381249 +0000 UTC m=+0.092567717 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, 
release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4) Nov 23 03:23:01 localhost podman[81806]: 2025-11-23 08:23:01.956183032 +0000 UTC m=+0.137237985 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:23:01 localhost podman[81807]: 2025-11-23 08:23:01.969565418 +0000 UTC m=+0.151751906 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, 
batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true) Nov 23 03:23:01 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:23:02 localhost podman[81809]: 2025-11-23 08:23:02.015603899 +0000 UTC m=+0.196426755 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=logrotate_crond, distribution-scope=public, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=) Nov 23 03:23:02 localhost podman[81806]: 2025-11-23 08:23:02.018409276 +0000 UTC m=+0.199464259 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:23:02 localhost podman[81809]: 2025-11-23 08:23:02.027344103 +0000 UTC m=+0.208167019 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, 
com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team) Nov 23 03:23:02 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:23:02 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:23:02 localhost podman[81808]: 2025-11-23 08:23:02.123322325 +0000 UTC m=+0.302438528 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4) Nov 23 03:23:02 localhost podman[81808]: 2025-11-23 08:23:02.521453384 +0000 UTC m=+0.700569557 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Nov 23 03:23:02 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:23:04 localhost podman[81907]: 2025-11-23 08:23:04.905457184 +0000 UTC m=+0.095051374 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Nov 23 03:23:04 localhost podman[81908]: 2025-11-23 08:23:04.965307494 +0000 UTC m=+0.148163694 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, vendor=Red 
Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container) Nov 23 03:23:04 localhost podman[81909]: 2025-11-23 08:23:04.936359274 +0000 UTC m=+0.115493239 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:23:04 localhost podman[81908]: 2025-11-23 08:23:04.996436631 +0000 UTC m=+0.179292881 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 23 03:23:05 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:23:05 localhost podman[81909]: 2025-11-23 08:23:05.026698691 +0000 UTC m=+0.205832696 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 23 03:23:05 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:23:05 localhost podman[81907]: 2025-11-23 08:23:05.141396385 +0000 UTC m=+0.330990535 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:23:05 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:23:27 localhost systemd[1]: tmp-crun.oiWErN.mount: Deactivated successfully. 
Nov 23 03:23:27 localhost podman[82169]: 2025-11-23 08:23:27.917641216 +0000 UTC m=+0.096958234 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Nov 23 03:23:27 localhost podman[82169]: 2025-11-23 08:23:27.949466074 +0000 UTC m=+0.128783092 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step5, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:23:27 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:23:28 localhost podman[82167]: 2025-11-23 08:23:28.007131436 +0000 UTC m=+0.186514616 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, container_name=collectd, tcib_managed=true) Nov 23 03:23:28 localhost podman[82167]: 2025-11-23 08:23:28.039300455 +0000 UTC m=+0.218683635 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, 
com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, build-date=2025-11-18T22:51:28Z, container_name=collectd, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:23:28 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:23:28 localhost podman[82168]: 2025-11-23 08:23:27.958454863 +0000 UTC m=+0.137679268 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., container_name=iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 23 03:23:28 localhost podman[82168]: 2025-11-23 08:23:28.09157514 +0000 UTC m=+0.270799495 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:23:28 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:23:28 localhost systemd[1]: tmp-crun.n8fe5S.mount: Deactivated successfully. Nov 23 03:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:23:32 localhost podman[82233]: 2025-11-23 08:23:32.899845861 +0000 UTC m=+0.080521694 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Nov 23 03:23:32 localhost podman[82233]: 2025-11-23 08:23:32.952458105 +0000 UTC m=+0.133133888 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Nov 23 03:23:32 localhost podman[82232]: 2025-11-23 08:23:32.954468038 +0000 UTC m=+0.138287598 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:23:32 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:23:33 localhost systemd[1]: tmp-crun.wVoyF0.mount: Deactivated successfully. Nov 23 03:23:33 localhost podman[82234]: 2025-11-23 08:23:33.010012224 +0000 UTC m=+0.188546920 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, 
build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Nov 23 03:23:33 localhost podman[82235]: 2025-11-23 08:23:33.057069425 +0000 UTC m=+0.232035610 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, url=https://www.redhat.com, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 23 03:23:33 localhost podman[82235]: 2025-11-23 08:23:33.064857787 +0000 UTC m=+0.239824022 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:23:33 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:23:33 localhost podman[82232]: 2025-11-23 08:23:33.088697339 +0000 UTC m=+0.272516909 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1) Nov 23 03:23:33 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:23:33 localhost podman[82234]: 2025-11-23 08:23:33.404422007 +0000 UTC m=+0.582956653 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, tcib_managed=true) Nov 23 03:23:33 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:23:34 localhost python3[82337]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Nov 23 03:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:23:35 localhost systemd[1]: tmp-crun.p8vAOE.mount: Deactivated successfully. 
Nov 23 03:23:35 localhost podman[82339]: 2025-11-23 08:23:35.953319391 +0000 UTC m=+0.138973459 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Nov 23 03:23:36 localhost podman[82340]: 2025-11-23 08:23:36.000550038 +0000 UTC m=+0.182506501 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, batch=17.1_20251118.1, config_id=tripleo_step4) Nov 23 03:23:36 localhost podman[82338]: 2025-11-23 08:23:35.921191193 +0000 UTC m=+0.110902267 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, distribution-scope=public, 
name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team) Nov 23 03:23:36 localhost podman[82339]: 2025-11-23 08:23:36.031879322 +0000 UTC m=+0.217533360 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ovn-controller) Nov 23 03:23:36 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:23:36 localhost podman[82340]: 2025-11-23 08:23:36.076442506 +0000 UTC m=+0.258398999 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:23:36 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:23:36 localhost podman[82338]: 2025-11-23 08:23:36.145226424 +0000 UTC m=+0.334937508 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr) Nov 23 03:23:36 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:23:58 localhost podman[82460]: 2025-11-23 08:23:58.905042404 +0000 UTC m=+0.088264283 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=collectd, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:23:58 localhost podman[82460]: 2025-11-23 08:23:58.917334007 +0000 UTC m=+0.100555876 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.expose-services=) Nov 23 03:23:58 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
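The healthcheck records above all follow the same cycle: systemd starts a transient /usr/bin/podman healthcheck run <container-id> unit, podman logs a container health_status event (health_status=healthy) and then an exec_died event for the same container, and the unit deactivates. As a readability aid only, here is a minimal Python sketch that reduces journal text in this exact format to (timestamp, container name, event, status) tuples; the regular expressions are assumptions tied to the layout shown above, not part of any tool referenced in the log.

import re
import sys

# Journal lines in this log look like (possibly several records fused on one physical line):
#   Nov 23 03:23:58 localhost podman[82460]: 2025-11-23 08:23:58.905042404 +0000 UTC m=+0.088264283
#   container health_status 82704bc9... (image=..., container_name=collectd, health_status=healthy, ...)
# The pattern assumes that exact layout and that the label list contains no nested ')'.
EVENT_RE = re.compile(
    r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
    r"m=\+\S+ container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
    r"\((?P<labels>.*?)\)"
)
NAME_RE = re.compile(r"container_name=([^,)]+)")
STATUS_RE = re.compile(r"health_status=([^,)]+)")

def parse_events(text):
    """Yield (timestamp, container_name, event, status) tuples from journal text."""
    for m in EVENT_RE.finditer(text):
        labels = m.group("labels")
        name = NAME_RE.search(labels)
        status = STATUS_RE.search(labels)
        yield (
            m.group("ts"),
            name.group(1) if name else m.group("cid")[:12],
            m.group("event"),
            status.group(1) if status else "",
        )

if __name__ == "__main__":
    for ts, name, event, status in parse_events(sys.stdin.read()):
        print(f"{ts}  {name:<26} {event:<14} {status}")

Fed this section on stdin, it prints one row per health_status/exec_died record, which makes the per-container check cadence easier to follow than the raw label blobs.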
Nov 23 03:23:59 localhost podman[82462]: 2025-11-23 08:23:59.00853178 +0000 UTC m=+0.185172655 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Nov 23 03:23:59 localhost podman[82461]: 2025-11-23 08:23:59.065633543 +0000 UTC m=+0.247943344 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
name=rhosp17/openstack-iscsid, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:23:59 localhost podman[82462]: 2025-11-23 08:23:59.070468514 +0000 UTC m=+0.247109359 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, container_name=nova_compute, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:23:59 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
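Each record above also embeds the container's config_data as a Python-style literal (image, net/pid/ipc namespaces, privileged flag, environment, healthcheck test, and bind mounts). Purely as an illustration of how to read those fields, the sketch below converts such a literal into an approximate podman run command line; the flag mapping is an assumption for readability and is not the exact invocation tripleo_ansible generates when it creates these containers.

import ast
import shlex

def config_to_podman_args(name: str, config_data: str) -> str:
    """Turn a logged config_data literal into an approximate 'podman run' command.

    Readability aid only: options not present in config_data (labels, cgroup
    settings, start_order handling, etc.) are ignored here.
    """
    cfg = ast.literal_eval(config_data)
    args = ["podman", "run", "--name", name, "--detach"]
    if cfg.get("net"):
        args += ["--net", cfg["net"]]
    if cfg.get("pid"):
        args += ["--pid", cfg["pid"]]
    if cfg.get("ipc"):
        args += ["--ipc", cfg["ipc"]]
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    if cfg.get("privileged"):
        args += ["--privileged"]
    if cfg.get("restart"):
        args += ["--restart", cfg["restart"]]
    for var, val in cfg.get("environment", {}).items():
        args += ["--env", f"{var}={val}"]
    health_cmd = cfg.get("healthcheck", {}).get("test")
    if health_cmd:
        args += ["--health-cmd", health_cmd]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    args.append(cfg["image"])
    return " ".join(shlex.quote(a) for a in args)

if __name__ == "__main__":
    # Trimmed excerpt of the iscsid config_data shown above.
    sample = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
              "'healthcheck': {'test': '/openstack/healthcheck'}, "
              "'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', "
              "'net': 'host', 'privileged': True, 'restart': 'always', "
              "'volumes': ['/etc/hosts:/etc/hosts:ro', '/dev:/dev']}")
    print(config_to_podman_args("iscsid", sample))

For the trimmed sample this prints roughly "podman run --name iscsid --detach --net host --privileged --restart always ... registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1", which is enough to see how the logged configuration maps onto the running container.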
Nov 23 03:23:59 localhost podman[82461]: 2025-11-23 08:23:59.105438951 +0000 UTC m=+0.287748742 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:23:59 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:24:03 localhost podman[82526]: 2025-11-23 08:24:03.918388567 +0000 UTC m=+0.092335860 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:24:03 localhost podman[82523]: 2025-11-23 08:24:03.948058978 +0000 UTC m=+0.132739445 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4) Nov 23 03:24:03 localhost podman[82524]: 2025-11-23 08:24:03.961153055 +0000 UTC m=+0.142041264 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:24:03 localhost podman[82524]: 2025-11-23 08:24:03.992354974 +0000 UTC m=+0.173243183 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, distribution-scope=public, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 23 03:24:04 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:24:04 localhost podman[82525]: 2025-11-23 08:24:04.011738147 +0000 UTC m=+0.190544011 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Nov 23 03:24:04 localhost podman[82526]: 2025-11-23 08:24:04.028161647 +0000 UTC m=+0.202108930 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com) Nov 23 03:24:04 localhost podman[82523]: 2025-11-23 08:24:04.035742563 +0000 UTC m=+0.220423080 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, distribution-scope=public, vcs-type=git, 
architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:24:04 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:24:04 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:24:04 localhost podman[82525]: 2025-11-23 08:24:04.387547663 +0000 UTC m=+0.566353497 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git) Nov 23 03:24:04 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:24:06 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:24:06 localhost recover_tripleo_nova_virtqemud[82638]: 62093 Nov 23 03:24:06 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:24:06 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:24:06 localhost podman[82619]: 2025-11-23 08:24:06.902617665 +0000 UTC m=+0.087183470 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO 
Team, vcs-type=git, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Nov 23 03:24:06 localhost podman[82620]: 2025-11-23 08:24:06.973833427 +0000 UTC m=+0.155766850 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, distribution-scope=public, container_name=ovn_controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Nov 23 03:24:06 localhost podman[82621]: 2025-11-23 08:24:06.930514441 +0000 UTC m=+0.108256895 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:24:07 localhost podman[82621]: 2025-11-23 08:24:07.0167295 +0000 UTC m=+0.194471994 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team) Nov 23 03:24:07 localhost podman[82620]: 2025-11-23 08:24:07.029364252 +0000 UTC m=+0.211297615 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z) Nov 23 03:24:07 localhost systemd[1]: 
f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:24:07 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:24:07 localhost podman[82619]: 2025-11-23 08:24:07.104585469 +0000 UTC m=+0.289151264 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, release=1761123044, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12) Nov 23 03:24:07 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:24:07 localhost systemd[1]: tmp-crun.mrLm0T.mount: Deactivated successfully. Nov 23 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:24:29 localhost podman[82773]: 2025-11-23 08:24:29.917559283 +0000 UTC m=+0.102181717 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, release=1761123044, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, container_name=collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:24:29 localhost podman[82773]: 2025-11-23 08:24:29.930170444 +0000 UTC m=+0.114792888 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T22:51:28Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:24:29 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:24:30 localhost podman[82775]: 2025-11-23 08:24:30.006914848 +0000 UTC m=+0.188072754 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git) Nov 23 03:24:30 localhost podman[82775]: 2025-11-23 08:24:30.043406722 +0000 UTC m=+0.224564668 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Nov 23 03:24:30 localhost systemd[1]: tmp-crun.wEuKwL.mount: Deactivated successfully. Nov 23 03:24:30 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:24:30 localhost podman[82774]: 2025-11-23 08:24:30.06134943 +0000 UTC m=+0.243785716 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=) Nov 23 03:24:30 localhost podman[82774]: 2025-11-23 08:24:30.076428358 +0000 UTC m=+0.258864604 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 23 03:24:30 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:24:34 localhost systemd[1]: session-34.scope: Deactivated successfully. Nov 23 03:24:34 localhost systemd[1]: session-34.scope: Consumed 19.232s CPU time. Nov 23 03:24:34 localhost systemd-logind[760]: Session 34 logged out. Waiting for processes to exit. Nov 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:24:34 localhost systemd-logind[760]: Removed session 34. Nov 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 03:24:34 localhost podman[82839]: 2025-11-23 08:24:34.516261532 +0000 UTC m=+0.101589437 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:24:34 localhost podman[82837]: 2025-11-23 08:24:34.491767721 +0000 UTC m=+0.087834720 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 23 03:24:34 localhost podman[82839]: 2025-11-23 08:24:34.549400052 +0000 UTC m=+0.134727967 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, release=1761123044, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64) Nov 23 03:24:34 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:24:34 localhost podman[82881]: 2025-11-23 08:24:34.608958552 +0000 UTC m=+0.092785434 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:24:34 localhost podman[82837]: 2025-11-23 08:24:34.626068024 +0000 
UTC m=+0.222135093 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64) Nov 23 03:24:34 localhost podman[82838]: 2025-11-23 08:24:34.637627553 +0000 UTC m=+0.231556765 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:24:34 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:24:34 localhost podman[82838]: 2025-11-23 08:24:34.665314183 +0000 UTC m=+0.259243405 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Nov 23 03:24:34 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:24:35 localhost podman[82881]: 2025-11-23 08:24:35.012107258 +0000 UTC m=+0.495934080 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public) Nov 23 03:24:35 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:24:35 localhost systemd[1]: tmp-crun.pZwwLM.mount: Deactivated successfully. Nov 23 03:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:24:37 localhost systemd[1]: tmp-crun.TWP6ny.mount: Deactivated successfully. 
Nov 23 03:24:37 localhost podman[82932]: 2025-11-23 08:24:37.913627918 +0000 UTC m=+0.096384496 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:24:37 localhost podman[82932]: 2025-11-23 08:24:37.968278975 +0000 UTC m=+0.151035563 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, release=1761123044) Nov 23 03:24:37 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:24:38 localhost podman[82933]: 2025-11-23 08:24:38.013618835 +0000 UTC m=+0.193794633 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, build-date=2025-11-19T00:14:25Z, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Nov 23 03:24:38 localhost podman[82931]: 2025-11-23 08:24:37.968131421 +0000 UTC m=+0.151914481 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, config_id=tripleo_step1, url=https://www.redhat.com, architecture=x86_64, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:24:38 localhost podman[82933]: 2025-11-23 08:24:38.086410745 +0000 UTC m=+0.266586563 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:24:38 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:24:38 localhost podman[82931]: 2025-11-23 08:24:38.189477738 +0000 UTC m=+0.373260788 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z) Nov 23 03:24:38 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:25:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:25:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:25:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:25:00 localhost podman[83052]: 2025-11-23 08:25:00.908623703 +0000 UTC m=+0.092218927 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, version=17.1.12, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:25:00 localhost podman[83053]: 2025-11-23 08:25:00.956864931 +0000 UTC m=+0.137296347 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, container_name=iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team) Nov 23 03:25:00 localhost podman[83053]: 2025-11-23 08:25:00.970253767 +0000 UTC m=+0.150685203 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3) Nov 23 03:25:00 localhost podman[83052]: 2025-11-23 08:25:00.974939823 +0000 UTC m=+0.158535027 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, container_name=collectd, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step3) Nov 23 03:25:00 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:25:01 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:25:01 localhost podman[83054]: 2025-11-23 08:25:01.068386616 +0000 UTC m=+0.243807556 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Nov 23 03:25:01 localhost podman[83054]: 2025-11-23 08:25:01.116695118 +0000 UTC m=+0.292116068 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1) Nov 23 03:25:01 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:25:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:25:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:25:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:25:04 localhost systemd[1]: tmp-crun.LFPj9q.mount: Deactivated successfully. Nov 23 03:25:04 localhost podman[83118]: 2025-11-23 08:25:04.913809773 +0000 UTC m=+0.101186925 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, tcib_managed=true, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:25:04 localhost podman[83119]: 2025-11-23 08:25:04.967091758 +0000 UTC m=+0.149346161 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, 
build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:25:05 localhost podman[83120]: 2025-11-23 08:25:05.005972256 +0000 UTC m=+0.186730163 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:25:05 localhost podman[83120]: 2025-11-23 08:25:05.018415602 +0000 UTC m=+0.199173519 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-cron-container) Nov 23 03:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:25:05 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:25:05 localhost podman[83118]: 2025-11-23 08:25:05.073192295 +0000 UTC m=+0.260569447 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1) Nov 23 03:25:05 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:25:05 localhost podman[83119]: 2025-11-23 08:25:05.102520805 +0000 UTC m=+0.284775228 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi) Nov 23 03:25:05 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:25:05 localhost podman[83191]: 2025-11-23 08:25:05.200457148 +0000 UTC m=+0.137997078 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, container_name=nova_migration_target, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:25:05 localhost podman[83191]: 2025-11-23 08:25:05.572483177 +0000 UTC m=+0.510023077 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, tcib_managed=true) Nov 23 03:25:05 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:25:09 localhost systemd[1]: tmp-crun.8GQQFe.mount: Deactivated successfully. 
Nov 23 03:25:09 localhost podman[83214]: 2025-11-23 08:25:09.036617467 +0000 UTC m=+0.092379902 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=metrics_qdr, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:25:09 localhost systemd[1]: tmp-crun.LYsFvt.mount: Deactivated successfully. 
Nov 23 03:25:09 localhost podman[83215]: 2025-11-23 08:25:09.088099346 +0000 UTC m=+0.140048783 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, release=1761123044, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:25:09 localhost podman[83216]: 2025-11-23 08:25:09.131358199 +0000 UTC m=+0.180887170 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Nov 23 03:25:09 localhost podman[83215]: 2025-11-23 08:25:09.142417973 +0000 UTC m=+0.194367380 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:25:09 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:25:09 localhost podman[83216]: 2025-11-23 08:25:09.176331848 +0000 UTC m=+0.225860869 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:25:09 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: 
Deactivated successfully. Nov 23 03:25:09 localhost podman[83214]: 2025-11-23 08:25:09.277931434 +0000 UTC m=+0.333693879 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step1, maintainer=OpenStack TripleO Team) Nov 23 03:25:09 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:25:23 localhost podman[83387]: 2025-11-23 08:25:23.539893688 +0000 UTC m=+0.111622999 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container) Nov 23 03:25:23 localhost podman[83387]: 2025-11-23 08:25:23.640578596 +0000 UTC m=+0.212307857 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, version=7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, RELEASE=main, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 03:25:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:25:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:25:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:25:31 localhost systemd[1]: tmp-crun.JYA67d.mount: Deactivated successfully. 
Nov 23 03:25:31 localhost podman[83529]: 2025-11-23 08:25:31.958906433 +0000 UTC m=+0.139466054 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Nov 23 03:25:31 localhost podman[83529]: 2025-11-23 08:25:31.972292169 +0000 UTC m=+0.152851830 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:25:31 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:25:32 localhost podman[83530]: 2025-11-23 08:25:32.013713746 +0000 UTC m=+0.194634698 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 23 03:25:32 localhost podman[83530]: 2025-11-23 08:25:32.02830935 +0000 UTC m=+0.209230252 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, container_name=iscsid, release=1761123044, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack 
Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com) Nov 23 03:25:32 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:25:32 localhost podman[83531]: 2025-11-23 08:25:31.928058535 +0000 UTC m=+0.108538964 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step5, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute) Nov 23 03:25:32 localhost podman[83531]: 2025-11-23 08:25:32.111532256 +0000 UTC m=+0.292012635 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step5) Nov 23 03:25:32 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:25:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:25:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:25:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:25:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:25:35 localhost systemd[1]: tmp-crun.Ao8KBb.mount: Deactivated successfully. 
Nov 23 03:25:35 localhost podman[83590]: 2025-11-23 08:25:35.9205793 +0000 UTC m=+0.102874887 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:25:35 localhost systemd[1]: tmp-crun.8KapuA.mount: Deactivated successfully. 
Nov 23 03:25:35 localhost podman[83591]: 2025-11-23 08:25:35.964039291 +0000 UTC m=+0.142709105 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:25:36 localhost podman[83593]: 2025-11-23 08:25:36.015183299 +0000 UTC m=+0.189574591 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 23 03:25:36 localhost podman[83591]: 2025-11-23 08:25:36.023447216 +0000 UTC m=+0.202116980 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 23 03:25:36 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:25:36 localhost podman[83592]: 2025-11-23 08:25:36.059439665 +0000 UTC m=+0.236142938 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:25:36 localhost podman[83593]: 2025-11-23 08:25:36.077871407 +0000 UTC m=+0.252262749 container exec_died 
c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container) Nov 23 03:25:36 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:25:36 localhost podman[83590]: 2025-11-23 08:25:36.130289786 +0000 UTC m=+0.312585413 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, version=17.1.12, url=https://www.redhat.com, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:25:36 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:25:36 localhost podman[83592]: 2025-11-23 08:25:36.459375791 +0000 UTC m=+0.636079094 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, tcib_managed=true, distribution-scope=public, container_name=nova_migration_target, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Nov 23 03:25:36 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:25:39 localhost systemd[1]: tmp-crun.jAMGfQ.mount: Deactivated successfully. 
Nov 23 03:25:39 localhost podman[83686]: 2025-11-23 08:25:39.917372639 +0000 UTC m=+0.097549602 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr) Nov 23 03:25:39 localhost systemd[1]: tmp-crun.zEgiE2.mount: Deactivated successfully. 
Nov 23 03:25:39 localhost podman[83687]: 2025-11-23 08:25:39.974499953 +0000 UTC m=+0.152860740 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, release=1761123044) Nov 23 03:25:40 localhost podman[83687]: 2025-11-23 08:25:40.032526716 +0000 UTC m=+0.210887503 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Nov 23 03:25:40 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:25:40 localhost podman[83688]: 2025-11-23 08:25:40.033173146 +0000 UTC m=+0.206853318 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Nov 23 03:25:40 localhost podman[83688]: 2025-11-23 08:25:40.117469585 +0000 UTC m=+0.291149767 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12) Nov 23 03:25:40 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: 
Deactivated successfully. Nov 23 03:25:40 localhost podman[83686]: 2025-11-23 08:25:40.135446784 +0000 UTC m=+0.315623797 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044) Nov 23 03:25:40 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:25:41 localhost rhsm-service[6584]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 23 03:25:58 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:25:58 localhost recover_tripleo_nova_virtqemud[83984]: 62093 Nov 23 03:25:58 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:25:58 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:26:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:26:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:26:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:26:02 localhost systemd[1]: tmp-crun.3B4Tq1.mount: Deactivated successfully. Nov 23 03:26:02 localhost podman[83985]: 2025-11-23 08:26:02.904005904 +0000 UTC m=+0.091617317 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4) Nov 23 03:26:02 localhost podman[83987]: 2025-11-23 08:26:02.965645149 +0000 UTC m=+0.148261697 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, release=1761123044, architecture=x86_64, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container) Nov 23 03:26:03 localhost podman[83986]: 2025-11-23 08:26:03.003118344 +0000 UTC m=+0.188042833 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, release=1761123044, version=17.1.12, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4) Nov 23 03:26:03 localhost podman[83986]: 2025-11-23 08:26:03.038681738 +0000 UTC m=+0.223606197 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public) Nov 23 03:26:03 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:26:03 localhost podman[83987]: 2025-11-23 08:26:03.051548218 +0000 UTC m=+0.234164796 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, distribution-scope=public, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, version=17.1.12) Nov 23 03:26:03 localhost podman[83985]: 2025-11-23 08:26:03.07091028 +0000 UTC m=+0.258521713 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.) Nov 23 03:26:03 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
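Each container in the entries above goes through the same three-step cycle: systemd starts a /usr/bin/podman healthcheck run <container-id> unit, podman logs a health_status event (health_status=healthy here) followed by an exec_died event for the same container, and the per-container <id>.service unit then reports Deactivated successfully. The following is only a minimal sketch for tallying those health_status results per container from a saved dump like this one; the input file name (journal.log) is an assumption, and the pattern relies solely on the name= and health_status= fields visible above, assuming each journal entry sits on one (possibly very long) line.

#!/usr/bin/env python3
"""Tally podman health_status events per container from a journal dump."""
import re
import sys
from collections import Counter, defaultdict

# Matches e.g. "container health_status 82704bc9... (image=..., name=collectd, health_status=healthy, ..."
EVENT_RE = re.compile(
    r"container health_status [0-9a-f]{64} "
    r"\(image=[^,]+, name=(?P<name>[^,)]+), health_status=(?P<status>[^,)]+)"
)

def summarize(path: str) -> None:
    per_container: dict[str, Counter] = defaultdict(Counter)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for match in EVENT_RE.finditer(fh.read()):
            per_container[match.group("name")][match.group("status")] += 1
    for name in sorted(per_container):
        counts = ", ".join(f"{status}={n}" for status, n in per_container[name].most_common())
        print(f"{name}: {counts}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "journal.log")

On the node itself, the same text can be produced with journalctl --no-pager and fed to the script unchanged.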
Nov 23 03:26:03 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:26:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:26:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:26:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:26:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:26:06 localhost systemd[1]: tmp-crun.iLkZlM.mount: Deactivated successfully. Nov 23 03:26:06 localhost podman[84052]: 2025-11-23 08:26:06.89297578 +0000 UTC m=+0.079826991 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:26:06 localhost podman[84059]: 2025-11-23 08:26:06.910124373 +0000 UTC m=+0.085804147 container health_status 
c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, container_name=logrotate_crond, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4) Nov 23 03:26:06 localhost podman[84051]: 2025-11-23 08:26:06.953528262 +0000 UTC m=+0.138397571 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git) Nov 23 03:26:06 localhost podman[84052]: 2025-11-23 08:26:06.973614076 +0000 UTC m=+0.160465327 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, container_name=ceilometer_agent_ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:26:06 localhost podman[84051]: 2025-11-23 08:26:06.984160804 +0000 UTC m=+0.169030093 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:26:06 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:26:06 localhost podman[84059]: 2025-11-23 08:26:06.998388606 +0000 UTC m=+0.174068400 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:26:07 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:26:07 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
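Each podman event above also embeds the container definition in its config_data label: a Python-style literal carrying the image, the healthcheck test, the environment, and the full bind-mount list. Because it is a plain literal, it can be pulled out with a small brace-matching scan and parsed with ast.literal_eval, for example to see which host paths a container mounts without :ro. This is only a sketch under two stated assumptions: the quoted strings inside config_data contain no braces (true for the entries above), and each journal entry sits on a single line; the file name journal.log is again assumed.

#!/usr/bin/env python3
"""Extract and parse the config_data label from podman journal entries."""
import ast
import re
import sys

NAME_RE = re.compile(
    r"container (?:health_status|exec_died) [0-9a-f]{64} \(image=[^,]+, name=(?P<name>[^,)]+)"
)

def extract_config_data(entry: str):
    """Return the config_data dict embedded in one journal entry, or None."""
    start = entry.find("config_data={")
    if start == -1:
        return None
    i = entry.index("{", start)
    depth = 0
    for j in range(i, len(entry)):
        if entry[j] == "{":
            depth += 1
        elif entry[j] == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(entry[i : j + 1])
    return None

def report(path: str) -> None:
    seen = set()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            name_match = NAME_RE.search(line)
            config = extract_config_data(line)
            if not name_match or config is None or name_match.group("name") in seen:
                continue
            seen.add(name_match.group("name"))
            writable = [v for v in config.get("volumes", []) if not v.endswith(":ro")]
            print(f"{name_match.group('name')}: healthcheck={config.get('healthcheck')}, "
                  f"{len(writable)} mounts without :ro")

if __name__ == "__main__":
    report(sys.argv[1] if len(sys.argv) > 1 else "journal.log")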
Nov 23 03:26:07 localhost podman[84053]: 2025-11-23 08:26:07.062193498 +0000 UTC m=+0.241276587 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=nova_migration_target, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:26:07 localhost podman[84053]: 2025-11-23 08:26:07.469503043 +0000 UTC m=+0.648586162 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:26:07 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
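The healthcheck tests recorded in config_data are the commands these runs execute inside each container: most use /openstack/healthcheck, nova_compute passes a port argument ('/openstack/healthcheck 5672'), and logrotate_crond uses /usr/share/openstack-tripleo-common/healthcheck/cron. The same checks can be triggered on demand with podman healthcheck run, the command the units above wrap. A sketch along those lines, assuming it is run with sufficient privileges on this node and that the container names match the container_name= labels in the entries above:

#!/usr/bin/env python3
"""Re-run the configured podman health checks for the containers seen in the journal."""
import subprocess
import sys

# container_name= values taken from the journal entries above.
CONTAINERS = [
    "collectd",
    "iscsid",
    "nova_compute",
    "nova_migration_target",
    "ceilometer_agent_compute",
    "ceilometer_agent_ipmi",
    "logrotate_crond",
    "metrics_qdr",
    "ovn_controller",
    "ovn_metadata_agent",
]

def main() -> int:
    failures = 0
    for name in CONTAINERS:
        # podman healthcheck run exits 0 when the configured test passes;
        # any other exit code (unhealthy, missing container) counts as a failure here.
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        status = "healthy" if result.returncode == 0 else f"failed (rc={result.returncode})"
        print(f"{name}: {status}")
        failures += result.returncode != 0
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())

Instead of hard-coding the list, the running container names could also be taken from podman ps --format '{{.Names}}'.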
Nov 23 03:26:10 localhost podman[84151]: 2025-11-23 08:26:10.908933665 +0000 UTC m=+0.090893665 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:26:10 localhost systemd[1]: tmp-crun.B4cUrt.mount: Deactivated successfully. 
Nov 23 03:26:10 localhost podman[84152]: 2025-11-23 08:26:10.962617463 +0000 UTC m=+0.143821489 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., release=1761123044) Nov 23 03:26:10 localhost podman[84152]: 2025-11-23 08:26:10.997516697 +0000 UTC m=+0.178720753 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 
6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller) Nov 23 03:26:11 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:26:11 localhost podman[84153]: 2025-11-23 08:26:11.016880219 +0000 UTC m=+0.193472282 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 23 03:26:11 localhost podman[84153]: 2025-11-23 08:26:11.088465253 +0000 UTC m=+0.265057336 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true) Nov 23 03:26:11 localhost systemd[1]: 
f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:26:11 localhost podman[84151]: 2025-11-23 08:26:11.121534361 +0000 UTC m=+0.303494351 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 03:26:11 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:26:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:26:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:26:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
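The entries above follow a fixed pattern: systemd starts a transient `/usr/bin/podman healthcheck run <container-id>` unit, podman logs a `health_status` event and then an `exec_died` event for that container, and the unit finally reports `Deactivated successfully`. As a reading aid for dumps like this one, here is a minimal Python sketch (not part of the deployment; the journal path is a placeholder) that pulls out the last reported health_status per container name, assuming each journal entry sits on a single line as journalctl writes it.

```python
import re
from collections import OrderedDict

# Matches podman container events like the ones above, e.g.
#   podman[84304]: ... container health_status 82704bc9... (image=..., name=collectd, health_status=healthy, ...)
EVENT_RE = re.compile(r"container health_status ([0-9a-f]{64})")
NAME_RE = re.compile(r"[(,] ?name=([^,)]+)")       # the container name label
STATUS_RE = re.compile(r"health_status=([^,)]+)")  # the reported status label

def last_health_status(journal_path):
    """Return {container_name: last reported health_status} from a saved journal dump."""
    result = OrderedDict()
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if not EVENT_RE.search(line):
                continue
            name = NAME_RE.search(line)
            status = STATUS_RE.search(line)
            if name and status:
                result[name.group(1)] = status.group(1)
    return result

if __name__ == "__main__":
    # Placeholder path; point this at the saved log file.
    for name, status in last_health_status("/var/log/messages").items():
        print(f"{name}: {status}")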
Nov 23 03:26:33 localhost podman[84304]: 2025-11-23 08:26:33.909210106 +0000 UTC m=+0.087186579 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=collectd, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git) Nov 23 03:26:33 localhost podman[84304]: 2025-11-23 08:26:33.917556556 +0000 UTC m=+0.095532979 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step3) Nov 23 03:26:33 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:26:34 localhost systemd[1]: tmp-crun.iZdkd5.mount: Deactivated successfully. 
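Each podman event above also carries a `config_data=` label whose value is printed as a Python-style dict (image, net/pid namespaces, healthcheck test, bind mounts, and so on). A small sketch, assuming that literal representation holds, for pulling the dict out of one event line with `ast.literal_eval`; the shortened example line mirrors the collectd entry above.

```python
import ast

def extract_config_data(event_line: str) -> dict:
    """Parse the config_data={...} label out of a podman event line.

    Assumes the value is printed as a Python dict literal with balanced
    braces, as in the entries above.
    """
    start = event_line.index("config_data=") + len("config_data=")
    if event_line[start] != "{":
        raise ValueError("config_data is not a dict literal")
    depth = 0
    for i in range(start, len(event_line)):
        ch = event_line[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(event_line[start : i + 1])
    raise ValueError("unbalanced braces in config_data")

# Shortened label string shaped like the collectd health_status entry above:
line = ("... container health_status 82704bc9... (image=..., name=collectd, "
        "config_data={'memory': '512m', 'net': 'host', 'volumes': "
        "['/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044)")
cfg = extract_config_data(line)
print(cfg["memory"], cfg["volumes"])
```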
Nov 23 03:26:34 localhost podman[84305]: 2025-11-23 08:26:34.013714623 +0000 UTC m=+0.189178868 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1) Nov 23 03:26:34 localhost podman[84305]: 2025-11-23 08:26:34.023528188 +0000 UTC m=+0.198992383 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step3, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 03:26:34 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:26:34 localhost podman[84306]: 2025-11-23 08:26:34.113410851 +0000 UTC m=+0.285218743 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 
'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12) Nov 23 03:26:34 localhost podman[84306]: 2025-11-23 08:26:34.171814326 +0000 UTC m=+0.343622238 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, container_name=nova_compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:26:34 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:26:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:26:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:26:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:26:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
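The healthcheck tests themselves are the ones listed in each container's config_data ('/openstack/healthcheck', with an extra port argument such as 5672 for nova_compute or 6642 for ovn_controller). The transient units simply execute `podman healthcheck run` against each container, so the same check can be driven by hand. A hedged sketch, assuming podman is on PATH and that a zero exit status corresponds to the "healthy" statuses recorded above.

```python
import subprocess

def run_healthcheck(container: str) -> bool:
    """Invoke `podman healthcheck run` for one container, the same command the
    transient systemd units above execute.

    Assumes podman is installed and that exit status 0 means the configured
    test (e.g. /openstack/healthcheck) passed.
    """
    proc = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True,
        text=True,
    )
    return proc.returncode == 0

if __name__ == "__main__":
    # Container names taken from the container_name labels in the log above.
    for name in ("collectd", "iscsid", "nova_compute"):
        state = "healthy" if run_healthcheck(name) else "unhealthy"
        print(f"{name}: {state}")
```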
Nov 23 03:26:37 localhost podman[84367]: 2025-11-23 08:26:37.908918056 +0000 UTC m=+0.083744613 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:26:37 localhost podman[84368]: 2025-11-23 08:26:37.961210201 +0000 UTC m=+0.134557821 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Nov 23 03:26:37 localhost podman[84367]: 2025-11-23 08:26:37.965460363 +0000 UTC m=+0.140286880 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.openshift.expose-services=) Nov 23 03:26:37 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:26:38 localhost podman[84369]: 2025-11-23 08:26:38.020939477 +0000 UTC m=+0.191628565 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:26:38 localhost podman[84370]: 2025-11-23 08:26:38.068227686 +0000 UTC m=+0.236933593 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:26:38 localhost podman[84370]: 2025-11-23 08:26:38.07928568 +0000 UTC m=+0.247991577 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.12, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com) Nov 23 03:26:38 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:26:38 localhost podman[84368]: 2025-11-23 08:26:38.095778882 +0000 UTC m=+0.269126482 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=) Nov 23 03:26:38 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:26:38 localhost podman[84369]: 2025-11-23 08:26:38.417455806 +0000 UTC m=+0.588144904 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute) Nov 23 03:26:38 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
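Because each check logs a health_status event and an exec_died event from the same podman process, the `m=+<seconds>` monotonic offsets give a rough per-check duration (for example, the ovn_metadata_agent check above goes from m=+0.193 to m=+0.265, about 72 ms). A sketch of computing that gap per container, again assuming one journal entry per line; the path is a placeholder.

```python
import re
from collections import defaultdict

EVENT_RE = re.compile(
    r"podman\[(?P<pid>\d+)\]: .* m=\+(?P<offset>[0-9.]+) container "
    r"(?P<event>health_status|exec_died) [0-9a-f]{64}"
)
NAME_RE = re.compile(r"[(,] ?name=([^,)]+)")

def check_durations(journal_path):
    """Gap between the health_status and exec_died events emitted by the same
    podman process (same PID), using the m=+<seconds> offsets shown above."""
    seen = defaultdict(dict)   # pid -> {event: offset in seconds}
    names = {}                 # pid -> container name
    durations = {}
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if not m:
                continue
            pid = m.group("pid")
            seen[pid][m.group("event")] = float(m.group("offset"))
            name = NAME_RE.search(line)
            if name:
                names[pid] = name.group(1)
            if "health_status" in seen[pid] and "exec_died" in seen[pid]:
                durations[names.get(pid, pid)] = (
                    seen[pid]["exec_died"] - seen[pid]["health_status"]
                )
                seen.pop(pid, None)
                names.pop(pid, None)
    return durations

if __name__ == "__main__":
    for name, secs in check_durations("/var/log/messages").items():
        print(f"{name}: {secs:.3f}s between health_status and exec_died")
```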
Nov 23 03:26:41 localhost podman[84461]: 2025-11-23 08:26:41.895882011 +0000 UTC m=+0.084373343 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr) Nov 23 03:26:41 localhost systemd[1]: tmp-crun.hD1nOW.mount: Deactivated successfully. 
Nov 23 03:26:41 localhost podman[84462]: 2025-11-23 08:26:41.957883197 +0000 UTC m=+0.143247072 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:26:42 localhost podman[84462]: 2025-11-23 08:26:42.008350635 +0000 UTC m=+0.193714500 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z) Nov 23 03:26:42 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:26:42 localhost podman[84461]: 2025-11-23 08:26:42.095359178 +0000 UTC m=+0.283850480 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z) Nov 23 03:26:42 
localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:26:42 localhost podman[84463]: 2025-11-23 08:26:42.105745601 +0000 UTC m=+0.288349640 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 23 03:26:42 localhost podman[84463]: 2025-11-23 08:26:42.182420803 +0000 UTC m=+0.365024842 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public) Nov 23 03:26:42 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:27:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:27:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:27:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:27:04 localhost podman[84581]: 2025-11-23 08:27:04.909296141 +0000 UTC m=+0.092413753 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:27:04 localhost systemd[1]: tmp-crun.2uXay5.mount: Deactivated successfully. 
Nov 23 03:27:04 localhost podman[84582]: 2025-11-23 08:27:04.957648843 +0000 UTC m=+0.138017369 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Nov 23 03:27:04 localhost podman[84581]: 2025-11-23 08:27:04.975147867 +0000 UTC m=+0.158265519 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64) Nov 23 03:27:04 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:27:04 localhost podman[84582]: 2025-11-23 08:27:04.995310653 +0000 UTC m=+0.175679229 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, version=17.1.12, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:27:05 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:27:05 localhost podman[84583]: 2025-11-23 08:27:05.068237229 +0000 UTC m=+0.241819974 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, container_name=nova_compute, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container) Nov 23 03:27:05 localhost podman[84583]: 2025-11-23 08:27:05.125724575 +0000 UTC m=+0.299307280 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Nov 23 03:27:05 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:27:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:27:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:27:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 03:27:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:27:08 localhost systemd[1]: tmp-crun.dWG7Cc.mount: Deactivated successfully. Nov 23 03:27:08 localhost podman[84643]: 2025-11-23 08:27:08.916447561 +0000 UTC m=+0.100419651 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:27:08 localhost podman[84643]: 2025-11-23 08:27:08.944656237 +0000 UTC m=+0.128628297 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 23 03:27:08 localhost podman[84646]: 2025-11-23 08:27:08.955611288 +0000 UTC m=+0.131803177 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond) Nov 23 03:27:08 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:27:08 localhost podman[84646]: 2025-11-23 08:27:08.968384454 +0000 UTC m=+0.144576383 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, 
architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:27:08 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:27:09 localhost podman[84644]: 2025-11-23 08:27:09.01458038 +0000 UTC m=+0.195186276 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Nov 23 03:27:09 localhost podman[84645]: 2025-11-23 08:27:09.066975727 +0000 UTC m=+0.243202737 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:27:09 localhost podman[84644]: 2025-11-23 08:27:09.095336858 +0000 UTC m=+0.275942684 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc.) Nov 23 03:27:09 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:27:09 localhost podman[84645]: 2025-11-23 08:27:09.427690294 +0000 UTC m=+0.603917344 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Nov 23 03:27:09 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:27:12 localhost podman[84737]: 2025-11-23 08:27:12.890697239 +0000 UTC m=+0.079565993 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, container_name=metrics_qdr, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:27:12 localhost systemd[1]: tmp-crun.M1owXV.mount: Deactivated successfully. 
Nov 23 03:27:12 localhost podman[84739]: 2025-11-23 08:27:12.948288159 +0000 UTC m=+0.132552620 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 23 03:27:12 localhost podman[84739]: 2025-11-23 08:27:12.989294652 +0000 UTC m=+0.173559113 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, release=1761123044, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:27:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:27:13 localhost podman[84738]: 2025-11-23 08:27:13.007004393 +0000 UTC m=+0.193493373 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 23 03:27:13 localhost podman[84738]: 2025-11-23 08:27:13.029350877 +0000 UTC m=+0.215839857 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:27:13 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:27:13 localhost podman[84737]: 2025-11-23 08:27:13.091600811 +0000 UTC m=+0.280469635 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12) Nov 23 03:27:13 localhost 
systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:27:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:27:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:27:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:27:35 localhost podman[84892]: 2025-11-23 08:27:35.905168353 +0000 UTC m=+0.093077104 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, vcs-type=git, build-date=2025-11-18T22:51:28Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4) Nov 23 03:27:35 localhost podman[84892]: 2025-11-23 08:27:35.916105692 +0000 UTC m=+0.104014503 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:27:35 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:27:36 localhost podman[84893]: 2025-11-23 08:27:36.003919911 +0000 UTC m=+0.185893808 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044) Nov 23 03:27:36 localhost podman[84893]: 2025-11-23 08:27:36.044412508 +0000 UTC m=+0.226386355 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:27:36 localhost podman[84894]: 2025-11-23 08:27:36.056744332 +0000 UTC m=+0.237067567 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1) Nov 23 03:27:36 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:27:36 localhost podman[84894]: 2025-11-23 08:27:36.115403645 +0000 UTC m=+0.295726870 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:27:36 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:27:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:27:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:27:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:27:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:27:39 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:27:39 localhost recover_tripleo_nova_virtqemud[84982]: 62093 Nov 23 03:27:39 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:27:39 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 23 03:27:39 localhost podman[84959]: 2025-11-23 08:27:39.90865766 +0000 UTC m=+0.092269038 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Nov 23 03:27:39 localhost podman[84957]: 2025-11-23 08:27:39.883841539 +0000 UTC m=+0.073105893 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ceilometer_agent_compute, vcs-type=git, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:27:39 localhost podman[84958]: 2025-11-23 08:27:39.946218337 +0000 UTC m=+0.131652022 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:27:40 localhost podman[84960]: 2025-11-23 08:27:40.00749349 +0000 UTC m=+0.191109088 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Nov 23 03:27:40 localhost podman[84957]: 2025-11-23 08:27:40.020243436 +0000 UTC m=+0.209507770 container exec_died 
131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044) Nov 23 03:27:40 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:27:40 localhost podman[84960]: 2025-11-23 08:27:40.043408187 +0000 UTC m=+0.227023775 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:27:40 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:27:40 localhost podman[84958]: 2025-11-23 08:27:40.075460762 +0000 UTC m=+0.260894507 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4) Nov 23 03:27:40 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:27:40 localhost podman[84959]: 2025-11-23 08:27:40.290410521 +0000 UTC m=+0.474021879 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc.) Nov 23 03:27:40 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:27:42 localhost sshd[85056]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:27:43 localhost systemd[1]: tmp-crun.xYb7Jx.mount: Deactivated successfully. 
Nov 23 03:27:43 localhost podman[85058]: 2025-11-23 08:27:43.906639016 +0000 UTC m=+0.089294835 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Nov 23 03:27:43 localhost systemd[1]: tmp-crun.JA9WHx.mount: Deactivated successfully. 
Nov 23 03:27:43 localhost podman[85059]: 2025-11-23 08:27:43.955123162 +0000 UTC m=+0.137759580 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Nov 23 03:27:44 localhost podman[85060]: 2025-11-23 08:27:44.018206993 +0000 UTC m=+0.197670853 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:27:44 localhost podman[85059]: 2025-11-23 08:27:44.032970971 +0000 UTC m=+0.215607419 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4) Nov 23 03:27:44 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:27:44 localhost podman[85060]: 2025-11-23 08:27:44.100160509 +0000 UTC m=+0.279624359 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, container_name=ovn_metadata_agent, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com) Nov 23 03:27:44 localhost systemd[1]: 
f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:27:44 localhost podman[85058]: 2025-11-23 08:27:44.130422729 +0000 UTC m=+0.313078468 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, release=1761123044, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12) Nov 23 03:27:44 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:27:44 localhost systemd[1]: tmp-crun.mMNDvW.mount: Deactivated successfully. Nov 23 03:28:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:28:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:28:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:28:06 localhost podman[85183]: 2025-11-23 08:28:06.911752815 +0000 UTC m=+0.093659512 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Nov 23 03:28:06 localhost podman[85183]: 2025-11-23 08:28:06.946626048 +0000 UTC m=+0.128532765 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container) Nov 23 03:28:06 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:28:06 localhost podman[85182]: 2025-11-23 08:28:06.960496669 +0000 UTC m=+0.144997706 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, version=17.1.12, container_name=collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 23 03:28:06 localhost podman[85182]: 2025-11-23 08:28:06.99625667 +0000 UTC m=+0.180757677 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Nov 23 03:28:07 localhost systemd[1]: tmp-crun.JngSku.mount: Deactivated successfully. Nov 23 03:28:07 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:28:07 localhost podman[85184]: 2025-11-23 08:28:07.015355754 +0000 UTC m=+0.195563848 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, container_name=nova_compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Nov 23 03:28:07 localhost podman[85184]: 2025-11-23 08:28:07.047409919 +0000 UTC m=+0.227617983 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64) Nov 23 03:28:07 localhost systemd[1]: 
bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:28:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:28:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:28:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:28:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:28:10 localhost systemd[1]: tmp-crun.MippTO.mount: Deactivated successfully. Nov 23 03:28:10 localhost podman[85248]: 2025-11-23 08:28:10.905408925 +0000 UTC m=+0.090377979 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:28:10 localhost podman[85249]: 2025-11-23 
08:28:10.962352275 +0000 UTC m=+0.144774120 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Nov 23 03:28:10 localhost podman[85249]: 2025-11-23 08:28:10.999694385 +0000 UTC m=+0.182116280 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Nov 23 03:28:11 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:28:11 localhost podman[85251]: 2025-11-23 08:28:11.032085361 +0000 UTC m=+0.206663362 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-cron, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:28:11 localhost podman[85248]: 2025-11-23 08:28:11.039828061 +0000 UTC m=+0.224797105 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z) Nov 23 03:28:11 localhost podman[85250]: 2025-11-23 08:28:10.939170924 +0000 UTC m=+0.112134815 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, 
com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, release=1761123044) Nov 23 03:28:11 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:28:11 localhost podman[85251]: 2025-11-23 08:28:11.0683952 +0000 UTC m=+0.242973231 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Nov 23 03:28:11 localhost systemd[1]: 
c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:28:11 localhost podman[85250]: 2025-11-23 08:28:11.327614213 +0000 UTC m=+0.500578044 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target) Nov 23 03:28:11 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:28:14 localhost podman[85339]: 2025-11-23 08:28:14.886353202 +0000 UTC m=+0.072090830 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12) Nov 23 03:28:14 localhost podman[85339]: 2025-11-23 08:28:14.913282159 +0000 UTC m=+0.099019757 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_controller, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container) Nov 23 03:28:14 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:28:15 localhost podman[85338]: 2025-11-23 08:28:15.003362757 +0000 UTC m=+0.191870932 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, 
config_id=tripleo_step1) Nov 23 03:28:15 localhost podman[85340]: 2025-11-23 08:28:15.048041156 +0000 UTC m=+0.228613644 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc.) 
Nov 23 03:28:15 localhost podman[85340]: 2025-11-23 08:28:15.089021579 +0000 UTC m=+0.269594087 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc.) Nov 23 03:28:15 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:28:15 localhost podman[85338]: 2025-11-23 08:28:15.200283786 +0000 UTC m=+0.388791941 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, release=1761123044, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1) Nov 23 03:28:15 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:28:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:28:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:28:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:28:37 localhost podman[85493]: 2025-11-23 08:28:37.906713877 +0000 UTC m=+0.092081212 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3) Nov 23 03:28:37 localhost podman[85493]: 2025-11-23 08:28:37.922033023 +0000 UTC m=+0.107400348 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, 
io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64) Nov 23 03:28:37 localhost podman[85495]: 2025-11-23 08:28:37.967605239 +0000 UTC m=+0.146779762 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, config_id=tripleo_step5, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, release=1761123044) Nov 23 03:28:38 localhost podman[85495]: 2025-11-23 08:28:38.001294006 +0000 UTC m=+0.180468569 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:28:38 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:28:38 localhost podman[85494]: 2025-11-23 08:28:38.016506148 +0000 UTC m=+0.197658911 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container) Nov 23 03:28:38 localhost podman[85494]: 2025-11-23 08:28:38.030328988 +0000 UTC m=+0.211481751 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, release=1761123044, vcs-type=git, config_id=tripleo_step3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 23 03:28:38 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:28:38 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:28:38 localhost systemd[1]: tmp-crun.zuhv4d.mount: Deactivated successfully. Nov 23 03:28:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:28:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:28:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:28:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:28:41 localhost systemd[1]: tmp-crun.X3Qrna.mount: Deactivated successfully. Nov 23 03:28:41 localhost podman[85559]: 2025-11-23 08:28:41.884501106 +0000 UTC m=+0.068547661 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true) Nov 23 03:28:41 localhost podman[85557]: 2025-11-23 08:28:41.942013473 +0000 UTC m=+0.129727452 container health_status 
131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Nov 23 03:28:41 localhost podman[85558]: 2025-11-23 08:28:41.91006895 +0000 UTC m=+0.092366880 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044) Nov 23 03:28:41 localhost podman[85557]: 2025-11-23 08:28:41.964900214 +0000 UTC m=+0.152614153 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, version=17.1.12) Nov 23 03:28:41 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:28:42 localhost podman[85560]: 2025-11-23 08:28:42.013856135 +0000 UTC m=+0.192498782 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, version=17.1.12, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 
03:28:42 localhost podman[85558]: 2025-11-23 08:28:42.03945197 +0000 UTC m=+0.221749850 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.expose-services=) Nov 23 03:28:42 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:28:42 localhost podman[85560]: 2025-11-23 08:28:42.055344474 +0000 UTC m=+0.233987151 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, name=rhosp17/openstack-cron, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container) Nov 23 03:28:42 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:28:42 localhost podman[85559]: 2025-11-23 08:28:42.280454438 +0000 UTC m=+0.464501063 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, version=17.1.12, url=https://www.redhat.com) Nov 23 03:28:42 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:28:45 localhost podman[85646]: 2025-11-23 08:28:45.904316551 +0000 UTC m=+0.090619507 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:28:45 localhost systemd[1]: tmp-crun.L16lTZ.mount: Deactivated successfully. 
Nov 23 03:28:45 localhost podman[85648]: 2025-11-23 08:28:45.969271889 +0000 UTC m=+0.147919567 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1) Nov 23 03:28:46 localhost podman[85647]: 2025-11-23 08:28:46.016655872 +0000 UTC m=+0.198050045 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:28:46 localhost podman[85647]: 2025-11-23 08:28:46.042523875 +0000 UTC m=+0.223918038 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, container_name=ovn_controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:28:46 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:28:46 localhost podman[85648]: 2025-11-23 08:28:46.054373913 +0000 UTC m=+0.233021551 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, distribution-scope=public, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, version=17.1.12, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) 
Nov 23 03:28:46 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:28:46 localhost podman[85646]: 2025-11-23 08:28:46.083571471 +0000 UTC m=+0.269874377 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr, config_id=tripleo_step1) Nov 23 03:28:46 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:28:46 localhost systemd[1]: tmp-crun.CB727v.mount: Deactivated successfully. Nov 23 03:29:01 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:29:01 localhost recover_tripleo_nova_virtqemud[85767]: 62093 Nov 23 03:29:01 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:29:01 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:29:09 localhost podman[85770]: 2025-11-23 08:29:09.097626361 +0000 UTC m=+0.090429530 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 
03:29:09 localhost systemd[1]: tmp-crun.WEgt7U.mount: Deactivated successfully. Nov 23 03:29:09 localhost podman[85768]: 2025-11-23 08:29:09.142889297 +0000 UTC m=+0.140738643 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Nov 23 03:29:09 localhost podman[85770]: 2025-11-23 08:29:09.158675328 +0000 UTC m=+0.151478457 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step5) Nov 23 03:29:09 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:29:09 localhost podman[85768]: 2025-11-23 08:29:09.177224234 +0000 UTC m=+0.175073530 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4) Nov 23 03:29:09 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:29:09 localhost podman[85769]: 2025-11-23 08:29:09.258963844 +0000 UTC m=+0.252997812 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Nov 23 03:29:09 localhost podman[85769]: 2025-11-23 08:29:09.29747479 +0000 UTC m=+0.291508718 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, io.openshift.expose-services=, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1) Nov 23 03:29:09 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:29:10 localhost systemd[1]: tmp-crun.VbCFrM.mount: Deactivated successfully. Nov 23 03:29:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:29:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:29:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:29:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:29:12 localhost podman[85831]: 2025-11-23 08:29:12.908196484 +0000 UTC m=+0.092113053 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, distribution-scope=public, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible) Nov 23 03:29:12 localhost systemd[1]: tmp-crun.KWXoEA.mount: Deactivated successfully. 
Nov 23 03:29:12 localhost podman[85834]: 2025-11-23 08:29:12.968719505 +0000 UTC m=+0.140579169 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=logrotate_crond, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z) Nov 23 03:29:12 localhost podman[85834]: 2025-11-23 08:29:12.976463176 +0000 UTC m=+0.148322840 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:29:13 localhost podman[85832]: 2025-11-23 08:29:13.01395975 +0000 UTC m=+0.191644515 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 23 03:29:13 localhost podman[85831]: 2025-11-23 08:29:13.03906214 +0000 UTC m=+0.222978719 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:11:48Z, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.) Nov 23 03:29:13 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:29:13 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:29:13 localhost podman[85832]: 2025-11-23 08:29:13.070621201 +0000 UTC m=+0.248305956 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:29:13 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:29:13 localhost podman[85833]: 2025-11-23 08:29:13.104933456 +0000 UTC m=+0.281248668 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-type=git, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible) Nov 23 03:29:13 localhost podman[85833]: 2025-11-23 08:29:13.480513176 +0000 UTC m=+0.656828438 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z) Nov 23 03:29:13 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:29:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:29:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:29:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:29:16 localhost systemd[1]: tmp-crun.TaCfiG.mount: Deactivated successfully. 
Nov 23 03:29:16 localhost podman[85926]: 2025-11-23 08:29:16.907536792 +0000 UTC m=+0.095953492 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:29:16 localhost systemd[1]: tmp-crun.QUp2a1.mount: Deactivated successfully. 
Nov 23 03:29:16 localhost podman[85927]: 2025-11-23 08:29:16.967214796 +0000 UTC m=+0.151247331 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:29:17 localhost podman[85927]: 2025-11-23 08:29:17.000858591 +0000 UTC m=+0.184891116 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4) Nov 23 03:29:17 localhost podman[85928]: 2025-11-23 08:29:17.011488401 +0000 UTC m=+0.192588364 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64) Nov 23 03:29:17 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:29:17 localhost podman[85928]: 2025-11-23 08:29:17.060443362 +0000 UTC m=+0.241543315 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, container_name=ovn_metadata_agent, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Nov 23 
03:29:17 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:29:17 localhost podman[85926]: 2025-11-23 08:29:17.098074632 +0000 UTC m=+0.286491342 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:29:17 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:29:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:29:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:29:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:29:39 localhost systemd[1]: tmp-crun.NjW7bW.mount: Deactivated successfully. 
Nov 23 03:29:39 localhost podman[86078]: 2025-11-23 08:29:39.909647867 +0000 UTC m=+0.097715647 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step3, name=rhosp17/openstack-collectd, container_name=collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12) Nov 23 03:29:39 localhost podman[86080]: 2025-11-23 08:29:39.933947421 +0000 UTC m=+0.118200363 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., container_name=nova_compute) Nov 23 03:29:39 localhost podman[86080]: 2025-11-23 08:29:39.962356215 +0000 UTC m=+0.146609147 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, release=1761123044, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:29:39 localhost podman[86078]: 2025-11-23 08:29:39.990420967 +0000 UTC m=+0.178488687 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Nov 23 03:29:39 localhost podman[86079]: 2025-11-23 08:29:39.998433265 +0000 UTC m=+0.183776060 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, version=17.1.12, container_name=iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 23 03:29:40 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:29:40 localhost podman[86079]: 2025-11-23 08:29:40.030845732 +0000 UTC m=+0.216188477 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:29:40 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:29:40 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:29:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:29:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:29:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:29:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:29:43 localhost podman[86144]: 2025-11-23 08:29:43.90558489 +0000 UTC m=+0.090772471 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_compute, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4) Nov 23 03:29:43 localhost podman[86144]: 2025-11-23 08:29:43.948353588 +0000 UTC m=+0.133541219 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc.) Nov 23 03:29:43 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:29:43 localhost podman[86146]: 2025-11-23 08:29:43.966776001 +0000 UTC m=+0.142715855 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:29:44 localhost podman[86145]: 2025-11-23 08:29:44.022368858 +0000 UTC m=+0.200716667 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:29:44 localhost podman[86152]: 2025-11-23 08:29:44.070440902 +0000 UTC m=+0.244269800 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, vcs-type=git) Nov 23 03:29:44 localhost podman[86145]: 2025-11-23 08:29:44.081406363 +0000 UTC m=+0.259754122 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:29:44 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:29:44 localhost podman[86152]: 2025-11-23 08:29:44.132576292 +0000 UTC m=+0.306405220 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:29:44 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:29:44 localhost podman[86146]: 2025-11-23 08:29:44.343459525 +0000 UTC m=+0.519399389 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, version=17.1.12, release=1761123044, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Nov 23 03:29:44 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:29:44 localhost systemd[1]: tmp-crun.Y6EtxX.mount: Deactivated successfully. Nov 23 03:29:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:29:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:29:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:29:47 localhost podman[86245]: 2025-11-23 08:29:47.904068001 +0000 UTC m=+0.084788946 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1761123044, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller) Nov 23 03:29:47 localhost podman[86246]: 2025-11-23 08:29:47.954020042 +0000 UTC m=+0.132516558 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 23 03:29:47 localhost podman[86245]: 2025-11-23 08:29:47.959789301 +0000 UTC m=+0.140510216 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container) Nov 23 03:29:47 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:29:48 localhost podman[86244]: 2025-11-23 08:29:48.011804028 +0000 UTC m=+0.195123704 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, config_id=tripleo_step1, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 23 03:29:48 localhost podman[86246]: 2025-11-23 08:29:48.018519266 +0000 UTC m=+0.197015812 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc.) Nov 23 03:29:48 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:29:48 localhost podman[86244]: 2025-11-23 08:29:48.200412377 +0000 UTC m=+0.383732063 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64) Nov 23 03:29:48 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:30:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:30:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:30:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:30:10 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:30:10 localhost recover_tripleo_nova_virtqemud[86385]: 62093 Nov 23 03:30:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:30:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 23 03:30:10 localhost podman[86369]: 2025-11-23 08:30:10.908413959 +0000 UTC m=+0.090738400 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:30:10 localhost podman[86369]: 2025-11-23 08:30:10.944370376 +0000 UTC m=+0.126694757 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, distribution-scope=public, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Nov 23 03:30:10 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:30:10 localhost podman[86370]: 2025-11-23 08:30:10.955374578 +0000 UTC m=+0.135164350 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:30:11 localhost podman[86368]: 2025-11-23 08:30:11.002049289 +0000 UTC m=+0.186556398 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Nov 23 03:30:11 localhost podman[86370]: 2025-11-23 08:30:11.009463488 +0000 UTC m=+0.189253260 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, container_name=nova_compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z) Nov 23 03:30:11 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:30:11 localhost podman[86368]: 2025-11-23 08:30:11.063615381 +0000 UTC m=+0.248122500 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 23 03:30:11 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
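Each healthcheck in the records above follows the same lifecycle: systemd starts a transient unit that runs /usr/bin/podman healthcheck run <container-id>, podman emits a container health_status event (health_status=healthy for every container in this window) followed by exec_died for the healthcheck process, and systemd then logs "Deactivated successfully." for the unit. A minimal sketch, assuming the journal text is saved to a plain file (journal.txt is a hypothetical path) and that the image/name/health_status attributes keep the ordering seen in these entries, to report the latest status per container:

#!/usr/bin/env python3
# Sketch: report the latest podman health_status per container from journal
# text shaped like the entries above. Illustrative only: journal.txt is a
# hypothetical path, and the attribute order (image, name, health_status)
# is taken from these specific log lines.
import re
import sys

EVENT_RE = re.compile(
    r"container health_status \S+ \(image=[^,]+, "
    r"name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
)

def latest_health(text):
    """Return {container_name: last reported health_status}."""
    status = {}
    for m in EVENT_RE.finditer(text):
        status[m.group("name")] = m.group("status")
    return status

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "journal.txt"
    with open(path, encoding="utf-8") as fh:
        for name, health in sorted(latest_health(fh.read()).items()):
            print(f"{name}: {health}")

Run against the entries in this window, every container (collectd, iscsid, nova_compute, the ceilometer agents, metrics_qdr, ovn_controller, ovn_metadata_agent, nova_migration_target, logrotate_crond) reports healthy.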
Nov 23 03:30:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:30:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:30:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:30:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:30:14 localhost podman[86434]: 2025-11-23 08:30:14.899355415 +0000 UTC m=+0.080472651 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4) Nov 23 03:30:14 localhost podman[86433]: 2025-11-23 08:30:14.956775189 +0000 UTC m=+0.140984841 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, tcib_managed=true) Nov 23 03:30:15 localhost podman[86433]: 2025-11-23 08:30:15.008242099 +0000 UTC m=+0.192451751 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1761123044, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:30:15 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:30:15 localhost podman[86434]: 2025-11-23 08:30:15.025578007 +0000 UTC m=+0.206695193 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:30:15 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:30:15 localhost podman[86435]: 2025-11-23 08:30:15.010167708 +0000 UTC m=+0.188731245 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:30:15 localhost podman[86436]: 2025-11-23 08:30:15.1148242 +0000 UTC 
m=+0.288468324 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc.) 
Nov 23 03:30:15 localhost podman[86436]: 2025-11-23 08:30:15.121616511 +0000 UTC m=+0.295260615 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Nov 23 03:30:15 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
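The config_data attribute carried by each of these events (for example the logrotate_crond entry directly above) is Python-literal syntax: quoted strings, True/False, nested dicts and lists. A minimal sketch, assuming no braces occur inside the quoted values (true for the entries here), that cuts the blob out of one event and loads it with ast.literal_eval so the healthcheck test command or volume list can be inspected programmatically:

#!/usr/bin/env python3
# Sketch: parse the config_data blob embedded in a podman container event.
# Illustrative only; the sample string below is a shortened blob in the same
# shape as the log entries above, not a real event.
import ast

def extract_config_data(event_text):
    """Return the config_data dict embedded in a podman event, or None."""
    marker = "config_data="
    start = event_text.find(marker)
    if start == -1:
        return None
    i = start + len(marker)          # position of the opening "{"
    depth = 0
    for j, ch in enumerate(event_text[i:], start=i):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:           # matched the outermost brace
                return ast.literal_eval(event_text[i:j + 1])
    return None

sample = ("container health_status ... (image=..., name=iscsid, "
          "config_data={'healthcheck': {'test': '/openstack/healthcheck'}, "
          "'privileged': True, 'volumes': ['/dev:/dev', '/run:/run']}, ...)")
cfg = extract_config_data(sample)
print(cfg["healthcheck"]["test"])   # -> /openstack/healthcheck
print(cfg["volumes"])               # -> ['/dev:/dev', '/run:/run']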
Nov 23 03:30:15 localhost podman[86435]: 2025-11-23 08:30:15.376442038 +0000 UTC m=+0.555005605 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, container_name=nova_migration_target, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Nov 23 03:30:15 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:30:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:30:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:30:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:30:18 localhost systemd[1]: tmp-crun.RobgXS.mount: Deactivated successfully. 
Nov 23 03:30:18 localhost podman[86525]: 2025-11-23 08:30:18.901520442 +0000 UTC m=+0.090194885 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:49:46Z) Nov 23 03:30:18 localhost podman[86526]: 2025-11-23 08:30:18.953434364 +0000 UTC m=+0.137072940 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, release=1761123044, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1) Nov 23 03:30:18 localhost podman[86526]: 2025-11-23 08:30:18.978387189 +0000 UTC m=+0.162025785 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4) Nov 23 03:30:18 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: 
Deactivated successfully. Nov 23 03:30:19 localhost podman[86527]: 2025-11-23 08:30:19.051492421 +0000 UTC m=+0.231613308 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:30:19 localhost podman[86525]: 2025-11-23 08:30:19.085351453 +0000 UTC m=+0.274025926 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=metrics_qdr, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team) Nov 23 03:30:19 localhost podman[86527]: 2025-11-23 08:30:19.095342734 +0000 UTC m=+0.275463631 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent) Nov 23 03:30:19 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:30:19 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:30:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:30:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:30:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
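The same three container units started at 03:30:10 (collectd, iscsid, nova_compute) are started again here at 03:30:41, roughly 30 seconds later. A minimal sketch, assuming the whole journal is read as one string and using the podman-side UTC timestamps (nanoseconds truncated to microseconds, since strptime's %f accepts at most six fractional digits), to measure the gap between successive health_status events per container:

#!/usr/bin/env python3
# Sketch: spacing between successive health_status events per container,
# based on the podman timestamps in these entries
# (e.g. "2025-11-23 08:30:10.908413959 +0000 UTC m=+0.090738400").
# Illustrative only; journal.txt is a hypothetical path.
import re
import sys
from datetime import datetime

EVENT_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.(?P<frac>\d+) \+0000 UTC"
    r"\s+m=\+\S+ container health_status \S+ \(image=[^,]+, name=(?P<name>[^,]+)"
)

def health_intervals(text):
    """Return {container_name: [seconds between consecutive healthchecks]}."""
    last, gaps = {}, {}
    for m in EVENT_RE.finditer(text):
        ts = datetime.strptime(
            f"{m.group('ts')}.{m.group('frac')[:6]}", "%Y-%m-%d %H:%M:%S.%f")
        name = m.group("name")
        if name in last:
            gaps.setdefault(name, []).append((ts - last[name]).total_seconds())
        last[name] = ts
    return gaps

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "journal.txt"
    with open(path, encoding="utf-8") as fh:
        for name, secs in sorted(health_intervals(fh.read()).items()):
            print(name, [round(s, 1) for s in secs])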
Nov 23 03:30:41 localhost podman[86679]: 2025-11-23 08:30:41.901659399 +0000 UTC m=+0.085724745 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, release=1761123044, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Nov 23 03:30:41 localhost podman[86679]: 2025-11-23 08:30:41.911921507 +0000 UTC m=+0.095986873 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1761123044, distribution-scope=public, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.expose-services=) Nov 23 03:30:41 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:30:42 localhost podman[86678]: 2025-11-23 08:30:42.003069989 +0000 UTC m=+0.187872148 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:30:42 localhost podman[86678]: 2025-11-23 08:30:42.016465036 +0000 UTC m=+0.201267185 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=collectd, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., 
name=rhosp17/openstack-collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Nov 23 03:30:42 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:30:42 localhost systemd[1]: tmp-crun.l26UxR.mount: Deactivated successfully. 
Nov 23 03:30:42 localhost podman[86680]: 2025-11-23 08:30:42.109722103 +0000 UTC m=+0.289771124 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, url=https://www.redhat.com) Nov 23 03:30:42 localhost podman[86680]: 2025-11-23 08:30:42.136307629 +0000 UTC m=+0.316356660 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:30:42 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:30:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:30:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:30:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 03:30:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:30:45 localhost systemd[1]: tmp-crun.6L3vYp.mount: Deactivated successfully. Nov 23 03:30:45 localhost podman[86743]: 2025-11-23 08:30:45.920603445 +0000 UTC m=+0.099114020 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, release=1761123044, version=17.1.12) Nov 23 03:30:45 localhost systemd[1]: tmp-crun.URy35J.mount: Deactivated successfully. 
Nov 23 03:30:45 localhost podman[86745]: 2025-11-23 08:30:45.967887044 +0000 UTC m=+0.139234186 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044) Nov 23 03:30:46 localhost podman[86744]: 2025-11-23 08:30:46.0079447 +0000 UTC m=+0.183743621 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi) Nov 23 03:30:46 localhost podman[86743]: 2025-11-23 08:30:46.025448293 +0000 UTC m=+0.203958868 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:30:46 localhost podman[86744]: 2025-11-23 08:30:46.035248097 +0000 UTC m=+0.211047008 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:30:46 localhost systemd[1]: 
131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:30:46 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:30:46 localhost podman[86746]: 2025-11-23 08:30:46.123909782 +0000 UTC m=+0.291383594 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, tcib_managed=true) Nov 23 03:30:46 localhost podman[86746]: 2025-11-23 08:30:46.132567691 +0000 UTC m=+0.300041483 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:30:46 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:30:46 localhost podman[86745]: 2025-11-23 08:30:46.331784091 +0000 UTC m=+0.503131193 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc.) Nov 23 03:30:46 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:30:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:30:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:30:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:30:49 localhost systemd[1]: tmp-crun.fx90fd.mount: Deactivated successfully. 
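The config_data label carried by these podman entries is printed as a Python dict literal and is evidently the container definition that tripleo_ansible applied: image, net/pid/ipc settings, healthcheck test, and the bind-mount list. Assuming it stays a well-formed literal, as it is in the lines above, it can be read back with ast.literal_eval; extract_config_data below is an illustrative helper, not TripleO tooling, and the brace counting only exists because the dict nests 'environment' and 'healthcheck' sub-dicts.

import ast

def extract_config_data(line):
    """Parse the config_data={...} payload out of one label dump."""
    start = line.find("config_data={")
    if start == -1:
        return None
    i = line.index("{", start)
    depth = 0
    for j, ch in enumerate(line[i:], start=i):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:                  # matching close brace found
                return ast.literal_eval(line[i:j + 1])
    return None                             # truncated line

# For the nova_compute entries above, the parsed dict reports (among others):
#   cfg['healthcheck']  -> {'test': '/openstack/healthcheck 5672'}
#   cfg['ulimit']       -> ['nofile=131072', 'memlock=67108864']
#   len(cfg['volumes']) -> the number of bind mounts handed to the container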
Nov 23 03:30:49 localhost podman[86841]: 2025-11-23 08:30:49.930062829 +0000 UTC m=+0.112386474 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12) Nov 23 03:30:49 localhost podman[86840]: 2025-11-23 08:30:49.956433177 +0000 UTC m=+0.135851491 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, batch=17.1_20251118.1, vcs-type=git, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:30:50 localhost podman[86842]: 2025-11-23 08:30:50.023916324 +0000 UTC m=+0.201946385 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:14:25Z, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:30:50 localhost podman[86841]: 2025-11-23 08:30:50.064493395 +0000 UTC m=+0.246817080 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, container_name=ovn_controller, config_id=tripleo_step4, tcib_managed=true) Nov 23 03:30:50 localhost podman[86842]: 2025-11-23 08:30:50.076597941 +0000 UTC m=+0.254628002 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Nov 23 03:30:50 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:30:50 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:30:50 localhost podman[86840]: 2025-11-23 08:30:50.196543188 +0000 UTC m=+0.375961502 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Nov 23 03:30:50 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:30:50 localhost systemd[1]: tmp-crun.WWFGwa.mount: Deactivated successfully. Nov 23 03:31:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:31:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:31:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:31:12 localhost podman[86961]: 2025-11-23 08:31:12.90381441 +0000 UTC m=+0.086405270 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Nov 23 03:31:12 localhost podman[86961]: 2025-11-23 08:31:12.913275612 +0000 UTC m=+0.095866462 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:31:12 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:31:12 localhost podman[86962]: 2025-11-23 08:31:12.963502488 +0000 UTC m=+0.143420026 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc.) Nov 23 03:31:13 localhost podman[86960]: 2025-11-23 08:31:13.006617625 +0000 UTC m=+0.191555667 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd) Nov 23 03:31:13 localhost podman[86960]: 2025-11-23 08:31:13.021374809 +0000 UTC m=+0.206312851 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T22:51:28Z, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:31:13 localhost systemd[1]: 
82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:31:13 localhost podman[86962]: 2025-11-23 08:31:13.07175532 +0000 UTC m=+0.251672868 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Nov 23 03:31:13 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:31:13 localhost systemd[1]: tmp-crun.tBHOcr.mount: Deactivated successfully. Nov 23 03:31:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:31:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:31:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:31:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:31:16 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:31:16 localhost recover_tripleo_nova_virtqemud[87049]: 62093 Nov 23 03:31:16 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:31:16 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:31:16 localhost podman[87024]: 2025-11-23 08:31:16.889603787 +0000 UTC m=+0.075963629 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:31:16 localhost podman[87024]: 2025-11-23 08:31:16.938374089 +0000 UTC m=+0.124733921 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4) Nov 23 03:31:16 localhost systemd[1]: tmp-crun.2CFtdz.mount: Deactivated successfully. 
Nov 23 03:31:16 localhost podman[87026]: 2025-11-23 08:31:16.957704164 +0000 UTC m=+0.140265629 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute) Nov 23 03:31:16 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:31:17 localhost podman[87027]: 2025-11-23 08:31:17.011886432 +0000 UTC m=+0.192669922 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:31:17 localhost podman[87027]: 2025-11-23 08:31:17.046313371 +0000 UTC m=+0.227096861 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public) Nov 23 03:31:17 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:31:17 localhost podman[87025]: 2025-11-23 08:31:17.063213292 +0000 UTC m=+0.247083337 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:31:17 localhost podman[87025]: 2025-11-23 08:31:17.086376114 +0000 UTC m=+0.270246149 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1) Nov 23 03:31:17 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:31:17 localhost podman[87026]: 2025-11-23 08:31:17.342596032 +0000 UTC m=+0.525157477 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=) Nov 23 03:31:17 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:31:17 localhost systemd[1]: tmp-crun.QHhAaF.mount: Deactivated successfully. Nov 23 03:31:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:31:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:31:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:31:20 localhost podman[87117]: 2025-11-23 08:31:20.895021209 +0000 UTC m=+0.076057402 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 23 03:31:20 localhost podman[87116]: 2025-11-23 08:31:20.943167221 +0000 UTC m=+0.126355920 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:31:21 localhost podman[87115]: 2025-11-23 08:31:21.001144256 +0000 UTC m=+0.187763471 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:31:21 localhost podman[87116]: 2025-11-23 08:31:21.015518529 +0000 UTC m=+0.198707208 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, release=1761123044, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Nov 23 03:31:21 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:31:21 localhost podman[87117]: 2025-11-23 08:31:21.066183778 +0000 UTC m=+0.247219981 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, config_id=tripleo_step4, container_name=ovn_metadata_agent, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:31:21 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:31:21 localhost podman[87115]: 2025-11-23 08:31:21.204396252 +0000 UTC m=+0.391015447 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 23 03:31:21 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:31:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:31:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:31:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:31:43 localhost systemd[1]: tmp-crun.k6Ehl1.mount: Deactivated successfully. 
Nov 23 03:31:43 localhost podman[87265]: 2025-11-23 08:31:43.898291941 +0000 UTC m=+0.082029236 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, container_name=collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Nov 23 03:31:43 localhost podman[87266]: 2025-11-23 08:31:43.878550583 +0000 UTC m=+0.062218106 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Nov 23 03:31:43 localhost podman[87265]: 2025-11-23 08:31:43.937345313 +0000 UTC m=+0.121082618 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:31:43 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:31:43 localhost podman[87266]: 2025-11-23 08:31:43.961357162 +0000 UTC m=+0.145024735 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack 
TripleO Team, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, tcib_managed=true) Nov 23 03:31:43 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:31:43 localhost podman[87267]: 2025-11-23 08:31:43.93108089 +0000 UTC m=+0.112102232 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step5) Nov 23 03:31:44 localhost podman[87267]: 2025-11-23 08:31:44.011625459 +0000 UTC m=+0.192646731 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:31:44 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:31:44 localhost systemd[1]: tmp-crun.Jw088N.mount: Deactivated successfully. 
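
The config_data payload that podman attaches to each of these health_status/exec_died records (for example the nova_compute entry above) is written as a Python dict literal. A minimal parsing sketch, assuming that format holds; the trimmed payload below is a hypothetical excerpt of the real one, which carries many more volumes and labels:

import ast

# Hypothetical, trimmed copy of a config_data payload as it appears in the
# podman records above (the real entries list far more volumes and settings).
config_data_text = """{'healthcheck': {'test': '/openstack/healthcheck 5672'},
 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1',
 'net': 'host', 'privileged': True, 'restart': 'always',
 'volumes': ['/etc/hosts:/etc/hosts:ro',
             '/var/log/containers/nova:/var/log/nova',
             '/var/lib/nova:/var/lib/nova:shared']}"""

cfg = ast.literal_eval(config_data_text)   # dict-literal syntax, so literal_eval suffices
print("image:      ", cfg["image"])
print("healthcheck:", cfg["healthcheck"]["test"])
for vol in cfg["volumes"]:
    parts = vol.split(":")
    flags = parts[2] if len(parts) > 2 else "rw (default)"
    print(f"  {parts[0]:40s} -> {parts[1]:40s} [{flags}]")

Run against the full nova_compute payload above, the same split shows /boot and the PKI paths mounted read-only while /var/lib/nova and /var/lib/libvirt are shared.
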
Nov 23 03:31:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:31:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:31:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:31:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:31:47 localhost podman[87327]: 2025-11-23 08:31:47.901671728 +0000 UTC m=+0.085801002 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:31:47 localhost podman[87327]: 2025-11-23 08:31:47.92834041 +0000 UTC m=+0.112469684 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, distribution-scope=public, vcs-type=git, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4) Nov 23 03:31:47 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:31:48 localhost podman[87330]: 2025-11-23 08:31:48.009664613 +0000 UTC m=+0.185957045 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team) Nov 23 03:31:48 localhost podman[87330]: 2025-11-23 08:31:48.042586557 +0000 UTC m=+0.218878979 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, 
vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:31:48 localhost podman[87328]: 2025-11-23 08:31:48.055306108 +0000 UTC m=+0.237417779 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:31:48 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:31:48 localhost podman[87328]: 2025-11-23 08:31:48.111328802 +0000 UTC m=+0.293440443 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:31:48 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:31:48 localhost podman[87329]: 2025-11-23 08:31:48.11252769 +0000 UTC m=+0.291793784 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.) 
Nov 23 03:31:48 localhost podman[87329]: 2025-11-23 08:31:48.478247207 +0000 UTC m=+0.657513291 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, version=17.1.12, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:31:48 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:31:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:31:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:31:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:31:51 localhost podman[87422]: 2025-11-23 08:31:51.903313504 +0000 UTC m=+0.088395803 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr) Nov 23 03:31:51 localhost podman[87424]: 2025-11-23 08:31:51.951238209 +0000 UTC m=+0.130827299 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, version=17.1.12, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Nov 23 03:31:52 localhost podman[87423]: 2025-11-23 08:31:52.005297022 +0000 UTC m=+0.187593925 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, container_name=ovn_controller, io.openshift.expose-services=, version=17.1.12, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:31:52 localhost podman[87424]: 2025-11-23 08:31:52.022380928 +0000 UTC m=+0.201969988 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team) Nov 23 03:31:52 localhost podman[87423]: 2025-11-23 08:31:52.028387044 +0000 UTC m=+0.210684007 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Nov 23 03:31:52 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:31:52 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:31:52 localhost podman[87422]: 2025-11-23 08:31:52.090329751 +0000 UTC m=+0.275412000 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd) Nov 23 03:31:52 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
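
Each healthcheck cycle in this window has the same shape: systemd starts a transient "/usr/bin/podman healthcheck run <container-id>" unit, podman logs a "container health_status" record and then a matching "container exec_died" record, and the unit is reported "Deactivated successfully". Note that the journal prefix is local time while podman prints UTC (03:31:52 vs 08:31:52 here, so the host presumably sits at UTC-5). The podman records also carry a monotonic offset ("m=+<seconds>"), so the two records from one podman process can be paired to estimate how long a check ran (podman[87329] above: 0.6575 - 0.2918, roughly 0.37 s). A minimal pairing sketch, assuming lines in this exact journal format are read from a file named messages.log (a hypothetical name):

import re
from collections import defaultdict

# Pair 'health_status' and 'exec_died' records from the same podman PID and
# report the elapsed time using the monotonic 'm=+<seconds>' offsets.
rec = re.compile(
    r"podman\[(?P<pid>\d+)\]: .* m=\+(?P<mono>[\d.]+) container "
    r"(?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+)"
)

pending = {}                    # pid -> (container name, offset of health_status)
durations = defaultdict(list)   # container name -> [seconds per check]

with open("messages.log") as fh:        # hypothetical path to this journal excerpt
    for line in fh:
        m = rec.search(line)
        if not m:
            continue
        if m["event"] == "health_status":
            pending[m["pid"]] = (m["name"], float(m["mono"]))
        elif m["pid"] in pending:
            name, start = pending.pop(m["pid"])
            durations[name].append(float(m["mono"]) - start)

for name, secs in sorted(durations.items()):
    print(f"{name:25s} checks={len(secs)} avg={sum(secs)/len(secs):.3f}s")
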
Nov 23 03:32:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:32:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5370 writes, 735 syncs, 7.31 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 426 writes, 1655 keys, 426 commit groups, 1.0 writes per commit group, ingest: 1.72 MB, 0.00 MB/s#012Interval WAL: 426 writes, 165 syncs, 2.58 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 03:32:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:32:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5205 writes, 665 syncs, 7.83 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 538 writes, 2059 keys, 538 commit groups, 1.0 writes per commit group, ingest: 2.79 MB, 0.00 MB/s#012Interval WAL: 538 writes, 204 syncs, 2.64 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 03:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:32:14 localhost systemd[1]: tmp-crun.JKkicD.mount: Deactivated successfully. 
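
The ceph-osd lines above are multi-line RocksDB stats dumps flattened into single records, with "#012" (octal for LF) standing in for the embedded newlines; the "writes per sync" figure is simply cumulative writes divided by syncs (5370 / 735 is about 7.31 for the osd at pid 31569). A minimal sketch that restores the line breaks and recomputes that ratio, using a hypothetical shortened copy of one such record:

import re

# One flattened RocksDB stats record, shortened; '#012' encodes '\n'.
line = ("#012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval"
        "#012Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, "
        "1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s"
        "#012Cumulative WAL: 5370 writes, 735 syncs, 7.31 writes per sync, "
        "written: 0.02 GB, 0.01 MB/s"
        "#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent")

text = line.replace("#012", "\n")       # undo the control-character escaping
print(text)

wal = re.search(r"Cumulative WAL: (\d+) writes, (\d+) syncs", text)
writes, syncs = int(wal.group(1)), int(wal.group(2))
print(f"writes per sync = {writes / syncs:.2f}")   # 5370 / 735 -> 7.31
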
Nov 23 03:32:14 localhost podman[87546]: 2025-11-23 08:32:14.915942855 +0000 UTC m=+0.095372237 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, url=https://www.redhat.com) Nov 23 03:32:14 localhost podman[87545]: 2025-11-23 08:32:14.956167243 +0000 UTC m=+0.138642779 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Nov 23 03:32:14 localhost podman[87545]: 2025-11-23 08:32:14.991409818 +0000 UTC m=+0.173885354 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64) Nov 23 03:32:15 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:32:15 localhost podman[87544]: 2025-11-23 08:32:15.004562592 +0000 UTC m=+0.189225745 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) Nov 23 03:32:15 localhost podman[87544]: 2025-11-23 08:32:15.015396376 +0000 UTC m=+0.200059569 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
managed_by=tripleo_ansible) Nov 23 03:32:15 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:32:15 localhost podman[87546]: 2025-11-23 08:32:15.072017999 +0000 UTC m=+0.251447411 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-type=git, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:32:15 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:32:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:32:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:32:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:32:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:32:18 localhost podman[87611]: 2025-11-23 08:32:18.893884931 +0000 UTC m=+0.079431847 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:32:18 localhost systemd[1]: tmp-crun.S4kWcB.mount: Deactivated successfully. 
Nov 23 03:32:18 localhost podman[87612]: 2025-11-23 08:32:18.92375776 +0000 UTC m=+0.104403565 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Nov 23 03:32:18 localhost podman[87614]: 2025-11-23 08:32:18.954234508 +0000 UTC m=+0.133090878 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, architecture=x86_64, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Nov 23 03:32:18 localhost podman[87614]: 2025-11-23 08:32:18.96047508 +0000 UTC m=+0.139331450 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:32:18 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:32:18 localhost podman[87611]: 2025-11-23 08:32:18.978415203 +0000 UTC m=+0.163962149 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 23 03:32:19 localhost podman[87613]: 2025-11-23 08:32:19.014323678 +0000 UTC m=+0.192565529 container health_status 
c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public) Nov 23 03:32:19 localhost podman[87612]: 2025-11-23 08:32:19.02866843 +0000 UTC m=+0.209314275 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, architecture=x86_64, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 23 03:32:19 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:32:19 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:32:19 localhost podman[87613]: 2025-11-23 08:32:19.405914812 +0000 UTC m=+0.584156673 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64) Nov 23 03:32:19 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:32:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:32:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:32:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:32:22 localhost podman[87704]: 2025-11-23 08:32:22.896276998 +0000 UTC m=+0.081544611 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, io.openshift.expose-services=, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:32:22 localhost systemd[1]: tmp-crun.rtZK3Z.mount: Deactivated successfully. 
Nov 23 03:32:22 localhost podman[87703]: 2025-11-23 08:32:22.957870104 +0000 UTC m=+0.144424146 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, architecture=x86_64, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:32:23 localhost podman[87705]: 2025-11-23 08:32:23.010885326 +0000 UTC m=+0.190252108 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:32:23 localhost podman[87704]: 2025-11-23 08:32:23.026535518 +0000 UTC m=+0.211803201 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Nov 23 03:32:23 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:32:23 localhost podman[87705]: 2025-11-23 08:32:23.064414724 +0000 UTC m=+0.243781536 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Nov 23 03:32:23 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:32:23 localhost podman[87703]: 2025-11-23 08:32:23.178349482 +0000 UTC m=+0.364903494 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, version=17.1.12, container_name=metrics_qdr, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true) Nov 23 03:32:23 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:32:23 localhost systemd[1]: tmp-crun.h6zA2h.mount: Deactivated successfully. Nov 23 03:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:32:45 localhost podman[87852]: 2025-11-23 08:32:45.90468412 +0000 UTC m=+0.086943027 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:32:45 localhost podman[87852]: 2025-11-23 08:32:45.939511693 +0000 UTC m=+0.121770570 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z) Nov 23 03:32:45 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:32:46 localhost podman[87854]: 2025-11-23 08:32:45.952021037 +0000 UTC m=+0.131330392 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Nov 23 03:32:46 localhost podman[87854]: 2025-11-23 08:32:46.032724552 +0000 UTC m=+0.212033917 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:32:46 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:32:46 localhost podman[87853]: 2025-11-23 08:32:46.006186876 +0000 UTC m=+0.186603666 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step3, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, container_name=iscsid, vcs-type=git) Nov 23 03:32:46 localhost podman[87853]: 2025-11-23 08:32:46.085420034 +0000 UTC m=+0.265836784 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1) Nov 23 03:32:46 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:32:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:32:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:32:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:32:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:32:49 localhost systemd[1]: tmp-crun.tpR3Br.mount: Deactivated successfully. 
Nov 23 03:32:49 localhost podman[87918]: 2025-11-23 08:32:49.917910771 +0000 UTC m=+0.093872271 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044) Nov 23 03:32:49 localhost podman[87916]: 2025-11-23 08:32:49.897707419 +0000 UTC m=+0.082730008 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, 
com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git) Nov 23 03:32:49 localhost podman[87917]: 2025-11-23 08:32:49.962621127 +0000 UTC m=+0.142778826 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:12:45Z, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true) Nov 23 03:32:49 localhost podman[87916]: 2025-11-23 08:32:49.986486993 +0000 UTC m=+0.171509612 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
architecture=x86_64) Nov 23 03:32:49 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:32:50 localhost podman[87917]: 2025-11-23 08:32:50.015609179 +0000 UTC m=+0.195766888 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:32:50 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:32:50 localhost podman[87924]: 2025-11-23 08:32:50.082588831 +0000 UTC m=+0.254667842 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, distribution-scope=public, release=1761123044, vcs-type=git, version=17.1.12) Nov 23 03:32:50 localhost podman[87924]: 2025-11-23 08:32:50.116111153 +0000 UTC m=+0.288190194 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1761123044, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z) Nov 23 03:32:50 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:32:50 localhost podman[87918]: 2025-11-23 08:32:50.264049557 +0000 UTC m=+0.440011147 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, build-date=2025-11-19T00:36:58Z, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container) Nov 23 03:32:50 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:32:50 localhost systemd[1]: tmp-crun.epk39X.mount: Deactivated successfully. Nov 23 03:32:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:32:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:32:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:32:53 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:32:53 localhost recover_tripleo_nova_virtqemud[88026]: 62093 Nov 23 03:32:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:32:53 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:32:53 localhost systemd[1]: tmp-crun.7kt9xU.mount: Deactivated successfully. 
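These records also reproduce the full TripleO config_data blob as a Python-style literal (single-quoted strings, True/False), so details such as the healthcheck command and the bind mounts can be read back out of the journal without consulting the deployment templates. The sketch below is illustrative rather than part of the log; the record string is an abbreviated copy of the nova_compute config_data shown above, and the naive brace matching is sufficient only because none of the quoted values in these blobs contain braces.

#!/usr/bin/env python3
"""Pull the healthcheck test and volume list out of a config_data label."""
import ast

# Abbreviated copy of the nova_compute config_data printed in the log above.
record = (
    "config_data={'depends_on': ['tripleo_nova_libvirt.target'], "
    "'healthcheck': {'test': '/openstack/healthcheck 5672'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', "
    "'privileged': True, 'restart': 'always', "
    "'volumes': ['/etc/hosts:/etc/hosts:ro', '/run/nova:/run/nova:z']}, "
    "name=rhosp17/openstack-nova-compute"
)

def extract_config_data(text):
    """Return the config_data dict embedded in a record, or None."""
    start = text.find("config_data=")
    if start < 0:
        return None
    i = text.index("{", start)
    depth = 0
    # Walk the braces to find the matching closing '}' of the dict literal;
    # the blob is plain dicts/lists/strings/bools, so literal_eval can parse it.
    for j, ch in enumerate(text[i:], i):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(text[i:j + 1])
    return None

cfg = extract_config_data(record)
print("healthcheck:", cfg["healthcheck"]["test"])
for vol in cfg["volumes"]:
    print("volume:", vol)

Run as-is this prints the healthcheck command and each bind mount from the abbreviated blob; pointing extract_config_data at a full record from the export recovers the complete volume list shown in the log.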
Nov 23 03:32:53 localhost podman[88010]: 2025-11-23 08:32:53.914636275 +0000 UTC m=+0.091735565 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:32:53 localhost podman[88009]: 2025-11-23 08:32:53.962771086 +0000 UTC m=+0.140842926 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 
ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:34:05Z, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:32:53 localhost podman[88008]: 2025-11-23 08:32:53.916923876 +0000 UTC m=+0.101002041 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, container_name=metrics_qdr, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:32:53 localhost podman[88009]: 2025-11-23 08:32:53.995969949 +0000 UTC m=+0.174041789 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, name=rhosp17/openstack-ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64) Nov 23 03:32:54 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:32:54 localhost podman[88010]: 2025-11-23 08:32:54.020941327 +0000 UTC m=+0.198040577 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:32:54 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:32:54 localhost podman[88008]: 2025-11-23 08:32:54.11945283 +0000 UTC m=+0.303531055 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true) Nov 23 03:32:54 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:33:16 localhost podman[88130]: 2025-11-23 08:33:16.90102447 +0000 UTC m=+0.082220392 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Nov 23 03:33:16 localhost podman[88130]: 2025-11-23 08:33:16.908644324 +0000 UTC m=+0.089840246 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 23 03:33:16 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:33:16 localhost podman[88129]: 2025-11-23 08:33:16.953739593 +0000 UTC m=+0.138031050 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, release=1761123044) Nov 23 03:33:16 localhost podman[88129]: 2025-11-23 08:33:16.966290498 +0000 UTC m=+0.150581925 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4) Nov 23 03:33:16 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:33:17 localhost podman[88131]: 2025-11-23 08:33:17.017127914 +0000 UTC m=+0.195638393 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:33:17 localhost podman[88131]: 2025-11-23 08:33:17.048336555 +0000 UTC m=+0.226846984 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=nova_compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Nov 23 03:33:17 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:33:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:33:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:33:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:33:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:33:20 localhost podman[88193]: 2025-11-23 08:33:20.906175173 +0000 UTC m=+0.091400465 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, name=rhosp17/openstack-ceilometer-compute) Nov 23 03:33:20 localhost podman[88194]: 2025-11-23 08:33:20.959941888 +0000 UTC m=+0.140642880 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, distribution-scope=public) Nov 23 03:33:20 localhost podman[88193]: 2025-11-23 08:33:20.967497511 +0000 UTC m=+0.152722853 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z) Nov 23 03:33:20 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:33:21 localhost podman[88195]: 2025-11-23 08:33:21.011821336 +0000 UTC m=+0.189759783 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z) Nov 23 03:33:21 localhost podman[88196]: 2025-11-23 08:33:21.072672478 +0000 UTC m=+0.248870702 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:33:21 localhost podman[88194]: 2025-11-23 08:33:21.093620104 +0000 UTC m=+0.274321086 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, release=1761123044, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible) Nov 23 03:33:21 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:33:21 localhost podman[88196]: 2025-11-23 08:33:21.109404669 +0000 UTC m=+0.285602843 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:33:21 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:33:21 localhost podman[88195]: 2025-11-23 08:33:21.365589066 +0000 UTC m=+0.543527503 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:33:21 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:33:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:33:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:33:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:33:24 localhost podman[88290]: 2025-11-23 08:33:24.910967844 +0000 UTC m=+0.096317336 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:33:24 localhost podman[88291]: 2025-11-23 08:33:24.957492336 +0000 UTC m=+0.140655130 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:33:24 localhost podman[88291]: 2025-11-23 08:33:24.982333931 +0000 UTC m=+0.165496745 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12) Nov 23 03:33:24 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated 
successfully. Nov 23 03:33:25 localhost podman[88292]: 2025-11-23 08:33:25.055147472 +0000 UTC m=+0.234359475 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1) Nov 23 03:33:25 localhost podman[88292]: 2025-11-23 08:33:25.121408122 +0000 UTC m=+0.300620115 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Nov 23 03:33:25 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:33:25 localhost podman[88290]: 2025-11-23 08:33:25.136414344 +0000 UTC m=+0.321763816 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public) Nov 23 03:33:25 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:33:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:33:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:33:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:33:47 localhost podman[88491]: 2025-11-23 08:33:47.906932714 +0000 UTC m=+0.090043843 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3) Nov 23 03:33:47 localhost podman[88491]: 2025-11-23 08:33:47.917634433 +0000 UTC m=+0.100745572 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1) Nov 23 03:33:47 localhost podman[88493]: 2025-11-23 08:33:47.952081954 +0000 UTC m=+0.132153579 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute) Nov 23 03:33:48 localhost podman[88492]: 2025-11-23 08:33:48.008094168 +0000 UTC m=+0.190676741 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, release=1761123044, io.openshift.expose-services=) Nov 23 03:33:48 localhost podman[88492]: 2025-11-23 08:33:48.022427889 +0000 UTC m=+0.205010472 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, container_name=iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Nov 23 03:33:48 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:33:48 localhost podman[88493]: 2025-11-23 08:33:48.061558544 +0000 UTC m=+0.241630129 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5) Nov 23 03:33:48 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:33:48 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:33:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:33:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:33:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:33:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:33:51 localhost podman[88555]: 2025-11-23 08:33:51.899599772 +0000 UTC m=+0.084218993 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:11:48Z, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Nov 23 03:33:51 localhost podman[88555]: 2025-11-23 08:33:51.931897097 +0000 UTC m=+0.116516348 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:33:51 localhost podman[88557]: 2025-11-23 08:33:51.958053022 +0000 UTC m=+0.137043680 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_migration_target) Nov 23 03:33:52 localhost podman[88556]: 2025-11-23 08:33:52.013683405 +0000 UTC m=+0.195346235 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, release=1761123044, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-type=git) Nov 23 03:33:52 localhost podman[88558]: 2025-11-23 08:33:52.06228284 +0000 UTC m=+0.237798380 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, container_name=logrotate_crond) Nov 23 03:33:52 localhost podman[88558]: 2025-11-23 08:33:52.071576887 +0000 UTC m=+0.247092427 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, 
io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:33:52 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:33:52 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:33:52 localhost podman[88556]: 2025-11-23 08:33:52.122111973 +0000 UTC m=+0.303774773 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, build-date=2025-11-19T00:12:45Z, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:33:52 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:33:52 localhost podman[88557]: 2025-11-23 08:33:52.378068331 +0000 UTC m=+0.557058979 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target) Nov 23 03:33:52 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:33:52 localhost systemd[1]: tmp-crun.9UqW28.mount: Deactivated successfully. Nov 23 03:33:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:33:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:33:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:33:55 localhost systemd[1]: tmp-crun.VfvS5O.mount: Deactivated successfully. 
Nov 23 03:33:55 localhost podman[88648]: 2025-11-23 08:33:55.950292157 +0000 UTC m=+0.135546334 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:33:55 localhost podman[88649]: 2025-11-23 08:33:55.919574041 +0000 UTC m=+0.097635007 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=ovn_controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com) Nov 23 03:33:56 localhost podman[88649]: 2025-11-23 08:33:56.002406821 +0000 UTC m=+0.180467787 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Nov 23 03:33:56 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:33:56 localhost podman[88650]: 2025-11-23 08:33:56.053664059 +0000 UTC m=+0.231794137 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:33:56 localhost podman[88650]: 2025-11-23 08:33:56.128585416 +0000 UTC m=+0.306715494 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:33:56 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:33:56 localhost podman[88648]: 2025-11-23 08:33:56.174439377 +0000 UTC m=+0.359693504 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 23 03:33:56 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:34:11 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:34:11 localhost systemd[1]: Starting dnf makecache... Nov 23 03:34:11 localhost recover_tripleo_nova_virtqemud[88747]: 62093 Nov 23 03:34:11 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:34:11 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:34:12 localhost dnf[88746]: Updating Subscription Management repositories. Nov 23 03:34:13 localhost dnf[88746]: Metadata cache refreshed recently. Nov 23 03:34:13 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Nov 23 03:34:13 localhost systemd[1]: Finished dnf makecache. Nov 23 03:34:13 localhost systemd[1]: dnf-makecache.service: Consumed 2.022s CPU time. 
Nov 23 03:34:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:34:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:34:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:34:18 localhost systemd[1]: tmp-crun.oUSmn5.mount: Deactivated successfully. Nov 23 03:34:18 localhost systemd[1]: tmp-crun.UfnGyg.mount: Deactivated successfully. Nov 23 03:34:18 localhost podman[88749]: 2025-11-23 08:34:18.952383493 +0000 UTC m=+0.136077129 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:34:18 localhost podman[88749]: 2025-11-23 08:34:18.966399265 +0000 UTC m=+0.150092901 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, container_name=iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Nov 23 03:34:18 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:34:19 localhost podman[88750]: 2025-11-23 08:34:18.916477128 +0000 UTC m=+0.100072201 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, container_name=nova_compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64) Nov 23 03:34:19 localhost podman[88750]: 2025-11-23 08:34:19.050310218 +0000 UTC m=+0.233905261 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, io.buildah.version=1.41.4, config_id=tripleo_step5, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-nova-compute-container) Nov 23 03:34:19 localhost podman[88748]: 2025-11-23 08:34:19.05976987 +0000 UTC m=+0.242262790 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:34:19 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:34:19 localhost podman[88748]: 2025-11-23 08:34:19.073360307 +0000 UTC m=+0.255853287 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:34:19 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:34:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:34:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:34:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:34:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:34:22 localhost systemd[1]: tmp-crun.8k0ohd.mount: Deactivated successfully. Nov 23 03:34:22 localhost podman[88814]: 2025-11-23 08:34:22.899178291 +0000 UTC m=+0.078019253 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:34:22 localhost podman[88814]: 2025-11-23 08:34:22.972682133 +0000 UTC m=+0.151523105 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4) Nov 23 03:34:22 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:34:23 localhost podman[88816]: 2025-11-23 08:34:23.058577408 +0000 UTC m=+0.229014832 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Nov 23 03:34:23 localhost podman[88815]: 2025-11-23 08:34:22.975548811 +0000 UTC m=+0.149627926 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, container_name=nova_migration_target) Nov 23 03:34:23 localhost podman[88816]: 2025-11-23 08:34:23.094338269 +0000 UTC m=+0.264775643 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:34:23 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:34:23 localhost podman[88813]: 2025-11-23 08:34:23.156246724 +0000 UTC m=+0.336427177 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 23 03:34:23 localhost podman[88813]: 2025-11-23 08:34:23.213425034 +0000 UTC m=+0.393605457 
container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, distribution-scope=public) Nov 23 03:34:23 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:34:23 localhost podman[88815]: 2025-11-23 08:34:23.36367645 +0000 UTC m=+0.537755565 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Nov 23 03:34:23 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:34:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:34:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:34:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:34:26 localhost podman[88902]: 2025-11-23 08:34:26.897119082 +0000 UTC m=+0.086848964 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible) Nov 23 03:34:26 localhost systemd[1]: tmp-crun.XXqPo7.mount: Deactivated successfully. 
Nov 23 03:34:26 localhost podman[88903]: 2025-11-23 08:34:26.957197982 +0000 UTC m=+0.141637961 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, release=1761123044, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4) Nov 23 03:34:26 localhost podman[88904]: 2025-11-23 08:34:26.931657496 +0000 UTC m=+0.116432545 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, build-date=2025-11-19T00:14:25Z) Nov 23 03:34:27 localhost podman[88903]: 2025-11-23 08:34:27.000057851 +0000 UTC m=+0.184497880 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:34:27 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:34:27 localhost podman[88904]: 2025-11-23 08:34:27.015177056 +0000 UTC m=+0.199952115 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, release=1761123044, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 23 03:34:27 localhost 
systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:34:27 localhost podman[88902]: 2025-11-23 08:34:27.081634362 +0000 UTC m=+0.271364244 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com) Nov 23 03:34:27 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:34:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:34:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:34:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:34:49 localhost systemd[1]: tmp-crun.l0KAGl.mount: Deactivated successfully. 
Nov 23 03:34:49 localhost podman[89059]: 2025-11-23 08:34:49.95801066 +0000 UTC m=+0.136921626 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, tcib_managed=true) Nov 23 03:34:49 localhost podman[89059]: 2025-11-23 08:34:49.993479733 +0000 UTC m=+0.172390749 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:34:50 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:34:50 localhost podman[89058]: 2025-11-23 08:34:50.012472277 +0000 UTC m=+0.192147476 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 03:34:50 localhost podman[89058]: 2025-11-23 08:34:50.027239642 +0000 UTC m=+0.206914861 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, name=rhosp17/openstack-iscsid, tcib_managed=true, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=iscsid) Nov 23 03:34:50 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:34:50 localhost podman[89057]: 2025-11-23 08:34:49.929651727 +0000 UTC m=+0.112202184 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, name=rhosp17/openstack-collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:34:50 localhost podman[89057]: 2025-11-23 08:34:50.114608991 +0000 UTC m=+0.297159378 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3) Nov 23 03:34:50 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:34:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:34:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:34:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:34:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:34:53 localhost podman[89119]: 2025-11-23 08:34:53.901383252 +0000 UTC m=+0.084807511 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team) Nov 23 03:34:53 localhost podman[89121]: 2025-11-23 08:34:53.955255781 +0000 UTC m=+0.134395429 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, distribution-scope=public, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:34:53 localhost podman[89119]: 2025-11-23 08:34:53.958279033 +0000 UTC m=+0.141703302 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, release=1761123044, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:34:53 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
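The cycle recorded above for ceilometer_agent_compute (systemd starting a transient "/usr/bin/podman healthcheck run <container-id>" unit, podman emitting a health_status=healthy event and then exec_died, and the unit deactivating) is one run of the '/openstack/healthcheck' test declared in that container's healthcheck config. Below is a minimal Python sketch of triggering the same probe on demand; the container name is taken from the log, the helper is illustrative, and reading exit code 0 as a passing check is the usual podman convention rather than something stated in this journal.

import subprocess

def healthcheck_passes(container: str) -> bool:
    # Run the container's configured healthcheck once, which is exactly what
    # the transient systemd units in this journal invoke on a timer.
    result = subprocess.run(["podman", "healthcheck", "run", container])
    # Exit code 0 means the configured test (here /openstack/healthcheck)
    # passed; any non-zero code is treated as not healthy here.
    return result.returncode == 0

if __name__ == "__main__":
    name = "ceilometer_agent_compute"  # container name seen in the events above
    print(name, "healthy" if healthcheck_passes(name) else "not healthy")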
Nov 23 03:34:54 localhost podman[89122]: 2025-11-23 08:34:54.023186761 +0000 UTC m=+0.196590942 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, version=17.1.12, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Nov 23 03:34:54 localhost podman[89122]: 2025-11-23 08:34:54.06048824 +0000 UTC m=+0.233892381 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 
17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, name=rhosp17/openstack-cron) Nov 23 03:34:54 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:34:54 localhost podman[89120]: 2025-11-23 08:34:54.063035979 +0000 UTC m=+0.243897220 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, 
batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:34:54 localhost podman[89120]: 2025-11-23 08:34:54.147611092 +0000 UTC m=+0.328472313 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4) Nov 23 03:34:54 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:34:54 localhost podman[89121]: 2025-11-23 08:34:54.32232029 +0000 UTC m=+0.501459898 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4) Nov 23 03:34:54 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:34:54 localhost systemd[1]: tmp-crun.wJBCjP.mount: Deactivated successfully. Nov 23 03:34:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:34:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:34:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
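Every podman record in this journal has the same shape: the syslog prefix, the event timestamp, the event type (health_status or exec_died here), the 64-character container ID, and a parenthesised label dump carrying container_name= and, for health events, health_status=. The self-contained sketch below shows one way to pull those fields out of such lines for summarising; the regular expression is an assumption about this particular journal layout, not a parser podman provides.

import re
import sys

# Matches podman event lines as they appear in this journal.
# Assumes the label dump contains no ')' characters, which holds for the
# entries shown here.
EVENT_RE = re.compile(
    r"podman\[\d+\]: "
    r"(?P<when>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)"
    r".*? container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
    r"\((?P<labels>[^)]*)\)"
)

def parse_event(line):
    m = EVENT_RE.search(line)
    if not m:
        return None
    labels = m.group("labels")
    name = re.search(r"container_name=([^,)]+)", labels)
    health = re.search(r"health_status=([^,)]+)", labels)
    return {
        "time": m.group("when"),
        "event": m.group("event"),
        "container_id": m.group("cid"),
        "container_name": name.group(1) if name else None,
        "health_status": health.group(1) if health else None,
    }

if __name__ == "__main__":
    # Feed journal lines on stdin; print one line per health probe.
    for line in sys.stdin:
        parsed = parse_event(line)
        if parsed and parsed["event"] == "health_status":
            print(parsed["time"], parsed["container_name"], parsed["health_status"])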
Nov 23 03:34:57 localhost podman[89216]: 2025-11-23 08:34:57.877214982 +0000 UTC m=+0.064753054 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z) Nov 23 03:34:57 localhost podman[89218]: 2025-11-23 08:34:57.959911928 +0000 UTC m=+0.137086481 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, 
io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:34:58 localhost podman[89217]: 2025-11-23 08:34:58.002421107 +0000 UTC m=+0.183540092 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team) Nov 23 03:34:58 localhost podman[89217]: 2025-11-23 08:34:58.030325096 +0000 UTC m=+0.211444091 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:34:58 localhost podman[89218]: 2025-11-23 08:34:58.036554717 +0000 UTC m=+0.213729230 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, release=1761123044, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4) Nov 23 03:34:58 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:34:58 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
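The config_data= label repeated in these events embeds each container's TripleO configuration as a Python dict literal (single quotes, True/False), so json.loads will not accept it but ast.literal_eval will. The short sketch below parses the ovn_controller config_data exactly as it appears in the event above and lists the image, the healthcheck test, and the bind mounts; only the string is taken from the log, the rest is illustrative.

import ast

# config_data copied verbatim from the ovn_controller health_status event above.
CONFIG_DATA = (
    "{'depends_on': ['openvswitch.service'], "
    "'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
    "'healthcheck': {'test': '/openstack/healthcheck 6642'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', "
    "'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, "
    "'user': 'root', 'volumes': ["
    "'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', "
    "'/lib/modules:/lib/modules:ro', '/run:/run', "
    "'/var/lib/openvswitch/ovn:/run/ovn:shared,z', "
    "'/var/log/containers/openvswitch:/var/log/openvswitch:z', "
    "'/var/log/containers/openvswitch:/var/log/ovn:z']}"
)

config = ast.literal_eval(CONFIG_DATA)  # evaluates literals only, no code execution
print("image:      ", config["image"])
print("healthcheck:", config["healthcheck"]["test"])
print("volumes:")
for mount in config["volumes"]:
    source, target, *options = mount.split(":")
    print(f"  {source} -> {target} [{','.join(options) or '-'}]")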
Nov 23 03:34:58 localhost podman[89216]: 2025-11-23 08:34:58.102472686 +0000 UTC m=+0.290010758 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1761123044, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team) Nov 23 03:34:58 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:35:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:35:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:35:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:35:20 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:35:20 localhost recover_tripleo_nova_virtqemud[89327]: 62093 Nov 23 03:35:20 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:35:20 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
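Besides the container probes, the journal shows a oneshot unit, tripleo_nova_virtqemud_recover.service, starting, printing a single number under the recover_tripleo_nova_virtqemud tag, and deactivating successfully. Below is a minimal sketch of how a script might confirm that such a unit ran to completion using systemctl show; the unit name is taken from the log, and ActiveState, SubState, and Result are generic properties systemd exposes for service units.

import subprocess

def unit_status(unit: str) -> dict:
    # `systemctl show` prints KEY=VALUE lines for the requested properties.
    out = subprocess.run(
        ["systemctl", "show", unit,
         "-p", "ActiveState", "-p", "SubState", "-p", "Result"],
        capture_output=True, text=True,
    ).stdout
    return dict(line.split("=", 1) for line in out.splitlines() if "=" in line)

if __name__ == "__main__":
    # Unit name taken from the "Check and recover tripleo_nova_virtqemud" entries above.
    status = unit_status("tripleo_nova_virtqemud_recover.service")
    print(status.get("ActiveState"), status.get("SubState"), status.get("Result"))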
Nov 23 03:35:20 localhost systemd[1]: tmp-crun.ee5BSh.mount: Deactivated successfully. Nov 23 03:35:20 localhost podman[89314]: 2025-11-23 08:35:20.906573878 +0000 UTC m=+0.088187177 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step3, container_name=iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:35:20 localhost podman[89314]: 2025-11-23 08:35:20.942401811 +0000 UTC m=+0.124015120 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=iscsid, version=17.1.12, config_id=tripleo_step3, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:35:20 localhost podman[89315]: 2025-11-23 08:35:20.955843724 +0000 UTC m=+0.134075449 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, container_name=nova_compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:35:20 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:35:21 localhost podman[89313]: 2025-11-23 08:35:21.013881821 +0000 UTC m=+0.198070619 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3) Nov 23 03:35:21 localhost podman[89315]: 2025-11-23 08:35:21.035825996 +0000 UTC m=+0.214057751 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 
17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Nov 23 03:35:21 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:35:21 localhost podman[89313]: 2025-11-23 08:35:21.053390377 +0000 UTC m=+0.237579145 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, tcib_managed=true) Nov 23 03:35:21 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
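The same containers are probed again roughly 30 seconds later in this log (collectd at 03:34:50 and again at 03:35:21, the ceilometer agents at 03:34:53 and 03:35:24), so these blocks repeat for as long as the node is up. Rather than reading them back out of the journal, the stream can be watched directly through podman's event interface; a brief sketch follows, with the caveat that the health_status event type and this filter value are assumed to be supported by the podman version in use.

import subprocess

# Follow podman's own event stream and keep only health probe results,
# i.e. the same health_status events recorded in this journal.
cmd = ["podman", "events", "--filter", "event=health_status"]
with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
    for line in proc.stdout:
        print(line.rstrip())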
Nov 23 03:35:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:35:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:35:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:35:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:35:24 localhost podman[89381]: 2025-11-23 08:35:24.895953135 +0000 UTC m=+0.082502422 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc.) 
Nov 23 03:35:24 localhost podman[89381]: 2025-11-23 08:35:24.946288974 +0000 UTC m=+0.132838271 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:12:45Z) Nov 23 03:35:24 localhost systemd[1]: tmp-crun.z5VGxW.mount: Deactivated successfully. Nov 23 03:35:24 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:35:24 localhost podman[89382]: 2025-11-23 08:35:24.959738558 +0000 UTC m=+0.139554717 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, vcs-type=git) Nov 23 03:35:25 localhost podman[89380]: 2025-11-23 08:35:25.011262654 +0000 UTC m=+0.198601995 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, 
name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, build-date=2025-11-19T00:11:48Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Nov 23 03:35:25 localhost podman[89383]: 2025-11-23 08:35:25.07643735 +0000 UTC m=+0.253057741 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, name=rhosp17/openstack-cron, container_name=logrotate_crond, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team) Nov 23 03:35:25 localhost podman[89380]: 2025-11-23 08:35:25.093241637 +0000 UTC m=+0.280580998 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, architecture=x86_64, container_name=ceilometer_agent_compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute) Nov 23 03:35:25 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:35:25 localhost podman[89383]: 2025-11-23 08:35:25.113504681 +0000 UTC m=+0.290125092 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044) Nov 23 03:35:25 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:35:25 localhost podman[89382]: 2025-11-23 08:35:25.351441435 +0000 UTC m=+0.531257554 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, version=17.1.12, distribution-scope=public, release=1761123044, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:35:25 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:35:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:35:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:35:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:35:28 localhost podman[89473]: 2025-11-23 08:35:28.904327447 +0000 UTC m=+0.085340058 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Nov 23 03:35:28 localhost podman[89473]: 2025-11-23 08:35:28.95250783 +0000 UTC m=+0.133520451 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4) Nov 23 03:35:28 localhost podman[89471]: 2025-11-23 08:35:28.952420127 +0000 UTC m=+0.140992862 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:35:28 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:35:29 localhost podman[89472]: 2025-11-23 08:35:29.004527021 +0000 UTC m=+0.187547215 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4) Nov 23 03:35:29 localhost podman[89472]: 2025-11-23 08:35:29.088548037 +0000 UTC m=+0.271568261 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller) Nov 23 03:35:29 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:35:29 localhost podman[89471]: 2025-11-23 08:35:29.147330147 +0000 UTC m=+0.335902822 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Nov 23 03:35:29 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:35:43 localhost podman[89644]: 2025-11-23 08:35:43.713268299 +0000 UTC m=+0.090446965 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 03:35:43 localhost podman[89644]: 2025-11-23 08:35:43.802489955 +0000 UTC m=+0.179668611 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.buildah.version=1.33.12, vcs-type=git, ceph=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, release=553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main) Nov 23 03:35:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:35:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:35:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:35:51 localhost systemd[1]: tmp-crun.oermXU.mount: Deactivated successfully. 
Nov 23 03:35:51 localhost podman[89791]: 2025-11-23 08:35:51.917221104 +0000 UTC m=+0.092859389 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Nov 23 03:35:51 localhost systemd[1]: tmp-crun.w0g2BH.mount: Deactivated successfully. 
Nov 23 03:35:51 localhost podman[89790]: 2025-11-23 08:35:51.967127921 +0000 UTC m=+0.144866010 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=iscsid, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3) Nov 23 03:35:51 localhost podman[89790]: 2025-11-23 08:35:51.974015594 +0000 UTC m=+0.151753723 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, name=rhosp17/openstack-iscsid, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com) Nov 23 03:35:51 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:35:52 localhost podman[89789]: 2025-11-23 08:35:52.017077819 +0000 UTC m=+0.196054896 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, maintainer=OpenStack TripleO Team) Nov 23 03:35:52 localhost podman[89789]: 2025-11-23 08:35:52.02621708 +0000 UTC m=+0.205194177 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, build-date=2025-11-18T22:51:28Z, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, container_name=collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4) Nov 23 03:35:52 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:35:52 localhost podman[89791]: 2025-11-23 08:35:52.045688909 +0000 UTC m=+0.221327194 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:35:52 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:35:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:35:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:35:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:35:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:35:55 localhost podman[89854]: 2025-11-23 08:35:55.902258129 +0000 UTC m=+0.089194327 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, tcib_managed=true, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, 
architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:35:55 localhost podman[89856]: 2025-11-23 08:35:55.954510318 +0000 UTC m=+0.136943677 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:35:56 localhost systemd[1]: tmp-crun.gCpMrP.mount: Deactivated successfully. 
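The config_data label podman attaches to each of these events (see the nova_compute entry above) is a Python-style dict literal, so it can be loaded directly with ast.literal_eval rather than hand-parsed. A minimal sketch, using a trimmed excerpt of the nova_compute label shown above; the full label has the same structure:

    # Parse a TripleO config_data label copied out of the journal entries above.
    # The value is a Python dict literal (single quotes, True/False), so
    # ast.literal_eval loads it without eval(). Excerpt trimmed from the
    # nova_compute entry; the full label carries many more volumes.
    import ast

    config_data = ast.literal_eval(
        "{'depends_on': ['tripleo_nova_libvirt.target'], "
        "'healthcheck': {'test': '/openstack/healthcheck 5672'}, "
        "'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', "
        "'net': 'host', 'privileged': True, 'user': 'nova', "
        "'volumes': ['/etc/hosts:/etc/hosts:ro', '/run/nova:/run/nova:z']}"
    )

    print(config_data['healthcheck']['test'])   # /openstack/healthcheck 5672
    print(config_data['image'])
    for mount in config_data['volumes']:        # host:container[:options]
        src, dst, *opts = mount.split(':')
        print(src, '->', dst, opts)

Every config_data label in this section uses the same dict-literal format, so the same approach applies to the other containers.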
Nov 23 03:35:56 localhost podman[89855]: 2025-11-23 08:35:56.011385928 +0000 UTC m=+0.196754337 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, tcib_managed=true, release=1761123044, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:35:56 localhost podman[89857]: 2025-11-23 08:35:56.057735665 +0000 UTC m=+0.235698587 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20251118.1) Nov 23 03:35:56 localhost podman[89857]: 2025-11-23 08:35:56.065624838 +0000 UTC m=+0.243587740 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=logrotate_crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:35:56 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:35:56 localhost podman[89854]: 2025-11-23 08:35:56.07965948 +0000 UTC m=+0.266595678 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:35:56 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:35:56 localhost podman[89855]: 2025-11-23 08:35:56.117747813 +0000 UTC m=+0.303116262 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:35:56 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:35:56 localhost podman[89856]: 2025-11-23 08:35:56.322465545 +0000 UTC m=+0.504898864 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:35:56 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:35:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:35:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:35:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:35:59 localhost podman[89952]: 2025-11-23 08:35:59.906611147 +0000 UTC m=+0.092448227 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Nov 23 03:35:59 localhost systemd[1]: tmp-crun.xNbnnk.mount: Deactivated successfully. 
Nov 23 03:35:59 localhost podman[89953]: 2025-11-23 08:35:59.968381058 +0000 UTC m=+0.152119543 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, version=17.1.12, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:36:00 localhost podman[89954]: 2025-11-23 08:36:00.006633836 +0000 UTC m=+0.187286537 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent) Nov 23 03:36:00 localhost podman[89954]: 2025-11-23 08:36:00.037269139 +0000 UTC m=+0.217921850 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4) Nov 23 03:36:00 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:36:00 localhost podman[89953]: 2025-11-23 08:36:00.091668423 +0000 UTC m=+0.275406908 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, architecture=x86_64) Nov 23 03:36:00 localhost systemd[1]: 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:36:00 localhost podman[89952]: 2025-11-23 08:36:00.14742232 +0000 UTC m=+0.333259430 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, url=https://www.redhat.com, tcib_managed=true) Nov 23 03:36:00 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:36:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:36:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:36:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
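Each check in this section follows the same journal pattern: systemd starts a transient /usr/bin/podman healthcheck run <id> unit, podman records a container health_status event whose labels include name= and health_status=, an exec_died event follows, and the unit is deactivated. A short sketch that condenses a saved excerpt like this one into the latest state per container; it assumes the label order shown here (image=, then name=, then health_status=) and an illustrative file name:

    # Reduce a saved journal excerpt (such as this section) to the most recent
    # health_status reported per container name. Assumes podman's event text reads
    # "container health_status <id> (image=..., name=<container>, health_status=<state>, ..."
    # as it does in every entry above.
    import re

    EVENT = re.compile(
        r"container health_status [0-9a-f]{64} "
        r"\(image=[^,]+, name=([\w.-]+), health_status=(\w+)"
    )

    def latest_health(text: str) -> dict[str, str]:
        latest: dict[str, str] = {}
        for name, state in EVENT.findall(text):
            latest[name] = state          # later events overwrite earlier ones
        return latest

    if __name__ == "__main__":
        with open("compute-journal.log", encoding="utf-8", errors="replace") as fh:  # illustrative path
            for name, state in sorted(latest_health(fh.read()).items()):
                print(f"{name}: {state}")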
Nov 23 03:36:22 localhost podman[90053]: 2025-11-23 08:36:22.901958938 +0000 UTC m=+0.082656456 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, container_name=nova_compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container) Nov 23 03:36:22 localhost podman[90053]: 2025-11-23 08:36:22.943027753 +0000 UTC m=+0.123725241 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., 
architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Nov 23 03:36:22 localhost podman[90051]: 2025-11-23 08:36:22.945061245 +0000 UTC m=+0.133591553 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd) Nov 23 03:36:22 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:36:23 localhost podman[90052]: 2025-11-23 08:36:23.005925398 +0000 UTC m=+0.191237417 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, tcib_managed=true, batch=17.1_20251118.1, container_name=iscsid, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc.) 
Nov 23 03:36:23 localhost podman[90051]: 2025-11-23 08:36:23.032399323 +0000 UTC m=+0.220929621 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:36:23 localhost podman[90052]: 2025-11-23 08:36:23.04335998 +0000 UTC m=+0.228671999 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, container_name=iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid) Nov 23 03:36:23 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:36:23 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:36:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:36:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:36:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:36:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
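tripleo_ansible launches these containers itself, so the config_data fields are never typed as a command line anywhere in this log. Purely to make the fields concrete, the sketch below renders a dict like the collectd entry above into the podman run options they correspond to; the option names are standard podman flags, but the mapping is illustrative and only covers the fields shown:

    # Illustrative only: show which podman run options the fields of a config_data
    # label (here, trimmed from the collectd entry above) correspond to. This is
    # not how tripleo_ansible actually starts the container.
    def podman_args(name: str, cfg: dict) -> list[str]:
        args = ["podman", "run", "--detach", "--name", name]
        for key, flag in (("net", "--net"), ("pid", "--pid"), ("user", "--user"),
                          ("memory", "--memory"), ("restart", "--restart")):
            if cfg.get(key):
                args += [flag, cfg[key]]
        if cfg.get("privileged"):
            args.append("--privileged")
        for cap in cfg.get("cap_add", []):
            args += ["--cap-add", cap]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args

    # Trimmed from the collectd config_data shown above.
    collectd_cfg = {
        "cap_add": ["IPC_LOCK"],
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "image": "registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1",
        "memory": "512m", "net": "host", "pid": "host",
        "restart": "always", "user": "root",
        "volumes": ["/etc/hosts:/etc/hosts:ro", "/run:/run:rw"],
    }
    print(" ".join(podman_args("collectd", collectd_cfg)))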
Nov 23 03:36:26 localhost podman[90116]: 2025-11-23 08:36:26.899120905 +0000 UTC m=+0.086213395 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:36:26 localhost systemd[1]: tmp-crun.n1Uoev.mount: Deactivated successfully. 
Nov 23 03:36:26 localhost podman[90117]: 2025-11-23 08:36:26.964749195 +0000 UTC m=+0.147395168 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:36:26 localhost podman[90116]: 2025-11-23 08:36:26.975556368 +0000 UTC m=+0.162648878 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true) Nov 23 03:36:26 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:36:27 localhost podman[90117]: 2025-11-23 08:36:27.013361722 +0000 UTC m=+0.196007685 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12) Nov 23 03:36:27 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
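The records above show the full TripleO health-check cycle: systemd starts a transient <container-id>.service unit that runs /usr/bin/podman healthcheck run <id>, podman logs a container health_status event (healthy here) followed by an exec_died event for the same container, and systemd then reports the unit as "Deactivated successfully." A minimal parsing sketch in Python, assuming these journal lines have been saved verbatim to a file (journal.log is a hypothetical path); the field names (container_name, health_status) are taken from the records above, and records that wrap across physical lines are still matched because the label blob contains no closing parenthesis of its own.

import re
from pathlib import Path

# Each podman record has the shape:
#   ... podman[PID]: 2025-11-23 08:36:26.964749195 +0000 UTC m=+0.147... container <event> <64-hex id> (key=value, key=value, ...)
# where <event> is health_status or exec_died, and the label blob carries
# container_name= plus, for health checks, health_status=.
EVENT_RE = re.compile(
    r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+"
    r" container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) \((?P<labels>[^)]*)\)"
)

def parse_events(path="journal.log"):          # hypothetical file holding these journal lines
    """Yield (timestamp, event, short id, container_name, health_status) per podman record."""
    text = Path(path).read_text()
    for m in EVENT_RE.finditer(text):
        labels = m.group("labels")
        name = re.search(r"container_name=([^,)]+)", labels)
        health = re.search(r"health_status=([^,)]+)", labels)
        yield (m.group("ts"), m.group("event"), m.group("cid")[:12],
               name.group(1) if name else "?",
               health.group(1) if health else None)

if __name__ == "__main__":
    for event in parse_events():
        print(event)

Each yielded tuple pairs the podman timestamp with the event type, a shortened container id, the container_name label and, for health_status events, the reported status.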
Nov 23 03:36:27 localhost podman[90119]: 2025-11-23 08:36:27.065671892 +0000 UTC m=+0.245620142 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., container_name=logrotate_crond) Nov 23 03:36:27 localhost podman[90119]: 2025-11-23 08:36:27.103422354 +0000 UTC m=+0.283370584 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-type=git, build-date=2025-11-18T22:49:32Z, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, container_name=logrotate_crond, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4) Nov 23 03:36:27 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:36:27 localhost podman[90118]: 2025-11-23 08:36:27.114723582 +0000 UTC m=+0.294233409 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:36:27 localhost podman[90118]: 2025-11-23 08:36:27.47265517 +0000 UTC m=+0.652165017 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git) Nov 23 03:36:27 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. 
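Each health_status record also embeds the container's TripleO-rendered runtime definition as a Python-literal config_data={...} blob: image, net, privileged, restart policy, volumes and the configured health-check test (/openstack/healthcheck for the nova_migration_target record above). A sketch of pulling that dict out of a label blob, assuming the blob text is already in hand (for example from the hypothetical parser sketched earlier); brace counting is needed because config_data nests environment and healthcheck sub-dicts, and ast.literal_eval handles the single-quoted, True/False Python-literal syntax.

import ast

def extract_config_data(labels: str):
    """Return the config_data dict embedded in a podman label blob, or None."""
    key = "config_data="
    start = labels.find(key)
    if start == -1:
        return None
    i = start + len(key)                    # points at the opening '{'
    depth = 0
    for j, ch in enumerate(labels[i:], start=i):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # The blob is plain Python literal syntax (single quotes,
                # True/False), so ast.literal_eval parses it safely.
                return ast.literal_eval(labels[i:j + 1])
    return None

# Trimmed-down blob in the same shape as the nova_migration_target record above:
sample = ("config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
          "'healthcheck': {'test': '/openstack/healthcheck'}, "
          "'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', "
          "'net': 'host', 'privileged': True, 'restart': 'always'}, config_id=tripleo_step4")
cfg = extract_config_data(sample)
print(cfg["healthcheck"]["test"], cfg["net"], cfg["privileged"])   # /openstack/healthcheck host True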
Nov 23 03:36:27 localhost systemd[1]: tmp-crun.I5GMlG.mount: Deactivated successfully. Nov 23 03:36:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:36:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:36:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:36:30 localhost podman[90215]: 2025-11-23 08:36:30.87670128 +0000 UTC m=+0.064467966 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) 
Nov 23 03:36:30 localhost systemd[1]: tmp-crun.SLp5Wa.mount: Deactivated successfully. Nov 23 03:36:30 localhost podman[90215]: 2025-11-23 08:36:30.939681328 +0000 UTC m=+0.127448014 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 23 03:36:30 localhost podman[90214]: 2025-11-23 08:36:30.942795514 +0000 UTC m=+0.130495168 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1) Nov 23 03:36:30 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:36:30 localhost podman[90213]: 2025-11-23 08:36:30.9595625 +0000 UTC m=+0.147530502 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044, container_name=metrics_qdr, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1) Nov 23 03:36:31 localhost podman[90214]: 2025-11-23 08:36:31.023596341 +0000 UTC m=+0.211295945 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller) Nov 23 03:36:31 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:36:31 localhost podman[90213]: 2025-11-23 08:36:31.174829967 +0000 UTC m=+0.362797919 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public) Nov 23 03:36:31 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:36:45 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:36:45 localhost recover_tripleo_nova_virtqemud[90302]: 62093 Nov 23 03:36:45 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:36:45 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:36:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:36:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:36:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:36:53 localhost podman[90365]: 2025-11-23 08:36:53.89833561 +0000 UTC m=+0.084963838 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.buildah.version=1.41.4, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, distribution-scope=public, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible) Nov 23 03:36:53 localhost podman[90365]: 2025-11-23 08:36:53.90843673 +0000 UTC m=+0.095064998 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., container_name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container) Nov 23 03:36:53 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
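The config_id label records which TripleO deployment step owns each container: in the records so far, metrics_qdr carries tripleo_step1, collectd tripleo_step3, and the ceilometer agents, logrotate_crond, nova_migration_target and the OVN containers tripleo_step4. A small sketch, again hypothetical and operating on the raw journal text, that groups container_name values by config_id.

import re
from collections import defaultdict

def steps_by_container(text: str):
    """Map config_id (the TripleO step) to the container_name values seen in the text."""
    steps = defaultdict(set)
    for blob in re.findall(r"\(([^)]*container_name=[^)]*)\)", text):
        name = re.search(r"container_name=([^,)]+)", blob)
        step = re.search(r"config_id=(tripleo_step\d+)", blob)
        if name and step:
            steps[step.group(1)].add(name.group(1))
    return dict(steps)

if __name__ == "__main__":
    # Two made-up records in the same shape as the journal entries above.
    sample = ("container health_status aaa (container_name=collectd, config_id=tripleo_step3) "
              "container health_status bbb (container_name=metrics_qdr, config_id=tripleo_step1)")
    print(steps_by_container(sample))
    # -> {'tripleo_step3': {'collectd'}, 'tripleo_step1': {'metrics_qdr'}}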
Nov 23 03:36:53 localhost podman[90366]: 2025-11-23 08:36:53.96007686 +0000 UTC m=+0.144282543 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, architecture=x86_64, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 23 03:36:53 localhost podman[90366]: 2025-11-23 08:36:53.993825459 +0000 UTC m=+0.178031132 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Nov 23 03:36:54 localhost systemd[1]: tmp-crun.jfhOkN.mount: Deactivated successfully. Nov 23 03:36:54 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
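Every health_status event in this window reports health_status=healthy, meaning the configured test (/openstack/healthcheck, or /usr/share/openstack-tripleo-common/healthcheck/cron for logrotate_crond) exited successfully inside the container. A short filter over the tuples produced by the hypothetical parse_events sketch above, printing only checks that did not come back healthy; the demo rows are made up for illustration and are not taken from this journal.

def unhealthy(events):
    """Filter (ts, event, cid, name, health) tuples down to failed health checks."""
    for ts, event, cid, name, health in events:
        if event == "health_status" and health != "healthy":
            yield ts, name, health

if __name__ == "__main__":
    # Made-up sample rows for illustration only (the journal above reports only 'healthy').
    demo = [
        ("2025-11-23 08:36:53.898335", "health_status", "82704bc9324f", "collectd", "healthy"),
        ("2025-11-23 08:36:53.960076", "health_status", "a36875cf2511", "iscsid", "unhealthy"),
        ("2025-11-23 08:36:53.993825", "exec_died", "a36875cf2511", "iscsid", None),
    ]
    for row in unhealthy(demo):
        print(row)          # only the made-up iscsid 'unhealthy' row is printed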
Nov 23 03:36:54 localhost podman[90367]: 2025-11-23 08:36:54.013885676 +0000 UTC m=+0.196435058 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, url=https://www.redhat.com, config_id=tripleo_step5, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:36:54 localhost podman[90367]: 2025-11-23 08:36:54.048275445 +0000 UTC m=+0.230824807 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com) Nov 23 03:36:54 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:36:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:36:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:36:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
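The transient units started here for 131fb75a…, 21b6a6db…, c084c286… and c5a1c518… are the same four containers whose checks ran at 03:36:26-27 above, so each container is being re-checked roughly every 30 seconds, consistent with podman's default health-check interval. A sketch that measures that spacing from the parsed events; the two demo rows reuse the 21b6a6db (ceilometer_agent_ipmi) health_status timestamps visible in this journal.

from collections import defaultdict
from datetime import datetime

def check_intervals(events):
    """Seconds between consecutive health_status events for each container_name."""
    last = {}
    gaps = defaultdict(list)
    for ts, event, cid, name, health in events:
        if event != "health_status":
            continue
        # podman prints nanoseconds; truncate to microseconds for strptime's %f.
        t = datetime.strptime(ts[:26], "%Y-%m-%d %H:%M:%S.%f")
        if name in last:
            gaps[name].append((t - last[name]).total_seconds())
        last[name] = t
    return dict(gaps)

if __name__ == "__main__":
    demo = [
        ("2025-11-23 08:36:26.964749195", "health_status", "21b6a6db7be4", "ceilometer_agent_ipmi", "healthy"),
        ("2025-11-23 08:36:58.069278255", "health_status", "21b6a6db7be4", "ceilometer_agent_ipmi", "healthy"),
    ]
    print(check_intervals(demo))   # about 31.1 seconds between the two checks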
Nov 23 03:36:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:36:57 localhost systemd[1]: tmp-crun.fRVvDK.mount: Deactivated successfully. Nov 23 03:36:57 localhost podman[90430]: 2025-11-23 08:36:57.911462528 +0000 UTC m=+0.096061708 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Nov 23 03:36:57 localhost systemd[1]: tmp-crun.vTGYPV.mount: Deactivated successfully. 
Nov 23 03:36:57 localhost podman[90433]: 2025-11-23 08:36:57.966224524 +0000 UTC m=+0.143361315 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Nov 23 03:36:58 localhost podman[90432]: 2025-11-23 08:36:58.013699955 +0000 UTC m=+0.191815186 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible) Nov 23 03:36:58 localhost podman[90430]: 2025-11-23 08:36:58.018191183 +0000 UTC m=+0.202790433 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, container_name=ceilometer_agent_compute, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:36:58 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:36:58 localhost podman[90431]: 2025-11-23 08:36:58.069278255 +0000 UTC m=+0.251592805 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:36:58 localhost podman[90433]: 2025-11-23 08:36:58.089247441 +0000 UTC m=+0.266384272 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 23 03:36:58 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:36:58 localhost podman[90431]: 2025-11-23 08:36:58.129324934 +0000 UTC m=+0.311639504 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1) Nov 23 03:36:58 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:36:58 localhost podman[90432]: 2025-11-23 08:36:58.377260196 +0000 UTC m=+0.555375437 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12) Nov 23 03:36:58 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:37:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:37:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:37:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:37:01 localhost podman[90528]: 2025-11-23 08:37:01.904150867 +0000 UTC m=+0.088216196 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:37:01 localhost systemd[1]: tmp-crun.9tibHe.mount: Deactivated successfully. 
Nov 23 03:37:01 localhost podman[90529]: 2025-11-23 08:37:01.977451304 +0000 UTC m=+0.155812407 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true) Nov 23 03:37:02 localhost podman[90529]: 2025-11-23 08:37:02.004466486 +0000 UTC m=+0.182827589 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 23 03:37:02 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:37:02 localhost podman[90530]: 2025-11-23 08:37:02.068334372 +0000 UTC m=+0.244695223 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:37:02 localhost podman[90528]: 2025-11-23 08:37:02.135377276 +0000 UTC m=+0.319442595 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, distribution-scope=public) Nov 23 03:37:02 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:37:02 localhost podman[90530]: 2025-11-23 08:37:02.189222563 +0000 UTC m=+0.365583414 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 23 03:37:02 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:37:02 localhost systemd[1]: tmp-crun.SEGyp9.mount: Deactivated successfully. Nov 23 03:37:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:37:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:37:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:37:24 localhost podman[90605]: 2025-11-23 08:37:24.907451656 +0000 UTC m=+0.086769952 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, vcs-type=git) Nov 23 03:37:24 localhost podman[90605]: 2025-11-23 08:37:24.915511864 +0000 UTC m=+0.094830160 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 
'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, container_name=collectd, tcib_managed=true) Nov 23 03:37:24 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:37:24 localhost podman[90606]: 2025-11-23 08:37:24.959367033 +0000 UTC m=+0.137102571 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, release=1761123044, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc.) 
Nov 23 03:37:24 localhost podman[90606]: 2025-11-23 08:37:24.998611821 +0000 UTC m=+0.176347409 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64) Nov 23 03:37:25 localhost podman[90607]: 2025-11-23 08:37:25.00767929 +0000 UTC m=+0.183477458 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 
'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, container_name=nova_compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:37:25 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:37:25 localhost podman[90607]: 2025-11-23 08:37:25.036674493 +0000 UTC m=+0.212472702 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:37:25 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:37:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. 
Nov 23 03:37:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:37:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:37:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:37:28 localhost podman[90670]: 2025-11-23 08:37:28.892312464 +0000 UTC m=+0.080828660 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, version=17.1.12, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:37:28 localhost systemd[1]: tmp-crun.IajSRG.mount: Deactivated successfully. 
Nov 23 03:37:28 localhost podman[90670]: 2025-11-23 08:37:28.952334272 +0000 UTC m=+0.140850438 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4) Nov 23 03:37:28 localhost podman[90675]: 2025-11-23 08:37:28.951916548 +0000 UTC m=+0.131540440 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, tcib_managed=true, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:37:28 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:37:29 localhost podman[90671]: 2025-11-23 08:37:29.00523419 +0000 UTC m=+0.190433344 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:37:29 localhost podman[90672]: 2025-11-23 08:37:29.061817702 +0000 UTC m=+0.245013594 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4) Nov 23 03:37:29 localhost podman[90671]: 2025-11-23 08:37:29.084733087 +0000 UTC m=+0.269932241 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:12:45Z, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, 
release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:37:29 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:37:29 localhost podman[90675]: 2025-11-23 08:37:29.137592605 +0000 UTC m=+0.317216497 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64) Nov 23 03:37:29 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:37:29 localhost podman[90672]: 2025-11-23 08:37:29.433125252 +0000 UTC m=+0.616321174 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, container_name=nova_migration_target, vcs-type=git, version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com) Nov 23 03:37:29 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:37:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:37:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:37:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:37:32 localhost podman[90765]: 2025-11-23 08:37:32.898275781 +0000 UTC m=+0.088009340 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 03:37:32 localhost podman[90766]: 2025-11-23 08:37:32.950578052 +0000 UTC m=+0.135476593 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, batch=17.1_20251118.1) Nov 23 03:37:33 localhost podman[90767]: 2025-11-23 08:37:32.999971991 +0000 UTC m=+0.184067696 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:37:33 localhost podman[90766]: 2025-11-23 08:37:33.023697652 +0000 UTC m=+0.208596193 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:37:33 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:37:33 localhost podman[90767]: 2025-11-23 08:37:33.071290447 +0000 UTC m=+0.255386152 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Nov 23 03:37:33 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:37:33 localhost podman[90765]: 2025-11-23 08:37:33.10030284 +0000 UTC m=+0.290036359 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, version=17.1.12) Nov 23 03:37:33 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:37:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:37:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:37:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:37:55 localhost podman[90921]: 2025-11-23 08:37:55.906930021 +0000 UTC m=+0.086090561 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=iscsid, tcib_managed=true, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:37:55 localhost podman[90921]: 2025-11-23 08:37:55.951016068 +0000 UTC m=+0.130176658 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, container_name=iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:37:55 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
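The config_data=... field that podman attaches to each health_status and exec_died event above is a Python-literal dict (single-quoted strings, True/False), so it can be recovered programmatically from a saved copy of this journal. A minimal sketch, assuming the record text has already been isolated as one string; the brace-matching helper is illustrative, not part of podman or journald, and it ignores the corner case of braces inside quoted values (none occur in these records):

import ast

def extract_config_data(record: str) -> dict:
    """Pull the config_data={...} payload out of one podman journal record.

    The value is a Python-literal dict, so find the matching closing brace
    by tracking nesting depth and hand the slice to ast.literal_eval.
    """
    start = record.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(record[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(record[start:i + 1])
    raise ValueError("unterminated config_data dict")

# Trimmed-down stand-in for the iscsid health_status record above
# (the real entry carries many more image labels).
sample = ("container health_status a36875cf2511 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, "
          "name=iscsid, health_status=healthy, "
          "config_data={'healthcheck': {'test': '/openstack/healthcheck'}, 'net': 'host', "
          "'privileged': True, 'restart': 'always', 'start_order': 2}, config_id=tripleo_step3)")

cfg = extract_config_data(sample)
print(cfg["healthcheck"]["test"])      # -> /openstack/healthcheck
print(cfg["net"], cfg["privileged"])   # -> host True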
Nov 23 03:37:55 localhost sshd[90969]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:37:56 localhost podman[90922]: 2025-11-23 08:37:55.95076351 +0000 UTC m=+0.129111895 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-nova-compute-container, release=1761123044, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team) Nov 23 03:37:56 localhost systemd[1]: tmp-crun.6491QK.mount: Deactivated successfully. 
Nov 23 03:37:56 localhost podman[90920]: 2025-11-23 08:37:56.019569759 +0000 UTC m=+0.199604236 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git) Nov 23 03:37:56 localhost podman[90920]: 2025-11-23 08:37:56.026836992 +0000 UTC m=+0.206871529 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, vendor=Red Hat, Inc., version=17.1.12, 
com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:37:56 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:37:56 localhost podman[90922]: 2025-11-23 08:37:56.080977179 +0000 UTC m=+0.259325614 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044) Nov 23 03:37:56 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:37:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. 
Nov 23 03:37:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:37:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:37:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:37:59 localhost podman[90988]: 2025-11-23 08:37:59.9058032 +0000 UTC m=+0.085717310 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, tcib_managed=true, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute) Nov 23 03:37:59 localhost podman[90988]: 2025-11-23 08:37:59.980298193 +0000 UTC m=+0.160212303 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:37:59 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
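Every check in this section follows the same four-step pattern: systemd starts a transient /usr/bin/podman healthcheck run <id> unit, podman logs a container health_status event (here always health_status=healthy), then a container exec_died event, and systemd finally reports the transient <id>.service as deactivated. A short sketch that tallies the reported status per container name from a saved dump of this journal; the filename and the regular expression are assumptions for illustration only:

import re
from collections import Counter

# health_status events open with:
#   container health_status <id> (image=..., name=<container>, health_status=<status>, ...
EVENT = re.compile(r"container health_status [0-9a-f]+ \(image=[^,]+, name=([^,]+), health_status=([^,]+)")

def tally(journal_text: str) -> Counter:
    """Count (container name, reported status) pairs across all health_status events."""
    return Counter(EVENT.findall(journal_text))

if __name__ == "__main__":
    with open("compute-node-journal.txt") as fh:   # assumed local copy of this log
        for (name, status), count in sorted(tally(fh.read()).items()):
            print(f"{name:25s} {status:10s} {count}")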
Nov 23 03:38:00 localhost podman[90990]: 2025-11-23 08:38:00.001192797 +0000 UTC m=+0.179112895 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) 
Nov 23 03:38:00 localhost podman[90989]: 2025-11-23 08:38:00.051583428 +0000 UTC m=+0.230796725 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step4) Nov 23 03:38:00 localhost podman[90991]: 2025-11-23 08:37:59.963080913 +0000 UTC m=+0.135634416 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, container_name=logrotate_crond, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible) Nov 23 03:38:00 localhost podman[90989]: 2025-11-23 08:38:00.078978691 +0000 UTC m=+0.258191948 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:38:00 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:38:00 localhost podman[90991]: 2025-11-23 08:38:00.096302245 +0000 UTC m=+0.268855728 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:38:00 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:38:00 localhost podman[90990]: 2025-11-23 08:38:00.382175025 +0000 UTC m=+0.560095143 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:38:00 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:38:03 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:38:03 localhost recover_tripleo_nova_virtqemud[91102]: 62093 Nov 23 03:38:03 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:38:03 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 23 03:38:03 localhost systemd[1]: tmp-crun.1oDBWG.mount: Deactivated successfully. Nov 23 03:38:03 localhost podman[91083]: 2025-11-23 08:38:03.942598228 +0000 UTC m=+0.126019161 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com) Nov 23 03:38:03 localhost podman[91084]: 2025-11-23 08:38:03.910329864 +0000 UTC m=+0.092556660 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, release=1761123044, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ovn_controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:38:03 localhost podman[91084]: 2025-11-23 08:38:03.993165994 +0000 UTC m=+0.175392730 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible) Nov 23 03:38:03 localhost 
podman[91085]: 2025-11-23 08:38:03.994764193 +0000 UTC m=+0.172385937 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 23 03:38:04 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:38:04 localhost podman[91085]: 2025-11-23 08:38:04.075170758 +0000 UTC m=+0.252792532 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git) Nov 23 03:38:04 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:38:04 localhost podman[91083]: 2025-11-23 08:38:04.130300826 +0000 UTC m=+0.313721749 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:38:04 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:38:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:38:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:38:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:38:26 localhost systemd[1]: tmp-crun.MdS12i.mount: Deactivated successfully. 
Nov 23 03:38:26 localhost podman[91160]: 2025-11-23 08:38:26.910198762 +0000 UTC m=+0.095801171 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, container_name=collectd) Nov 23 03:38:26 localhost podman[91160]: 2025-11-23 08:38:26.948438329 +0000 UTC m=+0.134040708 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 23 03:38:26 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:38:27 localhost podman[91161]: 2025-11-23 08:38:27.003534145 +0000 UTC m=+0.186020268 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, container_name=iscsid, architecture=x86_64, managed_by=tripleo_ansible) Nov 23 03:38:27 localhost podman[91161]: 2025-11-23 08:38:27.015257296 +0000 UTC m=+0.197743439 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1) Nov 23 03:38:27 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:38:27 localhost podman[91162]: 2025-11-23 08:38:26.953821874 +0000 UTC m=+0.132973124 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com) Nov 23 03:38:27 localhost podman[91162]: 2025-11-23 08:38:27.087275083 +0000 UTC m=+0.266426323 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:38:27 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:38:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:38:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:38:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:38:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:38:30 localhost podman[91225]: 2025-11-23 08:38:30.903610453 +0000 UTC m=+0.090099644 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=) Nov 23 03:38:30 localhost podman[91227]: 2025-11-23 08:38:30.950353423 +0000 UTC m=+0.130523690 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:38:30 localhost podman[91225]: 2025-11-23 08:38:30.959486463 +0000 UTC m=+0.145975644 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:38:30 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:38:31 localhost podman[91226]: 2025-11-23 08:38:31.006502981 +0000 UTC m=+0.190385212 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Nov 23 03:38:31 localhost podman[91228]: 2025-11-23 08:38:31.05358054 +0000 UTC m=+0.232974403 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1) Nov 23 03:38:31 localhost podman[91226]: 2025-11-23 08:38:31.089606529 +0000 UTC m=+0.273488690 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12) Nov 23 03:38:31 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:38:31 localhost podman[91228]: 2025-11-23 08:38:31.140831656 +0000 UTC m=+0.320225579 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Nov 23 03:38:31 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:38:31 localhost podman[91227]: 2025-11-23 08:38:31.283422016 +0000 UTC m=+0.463592313 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, release=1761123044, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:38:31 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:38:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:38:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:38:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:38:34 localhost podman[91315]: 2025-11-23 08:38:34.899768 +0000 UTC m=+0.083432430 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:38:34 localhost podman[91317]: 2025-11-23 08:38:34.953257956 +0000 UTC m=+0.130413206 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:38:35 localhost podman[91316]: 2025-11-23 08:38:35.009322322 +0000 UTC m=+0.189145934 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:38:35 localhost podman[91317]: 2025-11-23 08:38:35.0203063 +0000 UTC m=+0.197461480 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1) Nov 23 03:38:35 localhost podman[91316]: 2025-11-23 08:38:35.031247976 +0000 UTC m=+0.211071578 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, architecture=x86_64, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:38:35 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:38:35 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. 
Nov 23 03:38:35 localhost podman[91315]: 2025-11-23 08:38:35.091494361 +0000 UTC m=+0.275158781 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step1, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public) Nov 23 03:38:35 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:38:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:38:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:38:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:38:57 localhost podman[91469]: 2025-11-23 08:38:57.899711222 +0000 UTC m=+0.083164082 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:38:57 localhost podman[91469]: 2025-11-23 08:38:57.915371383 +0000 UTC m=+0.098824193 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=collectd, build-date=2025-11-18T22:51:28Z) Nov 23 03:38:57 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:38:57 localhost podman[91471]: 2025-11-23 08:38:57.964576378 +0000 UTC m=+0.144036705 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step5, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:38:58 localhost podman[91470]: 2025-11-23 08:38:58.048129921 +0000 UTC m=+0.230218229 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z) Nov 23 03:38:58 localhost podman[91470]: 2025-11-23 08:38:58.061480531 +0000 UTC m=+0.243568909 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Nov 23 03:38:58 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:38:58 localhost podman[91471]: 2025-11-23 08:38:58.074925975 +0000 UTC m=+0.254386302 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, release=1761123044) Nov 23 03:38:58 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:39:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:39:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:39:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:39:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:39:01 localhost podman[91540]: 2025-11-23 08:39:01.904180404 +0000 UTC m=+0.080658164 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, release=1761123044, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:39:01 localhost podman[91540]: 2025-11-23 08:39:01.937130099 +0000 UTC m=+0.113607839 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:39:01 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:39:01 localhost podman[91537]: 2025-11-23 08:39:01.959562889 +0000 UTC m=+0.140929209 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=) Nov 23 03:39:02 localhost podman[91538]: 2025-11-23 08:39:02.017882905 +0000 UTC m=+0.197666747 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1) Nov 23 03:39:02 localhost podman[91538]: 2025-11-23 08:39:02.046835396 +0000 UTC m=+0.226619228 container exec_died 
21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, release=1761123044) Nov 23 03:39:02 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:39:02 localhost podman[91539]: 2025-11-23 08:39:02.084853516 +0000 UTC m=+0.262264105 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z) Nov 23 03:39:02 localhost podman[91537]: 2025-11-23 08:39:02.091190861 +0000 UTC m=+0.272557181 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, version=17.1.12, tcib_managed=true) Nov 23 03:39:02 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:39:02 localhost podman[91539]: 2025-11-23 08:39:02.517513215 +0000 UTC m=+0.694923793 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:39:02 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:39:02 localhost systemd[1]: tmp-crun.tTAHT9.mount: Deactivated successfully. Nov 23 03:39:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:39:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:39:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:39:05 localhost systemd[1]: tmp-crun.c2ZWt2.mount: Deactivated successfully. 
Nov 23 03:39:05 localhost podman[91633]: 2025-11-23 08:39:05.914672972 +0000 UTC m=+0.103565779 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc.) 
Nov 23 03:39:05 localhost podman[91634]: 2025-11-23 08:39:05.960925685 +0000 UTC m=+0.143785647 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, architecture=x86_64, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:39:05 localhost podman[91635]: 2025-11-23 08:39:05.926146505 +0000 UTC m=+0.106218011 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ovn_metadata_agent, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 23 03:39:05 localhost podman[91634]: 2025-11-23 08:39:05.987324448 +0000 UTC m=+0.170184370 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container) Nov 23 03:39:06 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:39:06 localhost podman[91635]: 2025-11-23 08:39:06.010415139 +0000 UTC m=+0.190486535 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, container_name=ovn_metadata_agent, release=1761123044) Nov 23 03:39:06 localhost systemd[1]: 
f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:39:06 localhost podman[91633]: 2025-11-23 08:39:06.116762283 +0000 UTC m=+0.305655080 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:39:06 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:39:06 localhost systemd[1]: tmp-crun.jsHVZ6.mount: Deactivated successfully. Nov 23 03:39:22 localhost sshd[91705]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:39:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:39:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:39:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:39:28 localhost systemd[1]: tmp-crun.gys3He.mount: Deactivated successfully. 
Nov 23 03:39:28 localhost podman[91708]: 2025-11-23 08:39:28.960242896 +0000 UTC m=+0.143157389 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, release=1761123044, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:39:28 localhost podman[91708]: 2025-11-23 08:39:28.999365193 +0000 UTC m=+0.182279716 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:39:29 localhost podman[91709]: 2025-11-23 08:39:28.999505018 +0000 UTC m=+0.178562902 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=nova_compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, distribution-scope=public) Nov 23 03:39:29 localhost podman[91707]: 2025-11-23 08:39:28.921546282 +0000 UTC m=+0.105999462 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, version=17.1.12, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container) Nov 23 03:39:29 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:39:29 localhost podman[91709]: 2025-11-23 08:39:29.105713625 +0000 UTC m=+0.284771489 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step5, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Nov 23 03:39:29 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:39:29 localhost podman[91707]: 2025-11-23 08:39:29.157638268 +0000 UTC m=+0.342091408 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:39:29 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:39:29 localhost systemd[1]: tmp-crun.gZ5FvA.mount: Deactivated successfully. Nov 23 03:39:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:39:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:39:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:39:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:39:32 localhost systemd[1]: tmp-crun.D2W1lP.mount: Deactivated successfully. Nov 23 03:39:32 localhost podman[91775]: 2025-11-23 08:39:32.911329022 +0000 UTC m=+0.094140916 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:39:32 localhost podman[91775]: 2025-11-23 08:39:32.947276191 +0000 
UTC m=+0.130088085 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute) Nov 23 03:39:32 localhost systemd[1]: tmp-crun.mBsKTv.mount: Deactivated successfully. Nov 23 03:39:32 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:39:32 localhost podman[91777]: 2025-11-23 08:39:32.969432004 +0000 UTC m=+0.145567523 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute) Nov 23 03:39:32 localhost podman[91778]: 2025-11-23 08:39:32.930193284 +0000 UTC m=+0.103886497 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, 
managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=logrotate_crond, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:39:33 localhost podman[91778]: 2025-11-23 08:39:33.01010648 +0000 UTC m=+0.183799703 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, 
url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:39:33 localhost podman[91776]: 2025-11-23 08:39:33.023650848 +0000 UTC m=+0.203188511 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Nov 23 03:39:33 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:39:33 localhost podman[91776]: 2025-11-23 08:39:33.074756965 +0000 UTC m=+0.254294628 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1) Nov 23 03:39:33 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:39:33 localhost podman[91777]: 2025-11-23 08:39:33.333368215 +0000 UTC m=+0.509503734 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container) Nov 23 03:39:33 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:39:36 localhost systemd[1]: tmp-crun.9rnzb3.mount: Deactivated successfully. 
Nov 23 03:39:36 localhost podman[91866]: 2025-11-23 08:39:36.906601501 +0000 UTC m=+0.091968730 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd) Nov 23 03:39:36 localhost podman[91867]: 2025-11-23 08:39:36.957116799 +0000 UTC m=+0.140149696 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, release=1761123044, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:39:36 localhost podman[91867]: 2025-11-23 08:39:36.98533825 +0000 UTC m=+0.168371137 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z) Nov 23 03:39:36 localhost systemd[1]: 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Deactivated successfully. Nov 23 03:39:37 localhost podman[91868]: 2025-11-23 08:39:37.007304878 +0000 UTC m=+0.187251010 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:39:37 localhost podman[91868]: 2025-11-23 08:39:37.053325938 +0000 UTC m=+0.233272050 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team) Nov 23 03:39:37 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. 
Nov 23 03:39:37 localhost podman[91866]: 2025-11-23 08:39:37.116284121 +0000 UTC m=+0.301651350 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, name=rhosp17/openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Nov 23 03:39:37 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:39:37 localhost systemd[1]: tmp-crun.vYKB1F.mount: Deactivated successfully. Nov 23 03:39:50 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:39:50 localhost recover_tripleo_nova_virtqemud[91956]: 62093 Nov 23 03:39:50 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:39:50 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:39:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:39:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:39:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:39:59 localhost systemd[1]: tmp-crun.D9rISo.mount: Deactivated successfully. Nov 23 03:39:59 localhost podman[92020]: 2025-11-23 08:39:59.900169131 +0000 UTC m=+0.083409495 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vendor=Red Hat, Inc., config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:39:59 localhost podman[92020]: 2025-11-23 
08:39:59.933861881 +0000 UTC m=+0.117102225 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc.) Nov 23 03:39:59 localhost systemd[1]: tmp-crun.Owtgwg.mount: Deactivated successfully. Nov 23 03:39:59 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:39:59 localhost podman[92018]: 2025-11-23 08:39:59.951470534 +0000 UTC m=+0.134411159 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Nov 23 03:39:59 localhost podman[92018]: 2025-11-23 08:39:59.991318893 +0000 UTC m=+0.174259528 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=collectd, version=17.1.12, vendor=Red Hat, Inc.) Nov 23 03:40:00 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:40:00 localhost podman[92019]: 2025-11-23 08:39:59.997874186 +0000 UTC m=+0.180843612 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3) Nov 23 03:40:00 localhost podman[92019]: 2025-11-23 08:40:00.083390254 +0000 UTC m=+0.266359630 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Nov 23 03:40:00 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:40:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:40:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:40:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:40:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:40:03 localhost podman[92079]: 2025-11-23 08:40:03.914744345 +0000 UTC m=+0.097560131 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 23 03:40:03 localhost podman[92081]: 2025-11-23 08:40:03.965419249 +0000 UTC m=+0.140021892 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:40:04 localhost podman[92079]: 2025-11-23 08:40:04.018647002 +0000 UTC m=+0.201462778 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:40:04 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:40:04 localhost podman[92080]: 2025-11-23 08:40:04.019161417 +0000 UTC m=+0.198493806 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, distribution-scope=public) Nov 23 03:40:04 localhost podman[92082]: 2025-11-23 08:40:04.072372839 +0000 UTC m=+0.246297141 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:40:04 localhost podman[92082]: 2025-11-23 08:40:04.081293474 +0000 UTC m=+0.255217816 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:40:04 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:40:04 localhost podman[92080]: 2025-11-23 08:40:04.101426286 +0000 UTC m=+0.280758645 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:40:04 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:40:04 localhost podman[92081]: 2025-11-23 08:40:04.333331582 +0000 UTC m=+0.507934185 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git) Nov 23 03:40:04 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:40:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:40:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:40:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:40:07 localhost systemd[1]: tmp-crun.zcuBvg.mount: Deactivated successfully. 
Nov 23 03:40:07 localhost podman[92174]: 2025-11-23 08:40:07.907913648 +0000 UTC m=+0.094868548 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Nov 23 03:40:07 localhost systemd[1]: tmp-crun.HzQMXA.mount: Deactivated successfully. 
Nov 23 03:40:07 localhost podman[92174]: 2025-11-23 08:40:07.983197132 +0000 UTC m=+0.170152012 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Nov 23 03:40:07 localhost podman[92174]: unhealthy Nov 23 03:40:07 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:40:07 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
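The record above is the one failure in this stretch of the log: the ovn_controller healthcheck prints "unhealthy", the transient unit e8a40d17….service exits with status 1, and systemd marks it failed, even though the label dump earlier on the same record still carries health_status=healthy from the previous probe. A minimal Python sketch for pulling such events out of an exported journal text file follows; the input path and the exact message patterns matched are assumptions taken from the lines in this log, not a general journald parser.

    #!/usr/bin/env python3
    """Minimal sketch: flag podman healthcheck failures in an exported journal
    text file (one record per line is assumed). The patterns are copied from
    the surrounding log lines; this is illustrative, not a general parser."""
    import re
    import sys

    # `podman healthcheck run` prints "unhealthy" on failure (see the record above);
    # systemd then logs "<container-id>.service: Failed with result 'exit-code'.".
    UNHEALTHY = re.compile(r"podman\[\d+\]: unhealthy")
    FAILED_UNIT = re.compile(r"([0-9a-f]{64})\.service: Failed with result 'exit-code'")

    def failed_healthchecks(lines):
        """Yield (line_number, container_id) for each failed healthcheck unit."""
        pending = False
        for n, line in enumerate(lines, 1):
            if UNHEALTHY.search(line):
                pending = True            # an "unhealthy" verdict was printed
            m = FAILED_UNIT.search(line)
            if pending and m:
                yield n, m.group(1)       # pair the verdict with the failing unit
                pending = False

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "messages.txt"  # assumed export path
        with open(path, errors="replace") as fh:
            for line_no, cid in failed_healthchecks(fh):
                print(f"line {line_no}: healthcheck failed for container {cid[:12]}")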
Nov 23 03:40:08 localhost podman[92173]: 2025-11-23 08:40:08.007036718 +0000 UTC m=+0.196107323 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd) Nov 23 03:40:08 localhost podman[92175]: 2025-11-23 08:40:07.972481291 +0000 UTC m=+0.152212708 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 03:40:08 localhost podman[92175]: 2025-11-23 08:40:08.053256874 +0000 UTC m=+0.232988291 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git) Nov 23 03:40:08 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:40:08 localhost podman[92173]: 2025-11-23 08:40:08.222577488 +0000 UTC m=+0.411648093 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, release=1761123044, distribution-scope=public, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:40:08 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:40:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:40:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:40:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:40:30 localhost podman[92249]: 2025-11-23 08:40:30.902792448 +0000 UTC m=+0.088460422 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-type=git, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Nov 23 03:40:30 localhost podman[92249]: 2025-11-23 08:40:30.915590593 +0000 UTC m=+0.101258537 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true) Nov 23 03:40:30 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:40:30 localhost podman[92251]: 2025-11-23 08:40:30.998846292 +0000 UTC m=+0.179171241 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=) Nov 23 03:40:31 localhost podman[92251]: 2025-11-23 08:40:31.057800661 +0000 UTC m=+0.238125600 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat 
OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z) Nov 23 03:40:31 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:40:31 localhost podman[92250]: 2025-11-23 08:40:31.061793374 +0000 UTC m=+0.245840278 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, container_name=iscsid, config_id=tripleo_step3) Nov 23 03:40:31 localhost podman[92250]: 2025-11-23 08:40:31.146322512 +0000 UTC m=+0.330369426 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-18T23:44:13Z, tcib_managed=true) Nov 23 03:40:31 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:40:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:40:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:40:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:40:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
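Each "Started /usr/bin/podman healthcheck run <id>" message above is a transient systemd unit, named after the full container ID, that runs the healthcheck command visible in that container's config_data labels. To map one of these IDs or container names back to the health state podman currently has on record, a sketch along the following lines can be used; podman and the container name ovn_controller are taken from this log, while the exact JSON key under State (Health vs. Healthcheck) varies between podman releases, so both are tried.

    #!/usr/bin/env python3
    """Minimal sketch: read the recorded health state of a container named in
    these records via `podman inspect`. The key under State differs across
    podman versions (Health vs. Healthcheck), so both are attempted."""
    import json
    import subprocess

    def health_status(container):
        """Return the health status string podman has recorded, or None."""
        out = subprocess.run(
            ["podman", "inspect", container],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)[0].get("State", {})
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status")

    if __name__ == "__main__":
        # Container name taken from the ovn_controller records in this log.
        print(health_status("ovn_controller"))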
Nov 23 03:40:34 localhost podman[92311]: 2025-11-23 08:40:34.902579215 +0000 UTC m=+0.088237023 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:40:34 localhost podman[92311]: 2025-11-23 08:40:34.953890109 +0000 UTC m=+0.139547957 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, config_id=tripleo_step4, io.openshift.expose-services=, 
vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute) Nov 23 03:40:34 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:40:35 localhost podman[92313]: 2025-11-23 08:40:35.011587259 +0000 UTC m=+0.190698395 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:40:35 localhost podman[92312]: 2025-11-23 08:40:34.963747533 +0000 UTC m=+0.143904041 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, version=17.1.12, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:40:35 localhost podman[92312]: 2025-11-23 08:40:35.049307004 +0000 UTC m=+0.229463522 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:40:35 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:40:35 localhost podman[92315]: 2025-11-23 08:40:35.065194483 +0000 UTC m=+0.241349458 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 23 03:40:35 localhost podman[92315]: 2025-11-23 08:40:35.105361204 +0000 UTC m=+0.281516199 container exec_died 
c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, config_id=tripleo_step4, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com) Nov 23 03:40:35 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:40:35 localhost podman[92313]: 2025-11-23 08:40:35.423269193 +0000 UTC m=+0.602380369 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team) Nov 23 03:40:35 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:40:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:40:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:40:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:40:38 localhost podman[92408]: 2025-11-23 08:40:38.908140981 +0000 UTC m=+0.090653069 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=ovn_controller, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=) Nov 23 03:40:38 localhost systemd[1]: tmp-crun.HR5x65.mount: Deactivated successfully. 
Nov 23 03:40:38 localhost podman[92409]: 2025-11-23 08:40:38.956214194 +0000 UTC m=+0.134760700 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:40:39 localhost podman[92407]: 2025-11-23 08:40:39.016057802 +0000 UTC m=+0.200689165 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, name=rhosp17/openstack-qdrouterd, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com) Nov 23 03:40:39 localhost podman[92409]: 2025-11-23 08:40:39.031309622 +0000 UTC m=+0.209856088 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 23 03:40:39 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Deactivated successfully. Nov 23 03:40:39 localhost podman[92408]: 2025-11-23 08:40:39.083222094 +0000 UTC m=+0.265734242 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, config_id=tripleo_step4, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git) Nov 23 03:40:39 localhost podman[92408]: unhealthy Nov 23 03:40:39 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:40:39 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:40:39 localhost podman[92407]: 2025-11-23 08:40:39.257350618 +0000 UTC m=+0.441981991 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:40:39 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:40:39 localhost systemd[1]: tmp-crun.cSaJbi.mount: Deactivated successfully. 
Nov 23 03:41:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:41:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:41:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:41:01 localhost systemd[1]: tmp-crun.gFITTA.mount: Deactivated successfully. Nov 23 03:41:01 localhost podman[92612]: 2025-11-23 08:41:01.918580231 +0000 UTC m=+0.100100051 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, architecture=x86_64) Nov 23 03:41:01 localhost podman[92612]: 2025-11-23 08:41:01.961039101 +0000 UTC m=+0.142558871 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 23 03:41:01 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:41:02 localhost podman[92613]: 2025-11-23 08:41:02.013225751 +0000 UTC m=+0.193943495 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3) Nov 23 03:41:02 localhost podman[92614]: 2025-11-23 08:41:01.965061645 +0000 UTC m=+0.143749347 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Nov 23 03:41:02 localhost podman[92613]: 2025-11-23 08:41:02.026288375 +0000 UTC m=+0.207006099 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, container_name=iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid) Nov 23 03:41:02 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:41:02 localhost podman[92614]: 2025-11-23 08:41:02.049423899 +0000 UTC m=+0.228111611 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 23 03:41:02 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:41:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:41:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:41:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:41:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:41:05 localhost podman[92676]: 2025-11-23 08:41:05.907842304 +0000 UTC m=+0.090652108 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true) Nov 23 03:41:05 localhost podman[92677]: 2025-11-23 08:41:05.965238486 +0000 UTC m=+0.146386849 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:41:06 localhost systemd[1]: tmp-crun.IVKQIy.mount: Deactivated successfully. 
Nov 23 03:41:06 localhost podman[92680]: 2025-11-23 08:41:06.014114493 +0000 UTC m=+0.187011812 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, architecture=x86_64, container_name=logrotate_crond, version=17.1.12, distribution-scope=public) Nov 23 03:41:06 localhost podman[92676]: 2025-11-23 08:41:06.019136029 +0000 UTC m=+0.201945803 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4) Nov 23 03:41:06 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:41:06 localhost podman[92680]: 2025-11-23 08:41:06.051198228 +0000 UTC m=+0.224095537 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:41:06 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:41:06 localhost podman[92678]: 2025-11-23 08:41:06.068890644 +0000 UTC m=+0.246924951 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=nova_migration_target, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:41:06 localhost podman[92677]: 2025-11-23 08:41:06.073790745 +0000 UTC m=+0.254939098 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:41:06 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:41:06 localhost podman[92678]: 2025-11-23 08:41:06.433457004 +0000 UTC m=+0.611491301 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, architecture=x86_64) Nov 23 03:41:06 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:41:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:41:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:41:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:41:09 localhost systemd[1]: tmp-crun.S3lvqM.mount: Deactivated successfully. 
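Each "Started /usr/bin/podman healthcheck run <id>" line above is a transient systemd unit that podman uses to execute the container's configured healthcheck (the '/openstack/healthcheck' commands recorded in the config_data metadata), and the matching "container health_status" event reports the outcome as key=value fields. Below is a minimal, illustrative Python sketch for pulling container_name and health_status out of journal lines in this format; the field names come from the log itself, while the parsing approach (splitting the metadata only where a new key= begins, so the embedded config_data dict stays intact) is an assumption made for this sketch, not part of the deployment.

    import re
    import sys

    # Podman "container health_status" events in this journal look like:
    #   ... container health_status <64-hex id> (image=..., name=..., health_status=healthy, ...)
    # The lazy (.*?) assumes the metadata itself contains no ')', which holds for these events.
    EVENT = re.compile(r'container health_status ([0-9a-f]{64}) \((.*?)\)')

    def fields(meta: str) -> dict:
        # The metadata is a comma-separated key=value list; the embedded config_data
        # dict also contains commas, so only split where a new key= actually starts.
        out = {}
        for part in re.split(r', (?=[\w.-]+=)', meta):
            key, _, value = part.partition('=')
            out[key] = value
        return out

    def main() -> None:
        for line in sys.stdin:
            for match in EVENT.finditer(line):
                cid, meta = match.groups()
                f = fields(meta)
                print(f.get('container_name', cid[:12]), f.get('health_status', 'unknown'))

    if __name__ == "__main__":
        main()

Fed journalctl output in this format, it reduces the verbose events above to lines such as "nova_migration_target healthy" or "ovn_controller healthy".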
Nov 23 03:41:09 localhost podman[92771]: 2025-11-23 08:41:09.910261183 +0000 UTC m=+0.094749035 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, name=rhosp17/openstack-ovn-controller, tcib_managed=true, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:41:09 localhost podman[92770]: 2025-11-23 08:41:09.956932852 +0000 UTC m=+0.141794036 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:41:10 localhost podman[92770]: 2025-11-23 08:41:10.150706603 +0000 UTC m=+0.335567807 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 
17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:41:10 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:41:10 localhost podman[92772]: 2025-11-23 08:41:10.209873158 +0000 UTC m=+0.389372726 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc.) 
Nov 23 03:41:10 localhost podman[92771]: 2025-11-23 08:41:10.230428412 +0000 UTC m=+0.414916264 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:41:10 localhost podman[92771]: unhealthy Nov 23 03:41:10 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:41:10 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
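Here the check for ovn_controller comes back "unhealthy": podman exits non-zero, so the transient unit e8a40d17...service ends with status=1/FAILURE and systemd records the failure (ovn_metadata_agent fails the same way just below). Per the config_data logged for this container, the test is '/openstack/healthcheck 6642', 6642 being the OVN southbound database port. A small sketch of how an operator might re-run the same check and read back the unit state; the container name and unit ID are taken from this log, and re-running checks by hand is an assumption about troubleshooting practice, not something the deployment does itself.

    import subprocess

    def healthcheck(container: str) -> int:
        # Same command the transient unit runs: exit code 0 means the configured
        # test passed, non-zero means podman reported the run as unhealthy.
        return subprocess.run(["podman", "healthcheck", "run", container]).returncode

    def unit_state(container_id: str) -> str:
        # The transient healthcheck unit is named after the full container ID,
        # as the journal lines above show.
        result = subprocess.run(
            ["systemctl", "show", "--property=ActiveState,Result",
             f"{container_id}.service"],
            capture_output=True, text=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        print("ovn_controller healthcheck exit code:", healthcheck("ovn_controller"))
        print(unit_state("e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736"))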
Nov 23 03:41:10 localhost podman[92772]: 2025-11-23 08:41:10.252362799 +0000 UTC m=+0.431862357 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, version=17.1.12, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc.) Nov 23 03:41:10 localhost podman[92772]: unhealthy Nov 23 03:41:10 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:41:10 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:41:31 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Nov 23 03:41:31 localhost recover_tripleo_nova_virtqemud[92841]: 62093 Nov 23 03:41:31 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:41:31 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:41:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:41:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:41:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:41:32 localhost podman[92842]: 2025-11-23 08:41:32.903012746 +0000 UTC m=+0.090884355 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-collectd-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Nov 23 03:41:32 localhost systemd[1]: 
tmp-crun.9JilwV.mount: Deactivated successfully. Nov 23 03:41:32 localhost podman[92844]: 2025-11-23 08:41:32.958097547 +0000 UTC m=+0.139915879 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, container_name=nova_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git) Nov 23 03:41:33 localhost podman[92843]: 2025-11-23 08:41:33.018312635 +0000 UTC m=+0.200188829 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, 
name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid) Nov 23 03:41:33 localhost podman[92843]: 2025-11-23 08:41:33.028700485 +0000 UTC m=+0.210576729 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc.) Nov 23 03:41:33 localhost podman[92844]: 2025-11-23 08:41:33.040924493 +0000 UTC m=+0.222742865 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, container_name=nova_compute, batch=17.1_20251118.1, managed_by=tripleo_ansible) Nov 23 03:41:33 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:41:33 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:41:33 localhost podman[92842]: 2025-11-23 08:41:33.072642531 +0000 UTC m=+0.260514140 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:41:33 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:41:33 localhost systemd[1]: tmp-crun.XYanZN.mount: Deactivated successfully. Nov 23 03:41:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:41:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:41:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:41:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:41:36 localhost systemd[1]: tmp-crun.YLDG2g.mount: Deactivated successfully. Nov 23 03:41:36 localhost podman[92907]: 2025-11-23 08:41:36.926249708 +0000 UTC m=+0.106986204 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_id=tripleo_step4) Nov 23 03:41:36 localhost podman[92907]: 2025-11-23 08:41:36.964608601 +0000 UTC m=+0.145345057 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, release=1761123044, vcs-type=git, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:41:36 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:41:37 localhost podman[92909]: 2025-11-23 08:41:36.9687939 +0000 UTC m=+0.143042645 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, version=17.1.12, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:41:37 localhost podman[92910]: 2025-11-23 08:41:37.019582797 +0000 UTC m=+0.190385606 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc.) Nov 23 03:41:37 localhost podman[92908]: 2025-11-23 08:41:37.070029764 +0000 UTC m=+0.248752987 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Nov 23 03:41:37 localhost podman[92908]: 2025-11-23 08:41:37.104478927 +0000 UTC m=+0.283202150 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:41:37 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:41:37 localhost podman[92910]: 2025-11-23 08:41:37.155450869 +0000 UTC m=+0.326253678 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, container_name=logrotate_crond, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:41:37 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
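The events above also record which test each container runs: most use '/openstack/healthcheck', nova_compute passes 5672, ovn_controller passes 6642, and logrotate_crond uses '/usr/share/openstack-tripleo-common/healthcheck/cron'. A sketch, under the assumption of journalctl output with one entry per line, that recovers this container_name -> test mapping directly from the logged config_data:

    import re
    import sys

    # Every podman event above embeds the container's config_data, including the
    # healthcheck command; this maps container_name -> test command.
    NAME = re.compile(r'container_name=([\w-]+)')
    TEST = re.compile(r"'healthcheck': \{'test': '([^']+)'\}")

    def main() -> None:
        tests = {}
        for line in sys.stdin:  # assumes one journal entry per line
            name = NAME.search(line)
            test = TEST.search(line)
            if name and test:
                tests.setdefault(name.group(1), test.group(1))
        for container, command in sorted(tests.items()):
            print(f"{container}: {command}")

    if __name__ == "__main__":
        main()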
Nov 23 03:41:37 localhost podman[92909]: 2025-11-23 08:41:37.344419971 +0000 UTC m=+0.518668756 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container) Nov 23 03:41:37 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:41:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:41:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:41:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:41:40 localhost podman[93007]: 2025-11-23 08:41:40.890910112 +0000 UTC m=+0.076227424 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:41:40 localhost podman[93007]: 2025-11-23 08:41:40.907431752 +0000 UTC m=+0.092749064 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z) Nov 23 03:41:40 localhost podman[93007]: unhealthy Nov 23 03:41:40 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:41:40 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
Nov 23 03:41:41 localhost podman[93005]: 2025-11-23 08:41:41.002917398 +0000 UTC m=+0.189392616 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 03:41:41 localhost podman[93006]: 2025-11-23 08:41:41.05484194 +0000 UTC m=+0.238063858 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:41:41 localhost podman[93006]: 2025-11-23 08:41:41.073257338 +0000 UTC m=+0.256479236 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, release=1761123044, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:41:41 localhost podman[93006]: unhealthy Nov 23 03:41:41 localhost systemd[1]: 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:41:41 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:41:41 localhost podman[93005]: 2025-11-23 08:41:41.220357067 +0000 UTC m=+0.406832275 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1761123044) Nov 23 03:41:41 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5370 writes, 735 syncs, 7.31 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 03:42:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:42:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:42:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:42:03 localhost systemd[1]: tmp-crun.4mQtay.mount: Deactivated successfully. Nov 23 03:42:03 localhost podman[93154]: 2025-11-23 08:42:03.915831508 +0000 UTC m=+0.097297584 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, batch=17.1_20251118.1) Nov 23 03:42:03 localhost podman[93153]: 2025-11-23 08:42:03.961210668 +0000 UTC m=+0.142625203 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, container_name=iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.12, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Nov 23 03:42:03 localhost podman[93153]: 2025-11-23 08:42:03.975487658 +0000 UTC m=+0.156902193 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, vcs-type=git, architecture=x86_64) Nov 23 03:42:03 localhost podman[93154]: 2025-11-23 08:42:03.975708455 +0000 UTC m=+0.157174551 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Nov 23 03:42:03 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:42:04 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:42:04 localhost podman[93152]: 2025-11-23 08:42:04.122207566 +0000 UTC m=+0.303265850 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Nov 23 03:42:04 localhost podman[93152]: 2025-11-23 08:42:04.136302041 +0000 UTC m=+0.317360365 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, 
io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com) Nov 23 03:42:04 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:42:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:42:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5205 writes, 665 syncs, 7.83 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 03:42:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:42:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:42:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:42:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:42:07 localhost podman[93216]: 2025-11-23 08:42:07.879924895 +0000 UTC m=+0.063227163 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z) Nov 23 03:42:07 localhost systemd[1]: tmp-crun.XfTKS9.mount: Deactivated successfully. 
Nov 23 03:42:07 localhost podman[93214]: 2025-11-23 08:42:07.946173219 +0000 UTC m=+0.131742616 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, version=17.1.12, distribution-scope=public, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible) Nov 23 03:42:07 localhost podman[93215]: 2025-11-23 08:42:07.9076335 +0000 UTC m=+0.090217015 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi) Nov 23 03:42:07 localhost podman[93214]: 2025-11-23 08:42:07.970346825 +0000 UTC m=+0.155916222 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64) Nov 23 03:42:07 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:42:07 localhost podman[93215]: 2025-11-23 08:42:07.993306383 +0000 UTC m=+0.175889928 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Nov 23 03:42:08 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:42:08 localhost podman[93217]: 2025-11-23 08:42:07.96077441 +0000 UTC m=+0.137561666 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4) Nov 23 03:42:08 localhost podman[93217]: 2025-11-23 08:42:08.044562405 +0000 UTC m=+0.221349611 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-cron, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, architecture=x86_64, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:42:08 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:42:08 localhost podman[93216]: 2025-11-23 08:42:08.24629645 +0000 UTC m=+0.429598758 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Nov 23 03:42:08 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:42:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:42:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:42:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:42:11 localhost podman[93306]: 2025-11-23 08:42:11.900058361 +0000 UTC m=+0.084797748 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr) Nov 23 03:42:11 localhost systemd[1]: tmp-crun.8j3TZt.mount: Deactivated successfully. 
Nov 23 03:42:12 localhost podman[93308]: 2025-11-23 08:42:12.013264794 +0000 UTC m=+0.191080408 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Nov 23 03:42:12 localhost podman[93307]: 2025-11-23 08:42:11.979316766 +0000 UTC m=+0.160589736 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 23 03:42:12 localhost podman[93308]: 2025-11-23 08:42:12.055333602 +0000 UTC m=+0.233149196 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Nov 23 03:42:12 localhost podman[93308]: unhealthy Nov 23 03:42:12 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:42:12 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:42:12 localhost podman[93307]: 2025-11-23 08:42:12.113834947 +0000 UTC m=+0.295107917 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, architecture=x86_64, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12) Nov 23 03:42:12 localhost podman[93307]: unhealthy Nov 23 03:42:12 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:42:12 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:42:12 localhost podman[93306]: 2025-11-23 08:42:12.129605524 +0000 UTC m=+0.314344921 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=metrics_qdr, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:42:12 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:42:12 localhost systemd[1]: tmp-crun.RzSHGk.mount: Deactivated successfully. Nov 23 03:42:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:42:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:42:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:42:34 localhost podman[93374]: 2025-11-23 08:42:34.883169128 +0000 UTC m=+0.064924015 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, config_id=tripleo_step3, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:42:34 localhost podman[93374]: 2025-11-23 08:42:34.950058522 +0000 UTC m=+0.131813369 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1761123044, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, 
url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4) Nov 23 03:42:34 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:42:34 localhost podman[93373]: 2025-11-23 08:42:34.964355663 +0000 UTC m=+0.150693451 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, tcib_managed=true, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:42:34 localhost podman[93373]: 2025-11-23 08:42:34.976348603 +0000 UTC m=+0.162686391 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12) Nov 23 03:42:34 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:42:35 localhost podman[93375]: 2025-11-23 08:42:34.930519889 +0000 UTC m=+0.109868801 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5) Nov 23 03:42:35 localhost podman[93375]: 2025-11-23 08:42:35.066420973 +0000 UTC m=+0.245769885 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat 
OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step5, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:42:35 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:42:35 localhost systemd[1]: tmp-crun.lNRDkH.mount: Deactivated successfully. Nov 23 03:42:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:42:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. 
Nov 23 03:42:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:42:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:42:38 localhost podman[93438]: 2025-11-23 08:42:38.901499788 +0000 UTC m=+0.084523610 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_compute) Nov 23 03:42:38 localhost podman[93438]: 2025-11-23 08:42:38.924342593 +0000 UTC m=+0.107366495 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, distribution-scope=public) Nov 23 03:42:38 localhost systemd[1]: tmp-crun.nd4lzC.mount: Deactivated successfully. 
Nov 23 03:42:38 localhost podman[93441]: 2025-11-23 08:42:38.963224843 +0000 UTC m=+0.137850346 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:42:38 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:42:39 localhost podman[93441]: 2025-11-23 08:42:39.000379809 +0000 UTC m=+0.175005322 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1761123044, container_name=logrotate_crond, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 23 03:42:39 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:42:39 localhost podman[93439]: 2025-11-23 08:42:39.066906492 +0000 UTC m=+0.246867189 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi) Nov 23 03:42:39 localhost podman[93440]: 2025-11-23 08:42:39.120352102 +0000 UTC m=+0.296214412 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1761123044, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1) Nov 23 03:42:39 localhost podman[93439]: 2025-11-23 08:42:39.144370802 +0000 UTC m=+0.324331499 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64) Nov 23 03:42:39 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:42:39 localhost podman[93440]: 2025-11-23 08:42:39.5062786 +0000 UTC m=+0.682140850 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, 
io.buildah.version=1.41.4) Nov 23 03:42:39 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:42:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:42:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:42:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:42:42 localhost podman[93538]: 2025-11-23 08:42:42.887221312 +0000 UTC m=+0.071913620 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:42:42 localhost podman[93538]: 2025-11-23 08:42:42.92637678 +0000 UTC m=+0.111069118 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 23 03:42:42 localhost podman[93538]: unhealthy Nov 23 03:42:42 localhost systemd[1]: tmp-crun.VKg30i.mount: Deactivated successfully. Nov 23 03:42:42 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:42:42 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
Nov 23 03:42:42 localhost podman[93537]: 2025-11-23 08:42:42.945885792 +0000 UTC m=+0.131670744 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:42:42 localhost podman[93537]: 2025-11-23 08:42:42.984592377 +0000 UTC m=+0.170377339 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4) Nov 23 03:42:42 localhost podman[93537]: unhealthy Nov 23 03:42:43 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:42:43 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:42:43 localhost podman[93536]: 2025-11-23 08:42:43.005400049 +0000 UTC m=+0.193264745 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, distribution-scope=public) Nov 23 03:42:43 localhost podman[93536]: 2025-11-23 08:42:43.19338777 +0000 UTC m=+0.381252466 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, vendor=Red Hat, Inc.) Nov 23 03:42:43 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:43:04 localhost podman[93735]: Nov 23 03:43:04 localhost podman[93735]: 2025-11-23 08:43:04.704576575 +0000 UTC m=+0.080013870 container create b9924eb18e8e17782f0bd10f77729a099121d31be08dabe027182cffe2ad5602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_blackwell, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.33.12, release=553, ceph=True, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public) Nov 23 03:43:04 localhost systemd[1]: Started libpod-conmon-b9924eb18e8e17782f0bd10f77729a099121d31be08dabe027182cffe2ad5602.scope. Nov 23 03:43:04 localhost systemd[1]: Started libcrun container. Nov 23 03:43:04 localhost podman[93735]: 2025-11-23 08:43:04.671159493 +0000 UTC m=+0.046596808 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 03:43:04 localhost podman[93735]: 2025-11-23 08:43:04.771690626 +0000 UTC m=+0.147127921 container init b9924eb18e8e17782f0bd10f77729a099121d31be08dabe027182cffe2ad5602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_blackwell, build-date=2025-09-24T08:57:55, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, release=553, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, maintainer=Guillaume Abrioux , version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 23 03:43:04 localhost podman[93735]: 2025-11-23 08:43:04.781771837 +0000 UTC m=+0.157209132 container start b9924eb18e8e17782f0bd10f77729a099121d31be08dabe027182cffe2ad5602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_blackwell, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=553, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7) Nov 23 03:43:04 localhost podman[93735]: 2025-11-23 08:43:04.782286203 +0000 UTC m=+0.157723498 container attach b9924eb18e8e17782f0bd10f77729a099121d31be08dabe027182cffe2ad5602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_blackwell, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, name=rhceph, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True) Nov 23 03:43:04 localhost crazy_blackwell[93750]: 167 167 Nov 23 03:43:04 localhost systemd[1]: libpod-b9924eb18e8e17782f0bd10f77729a099121d31be08dabe027182cffe2ad5602.scope: Deactivated successfully. 
Nov 23 03:43:04 localhost podman[93735]: 2025-11-23 08:43:04.785253805 +0000 UTC m=+0.160691110 container died b9924eb18e8e17782f0bd10f77729a099121d31be08dabe027182cffe2ad5602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_blackwell, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, release=553, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 03:43:04 localhost podman[93755]: 2025-11-23 08:43:04.87874408 +0000 UTC m=+0.084284133 container remove b9924eb18e8e17782f0bd10f77729a099121d31be08dabe027182cffe2ad5602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_blackwell, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, release=553, io.buildah.version=1.33.12) Nov 23 03:43:04 localhost systemd[1]: libpod-conmon-b9924eb18e8e17782f0bd10f77729a099121d31be08dabe027182cffe2ad5602.scope: Deactivated successfully. 
Nov 23 03:43:05 localhost podman[93778]: Nov 23 03:43:05 localhost podman[93778]: 2025-11-23 08:43:05.104372062 +0000 UTC m=+0.076537382 container create a4aefac70e88c83f31158dd29dde2a0744d297a78448a146a5a96029ba92a2da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_agnesi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_BRANCH=main, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container) Nov 23 03:43:05 localhost systemd[1]: Started libpod-conmon-a4aefac70e88c83f31158dd29dde2a0744d297a78448a146a5a96029ba92a2da.scope. Nov 23 03:43:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:43:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:43:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:43:05 localhost systemd[1]: Started libcrun container. 
Nov 23 03:43:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a1f7f17da9f256a6f69575d9f5c3b43ab90686ca2ab75699df9ced232cdc7f/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 03:43:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a1f7f17da9f256a6f69575d9f5c3b43ab90686ca2ab75699df9ced232cdc7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 03:43:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a1f7f17da9f256a6f69575d9f5c3b43ab90686ca2ab75699df9ced232cdc7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 03:43:05 localhost podman[93778]: 2025-11-23 08:43:05.074355896 +0000 UTC m=+0.046521246 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 03:43:05 localhost podman[93778]: 2025-11-23 08:43:05.175026602 +0000 UTC m=+0.147191922 container init a4aefac70e88c83f31158dd29dde2a0744d297a78448a146a5a96029ba92a2da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_agnesi, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, vcs-type=git, architecture=x86_64, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, release=553, maintainer=Guillaume Abrioux ) Nov 23 03:43:05 localhost podman[93778]: 2025-11-23 08:43:05.1862876 +0000 UTC m=+0.158452930 container start a4aefac70e88c83f31158dd29dde2a0744d297a78448a146a5a96029ba92a2da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_agnesi, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, architecture=x86_64, version=7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, ceph=True, release=553, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 03:43:05 localhost podman[93778]: 2025-11-23 08:43:05.186544468 +0000 UTC m=+0.158709838 container attach 
a4aefac70e88c83f31158dd29dde2a0744d297a78448a146a5a96029ba92a2da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_agnesi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, vcs-type=git) Nov 23 03:43:05 localhost podman[93797]: 2025-11-23 08:43:05.248683626 +0000 UTC m=+0.085755148 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:43:05 localhost podman[93795]: 2025-11-23 08:43:05.299345949 +0000 UTC m=+0.136226115 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:43:05 localhost podman[93795]: 2025-11-23 08:43:05.3113625 +0000 UTC m=+0.148242696 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:43:05 localhost podman[93797]: 2025-11-23 08:43:05.313890108 +0000 UTC m=+0.150961650 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, distribution-scope=public, container_name=iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:43:05 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:43:05 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:43:05 localhost podman[93798]: 2025-11-23 08:43:05.43776589 +0000 UTC m=+0.270023913 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_id=tripleo_step5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc.) Nov 23 03:43:05 localhost podman[93798]: 2025-11-23 08:43:05.495525753 +0000 UTC m=+0.327783756 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, 
build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4) Nov 23 03:43:05 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:43:05 localhost systemd[1]: var-lib-containers-storage-overlay-7280d4b184867bee7c02295bd124d8460bf94dd373c7119db9c1ab204ad583ef-merged.mount: Deactivated successfully. Nov 23 03:43:06 localhost priceless_agnesi[93794]: [ Nov 23 03:43:06 localhost priceless_agnesi[93794]: { Nov 23 03:43:06 localhost priceless_agnesi[93794]: "available": false, Nov 23 03:43:06 localhost priceless_agnesi[93794]: "ceph_device": false, Nov 23 03:43:06 localhost priceless_agnesi[93794]: "device_id": "QEMU_DVD-ROM_QM00001", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "lsm_data": {}, Nov 23 03:43:06 localhost priceless_agnesi[93794]: "lvs": [], Nov 23 03:43:06 localhost priceless_agnesi[93794]: "path": "/dev/sr0", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "rejected_reasons": [ Nov 23 03:43:06 localhost priceless_agnesi[93794]: "Has a FileSystem", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "Insufficient space (<5GB)" Nov 23 03:43:06 localhost priceless_agnesi[93794]: ], Nov 23 03:43:06 localhost priceless_agnesi[93794]: "sys_api": { Nov 23 03:43:06 localhost priceless_agnesi[93794]: "actuators": null, Nov 23 03:43:06 localhost priceless_agnesi[93794]: "device_nodes": "sr0", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "human_readable_size": "482.00 KB", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "id_bus": "ata", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "model": "QEMU DVD-ROM", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "nr_requests": "2", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "partitions": {}, Nov 23 03:43:06 localhost priceless_agnesi[93794]: "path": "/dev/sr0", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "removable": "1", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "rev": "2.5+", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "ro": "0", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "rotational": "1", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "sas_address": "", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "sas_device_handle": "", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "scheduler_mode": "mq-deadline", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "sectors": 0, Nov 23 03:43:06 localhost priceless_agnesi[93794]: "sectorsize": "2048", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "size": 493568.0, Nov 23 03:43:06 localhost priceless_agnesi[93794]: "support_discard": "0", Nov 23 03:43:06 localhost 
priceless_agnesi[93794]: "type": "disk", Nov 23 03:43:06 localhost priceless_agnesi[93794]: "vendor": "QEMU" Nov 23 03:43:06 localhost priceless_agnesi[93794]: } Nov 23 03:43:06 localhost priceless_agnesi[93794]: } Nov 23 03:43:06 localhost priceless_agnesi[93794]: ] Nov 23 03:43:06 localhost systemd[1]: libpod-a4aefac70e88c83f31158dd29dde2a0744d297a78448a146a5a96029ba92a2da.scope: Deactivated successfully. Nov 23 03:43:06 localhost podman[93778]: 2025-11-23 08:43:06.160704739 +0000 UTC m=+1.132870129 container died a4aefac70e88c83f31158dd29dde2a0744d297a78448a146a5a96029ba92a2da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_agnesi, com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, version=7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, release=553, vcs-type=git) Nov 23 03:43:06 localhost systemd[1]: tmp-crun.5ohJxK.mount: Deactivated successfully. Nov 23 03:43:06 localhost systemd[1]: var-lib-containers-storage-overlay-d5a1f7f17da9f256a6f69575d9f5c3b43ab90686ca2ab75699df9ced232cdc7f-merged.mount: Deactivated successfully. Nov 23 03:43:06 localhost podman[95875]: 2025-11-23 08:43:06.238022715 +0000 UTC m=+0.071808507 container remove a4aefac70e88c83f31158dd29dde2a0744d297a78448a146a5a96029ba92a2da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_agnesi, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, build-date=2025-09-24T08:57:55, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, vcs-type=git) Nov 23 03:43:06 localhost systemd[1]: libpod-conmon-a4aefac70e88c83f31158dd29dde2a0744d297a78448a146a5a96029ba92a2da.scope: Deactivated successfully. Nov 23 03:43:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. 
Nov 23 03:43:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:43:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:43:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:43:09 localhost systemd[1]: tmp-crun.NQBYH7.mount: Deactivated successfully. Nov 23 03:43:09 localhost podman[95906]: 2025-11-23 08:43:09.922489543 +0000 UTC m=+0.096663984 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:43:10 localhost podman[95904]: 2025-11-23 08:43:10.028791204 +0000 UTC m=+0.207160844 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:43:10 localhost podman[95904]: 2025-11-23 08:43:10.057316854 +0000 UTC m=+0.235686454 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible) Nov 23 03:43:10 localhost podman[95905]: 2025-11-23 08:43:10.064807835 +0000 UTC m=+0.241768462 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:43:10 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:43:10 localhost podman[95905]: 2025-11-23 08:43:10.103314224 +0000 UTC m=+0.280274911 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Nov 23 03:43:10 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:43:10 localhost podman[95907]: 2025-11-23 08:43:10.121855935 +0000 UTC m=+0.292579489 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:43:10 localhost podman[95907]: 2025-11-23 08:43:10.160405125 +0000 UTC m=+0.331128689 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12) Nov 23 03:43:10 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:43:10 localhost podman[95906]: 2025-11-23 08:43:10.313336124 +0000 UTC m=+0.487510565 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:43:10 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:43:10 localhost systemd[1]: tmp-crun.GqlXHy.mount: Deactivated successfully. Nov 23 03:43:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:43:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:43:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:43:13 localhost podman[96000]: 2025-11-23 08:43:13.894799553 +0000 UTC m=+0.075185001 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, batch=17.1_20251118.1, container_name=ovn_metadata_agent, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:43:13 localhost podman[96000]: 2025-11-23 08:43:13.911292282 +0000 UTC m=+0.091677690 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Nov 23 03:43:13 localhost podman[96000]: unhealthy Nov 23 03:43:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:43:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:43:13 localhost podman[95999]: 2025-11-23 08:43:13.945279711 +0000 UTC m=+0.128126035 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-ovn-controller) Nov 23 03:43:13 localhost podman[95999]: 2025-11-23 08:43:13.98220207 +0000 UTC m=+0.165048384 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:43:13 localhost podman[95999]: unhealthy Nov 23 03:43:13 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:43:13 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
Nov 23 03:43:13 localhost podman[95998]: 2025-11-23 08:43:13.999144073 +0000 UTC m=+0.184224856 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:43:14 localhost podman[95998]: 2025-11-23 08:43:14.187575697 +0000 UTC m=+0.372656540 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12) Nov 23 03:43:14 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:43:31 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:43:31 localhost recover_tripleo_nova_virtqemud[96069]: 62093 Nov 23 03:43:31 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:43:31 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:43:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:43:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:43:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:43:35 localhost systemd[1]: tmp-crun.TVP0Sk.mount: Deactivated successfully. 
Nov 23 03:43:35 localhost podman[96071]: 2025-11-23 08:43:35.905862512 +0000 UTC m=+0.090501114 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12) Nov 23 03:43:35 localhost podman[96070]: 2025-11-23 08:43:35.949132508 +0000 UTC m=+0.137579359 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:43:35 localhost podman[96071]: 2025-11-23 08:43:35.964044448 +0000 UTC m=+0.148683030 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:43:35 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:43:35 localhost podman[96070]: 2025-11-23 08:43:35.985351765 +0000 UTC m=+0.173798606 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
container_name=collectd, version=17.1.12, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:43:36 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:43:36 localhost podman[96072]: 2025-11-23 08:43:36.053465678 +0000 UTC m=+0.233298832 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:43:36 localhost podman[96072]: 2025-11-23 08:43:36.084426813 +0000 UTC m=+0.264260007 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:43:36 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:43:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:43:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:43:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:43:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:43:40 localhost podman[96138]: 2025-11-23 08:43:40.906100172 +0000 UTC m=+0.082173046 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, container_name=logrotate_crond, io.buildah.version=1.41.4) Nov 23 03:43:40 localhost podman[96138]: 2025-11-23 08:43:40.938826483 +0000 UTC m=+0.114899437 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, name=rhosp17/openstack-cron, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Nov 23 03:43:40 localhost systemd[1]: tmp-crun.E3QaRz.mount: Deactivated successfully. Nov 23 03:43:40 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
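The config_data label podman attaches to each event is the TripleO/kolla container definition serialized as a Python-style dict, and its healthcheck.test entry is the command the transient unit runs on every check (here /usr/share/openstack-tripleo-common/healthcheck/cron for logrotate_crond; /openstack/healthcheck, with or without a port argument, for the other containers in this log). Because the label is a Python literal, it can be read back with ast.literal_eval; a sketch using a shortened copy of the logrotate_crond value above (volumes omitted for brevity):

import ast

config_data = ast.literal_eval(
    "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', "
    "'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, "
    "'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', "
    "'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root'}"
)
print(config_data["healthcheck"]["test"])   # /usr/share/openstack-tripleo-common/healthcheck/cron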
Nov 23 03:43:40 localhost podman[96135]: 2025-11-23 08:43:40.962038089 +0000 UTC m=+0.147065459 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:43:41 localhost podman[96136]: 2025-11-23 08:43:41.007712648 +0000 UTC m=+0.187139305 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, tcib_managed=true, vendor=Red Hat, Inc.) 
Nov 23 03:43:41 localhost podman[96135]: 2025-11-23 08:43:41.015615272 +0000 UTC m=+0.200642642 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64) Nov 23 03:43:41 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
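The volumes list in each config_data uses the source:target[:options] bind-mount syntax that podman's -v/--volume flag accepts, including the ro, z, and shared options seen above. A small illustrative sketch (the real containers are created by tripleo_ansible, not by this snippet) that turns such a list into command-line arguments:

# Shortened from the ceilometer_agent_compute config_data above.
volumes = [
    "/etc/hosts:/etc/hosts:ro",
    "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro",
    "/run/libvirt:/run/libvirt:shared,z",
    "/var/log/containers/ceilometer:/var/log/ceilometer:z",
]

# Expand each bind-mount spec into a '-v <spec>' argument pair.
volume_args = [arg for spec in volumes for arg in ("-v", spec)]
# ['-v', '/etc/hosts:/etc/hosts:ro', '-v', ..., '-v', '/var/log/containers/ceilometer:/var/log/ceilometer:z']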
Nov 23 03:43:41 localhost podman[96136]: 2025-11-23 08:43:41.039358425 +0000 UTC m=+0.218785052 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:43:41 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:43:41 localhost podman[96137]: 2025-11-23 08:43:41.103875866 +0000 UTC m=+0.282361135 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Nov 23 03:43:41 localhost podman[96137]: 2025-11-23 08:43:41.469372364 +0000 UTC m=+0.647857543 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, architecture=x86_64) Nov 23 03:43:41 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:43:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:43:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:43:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:43:44 localhost podman[96231]: 2025-11-23 08:43:44.896078567 +0000 UTC m=+0.081630210 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=) Nov 23 03:43:44 localhost podman[96231]: 2025-11-23 08:43:44.929326303 +0000 UTC m=+0.114877896 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible) Nov 23 03:43:44 localhost podman[96231]: unhealthy Nov 23 03:43:44 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:43:44 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:43:45 localhost podman[96232]: 2025-11-23 08:43:45.016128031 +0000 UTC m=+0.198783765 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:43:45 localhost podman[96230]: 2025-11-23 08:43:44.880359782 +0000 UTC m=+0.072042204 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:43:45 localhost podman[96230]: 2025-11-23 08:43:45.050892895 +0000 UTC m=+0.242575367 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, 
container_name=metrics_qdr, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 03:43:45 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:43:45 localhost podman[96232]: 2025-11-23 08:43:45.084029547 +0000 UTC m=+0.266685311 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 23 03:43:45 localhost podman[96232]: unhealthy Nov 23 03:43:45 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:43:45 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:44:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
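Unlike the checks earlier in this window, the ovn_controller and ovn_metadata_agent runs report health_status=unhealthy: podman prints "unhealthy" and exits non-zero, so each transient unit ends with status=1/FAILURE and "Failed with result 'exit-code'". The same check can be repeated by hand with the podman healthcheck run command the units invoke; a sketch of hypothetical usage, assuming it is run on the affected host with sufficient privileges:

import subprocess

def recheck(container_id):
    """Re-run the container's configured healthcheck and report whether it passed."""
    # Execs the test from the container's healthcheck config (e.g. '/openstack/healthcheck 6642'
    # for ovn_controller); a non-zero exit code means the check failed.
    result = subprocess.run(["podman", "healthcheck", "run", container_id])
    return result.returncode == 0

# recheck("e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736")  # ovn_controller
# recheck("f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745")  # ovn_metadata_agent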
Nov 23 03:44:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:44:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:44:06 localhost podman[96294]: 2025-11-23 08:44:06.895610552 +0000 UTC m=+0.084663314 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:44:06 localhost podman[96294]: 2025-11-23 08:44:06.90365042 +0000 UTC m=+0.092703182 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, release=1761123044, description=Red Hat 
OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:44:06 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:44:06 localhost systemd[1]: tmp-crun.qs8hL9.mount: Deactivated successfully. 
Nov 23 03:44:06 localhost podman[96295]: 2025-11-23 08:44:06.995573047 +0000 UTC m=+0.182836143 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container) Nov 23 03:44:07 localhost podman[96295]: 2025-11-23 08:44:07.033343133 +0000 UTC m=+0.220606249 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, 
batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:44:07 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:44:07 localhost podman[96296]: 2025-11-23 08:44:07.047378656 +0000 UTC m=+0.231610799 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:44:07 localhost podman[96296]: 2025-11-23 08:44:07.073933075 +0000 UTC m=+0.258165218 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, version=17.1.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Nov 23 03:44:07 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:44:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:44:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:44:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:44:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
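Because the same containers are re-checked every cycle (compare the 03:43:40 and 03:44:11 rounds for the tripleo_step4 group), the most recent result per container is usually what matters. A sketch that folds a saved copy of this journal into a latest-status map, reusing the health_status pattern from the first snippet; the journal.txt filename is hypothetical:

import re

HEALTH_EVENT = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(image=[^,]+, name=(?P<name>[^,]+), health_status=(?P<status>\w+)"
)

latest = {}
with open("journal.txt") as journal:            # e.g. saved with 'journalctl > journal.txt'
    for line in journal:
        m = HEALTH_EVENT.search(line)
        if m:
            latest[m.group("name")] = m.group("status")   # later entries overwrite earlier ones

unhealthy = sorted(name for name, status in latest.items() if status != "healthy")
print(unhealthy)   # for the window shown here: ['ovn_controller', 'ovn_metadata_agent']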
Nov 23 03:44:11 localhost podman[96480]: 2025-11-23 08:44:11.907612565 +0000 UTC m=+0.088745618 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=) Nov 23 03:44:11 localhost podman[96481]: 2025-11-23 08:44:11.953485532 +0000 UTC m=+0.133979086 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1) Nov 23 03:44:11 localhost podman[96480]: 2025-11-23 08:44:11.96543801 +0000 UTC m=+0.146571043 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, 
Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:44:11 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:44:12 localhost podman[96481]: 2025-11-23 08:44:12.009331765 +0000 UTC m=+0.189825369 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:44:12 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:44:12 localhost podman[96482]: 2025-11-23 08:44:12.012948286 +0000 UTC m=+0.191033205 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1) Nov 23 03:44:12 localhost podman[96483]: 2025-11-23 08:44:12.066432887 +0000 UTC m=+0.241407991 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64) Nov 23 03:44:12 localhost podman[96483]: 2025-11-23 08:44:12.149274433 +0000 UTC m=+0.324249497 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Nov 23 03:44:12 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:44:12 localhost podman[96482]: 2025-11-23 08:44:12.402479457 +0000 UTC m=+0.580564366 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, container_name=nova_migration_target, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, release=1761123044, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:44:12 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:44:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:44:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:44:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:44:15 localhost podman[96577]: 2025-11-23 08:44:15.892245646 +0000 UTC m=+0.079434992 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, container_name=ovn_controller, io.buildah.version=1.41.4) Nov 23 03:44:15 localhost podman[96577]: 2025-11-23 08:44:15.935348526 +0000 UTC m=+0.122537892 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4) Nov 23 03:44:15 localhost podman[96577]: unhealthy Nov 23 03:44:15 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:44:15 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
Nov 23 03:44:15 localhost podman[96576]: 2025-11-23 08:44:15.961270646 +0000 UTC m=+0.149818244 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, batch=17.1_20251118.1, container_name=metrics_qdr, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step1, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 03:44:16 localhost podman[96578]: 2025-11-23 08:44:16.043315658 +0000 UTC m=+0.226232442 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 23 03:44:16 localhost podman[96578]: 2025-11-23 08:44:16.058431274 +0000 UTC m=+0.241348138 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git) Nov 23 03:44:16 localhost podman[96578]: unhealthy Nov 23 03:44:16 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:44:16 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
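Both OVN checks in this cycle fail: the ovn_controller test ('/openstack/healthcheck 6642') and the ovn_metadata_agent test ('/openstack/healthcheck') exit non-zero, podman prints "unhealthy", and systemd records each transient unit as failed with result 'exit-code'. The same check can be repeated outside the timer with podman's healthcheck command; the sketch below assumes shell access on the compute node and uses the container names taken from the entries above.

import subprocess

# Re-run the healthchecks for the two containers reported unhealthy above.
# "podman healthcheck run" exits 0 when the check passes and non-zero when it
# fails, which is what the transient systemd units react to.
for name in ("ovn_controller", "ovn_metadata_agent"):
    result = subprocess.run(["podman", "healthcheck", "run", name])
    state = "healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})"
    print(name, state)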
Nov 23 03:44:16 localhost podman[96576]: 2025-11-23 08:44:16.203675317 +0000 UTC m=+0.392222905 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Nov 23 03:44:16 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:44:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:44:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:44:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:44:37 localhost systemd[1]: tmp-crun.5hUJGe.mount: Deactivated successfully. 
Nov 23 03:44:37 localhost podman[96643]: 2025-11-23 08:44:37.909104237 +0000 UTC m=+0.091457414 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true) Nov 23 03:44:37 localhost podman[96645]: 2025-11-23 08:44:37.931193338 +0000 UTC m=+0.104993190 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z) Nov 23 03:44:37 localhost podman[96643]: 2025-11-23 08:44:37.953047973 +0000 UTC m=+0.135401170 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1761123044) Nov 23 03:44:37 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:44:37 localhost podman[96645]: 2025-11-23 08:44:37.985569176 +0000 UTC m=+0.159368978 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, release=1761123044, config_id=tripleo_step5, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, container_name=nova_compute) Nov 23 03:44:38 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
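The nova_compute config_data above defines its healthcheck as '/openstack/healthcheck 5672', and the transient unit only ever reports the aggregated result. When the per-check output is needed, the script can be run directly inside the container, as sketched below; this assumes shell access to the host and that the script takes the port number as its single argument, as the config string suggests.

import subprocess

# Run nova_compute's configured healthcheck script by hand to see its output
# rather than only the aggregated health_status event.
check = subprocess.run(
    ["podman", "exec", "nova_compute", "/openstack/healthcheck", "5672"],
    capture_output=True, text=True,
)
print(check.returncode, check.stdout.strip(), check.stderr.strip())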
Nov 23 03:44:38 localhost podman[96644]: 2025-11-23 08:44:38.009283608 +0000 UTC m=+0.184431172 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid) Nov 23 03:44:38 localhost podman[96644]: 2025-11-23 08:44:38.018699478 +0000 UTC m=+0.193847072 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc.) Nov 23 03:44:38 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:44:38 localhost systemd[1]: tmp-crun.AqFh2E.mount: Deactivated successfully. Nov 23 03:44:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:44:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:44:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:44:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:44:42 localhost podman[96709]: 2025-11-23 08:44:42.903303801 +0000 UTC m=+0.085348216 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git) Nov 23 03:44:42 localhost podman[96709]: 2025-11-23 08:44:42.938649131 +0000 UTC m=+0.120693576 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4) Nov 23 03:44:42 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:44:43 localhost podman[96711]: 2025-11-23 08:44:43.024355696 +0000 UTC m=+0.200475567 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com) Nov 23 03:44:43 localhost podman[96712]: 2025-11-23 08:44:43.070678216 +0000 UTC m=+0.243576618 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z) Nov 23 03:44:43 localhost podman[96712]: 2025-11-23 08:44:43.082343056 +0000 UTC m=+0.255241428 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=logrotate_crond, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1) Nov 23 03:44:43 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:44:43 localhost podman[96710]: 2025-11-23 08:44:43.165289615 +0000 UTC m=+0.344184292 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc.) 
Nov 23 03:44:43 localhost podman[96710]: 2025-11-23 08:44:43.223347597 +0000 UTC m=+0.402242294 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:44:43 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:44:43 localhost podman[96711]: 2025-11-23 08:44:43.421869203 +0000 UTC m=+0.597989064 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:44:43 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:44:46 localhost systemd[1]: tmp-crun.7gPewi.mount: Deactivated successfully. 
Nov 23 03:44:46 localhost podman[96806]: 2025-11-23 08:44:46.914748778 +0000 UTC m=+0.097162699 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1) Nov 23 03:44:46 localhost podman[96808]: 2025-11-23 08:44:46.966126574 +0000 UTC m=+0.140494287 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, 
vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public) Nov 23 03:44:47 localhost podman[96807]: 2025-11-23 08:44:47.001962 +0000 UTC m=+0.179952734 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64) Nov 23 03:44:47 localhost podman[96808]: 2025-11-23 08:44:47.009392289 +0000 UTC m=+0.183760012 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:14:25Z, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 03:44:47 localhost podman[96808]: unhealthy Nov 23 03:44:47 localhost podman[96807]: 2025-11-23 08:44:47.022567175 +0000 UTC m=+0.200557929 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 23 03:44:47 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:44:47 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:44:47 localhost podman[96807]: unhealthy Nov 23 03:44:47 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:44:47 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
Nov 23 03:44:47 localhost podman[96806]: 2025-11-23 08:44:47.144442346 +0000 UTC m=+0.326856317 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container) Nov 23 03:44:47 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:44:47 localhost systemd[1]: tmp-crun.m2lQIX.mount: Deactivated successfully. Nov 23 03:45:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:45:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:45:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:45:08 localhost podman[96868]: 2025-11-23 08:45:08.906118771 +0000 UTC m=+0.090883096 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4) Nov 23 03:45:08 localhost podman[96869]: 2025-11-23 08:45:08.952231593 +0000 UTC m=+0.136060609 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vendor=Red Hat, Inc., container_name=iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:45:08 localhost podman[96869]: 2025-11-23 08:45:08.961103928 +0000 UTC m=+0.144932954 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, version=17.1.12, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 23 03:45:08 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:45:09 localhost podman[96870]: 2025-11-23 08:45:09.053923392 +0000 UTC m=+0.233876839 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Nov 23 03:45:09 localhost podman[96868]: 2025-11-23 08:45:09.07039248 +0000 UTC m=+0.255156865 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, batch=17.1_20251118.1, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc.) Nov 23 03:45:09 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:45:09 localhost podman[96870]: 2025-11-23 08:45:09.095355401 +0000 UTC m=+0.275308828 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.openshift.expose-services=, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) 
Nov 23 03:45:09 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:45:09 localhost systemd[1]: tmp-crun.MeojD7.mount: Deactivated successfully. Nov 23 03:45:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:45:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:45:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:45:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:45:13 localhost podman[97007]: 2025-11-23 08:45:13.893644678 +0000 UTC m=+0.077093600 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, container_name=nova_migration_target, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z) Nov 23 03:45:13 localhost systemd[1]: tmp-crun.UuMS6g.mount: Deactivated successfully. 
Nov 23 03:45:13 localhost podman[97005]: 2025-11-23 08:45:13.971132369 +0000 UTC m=+0.155581212 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64) Nov 23 03:45:14 localhost podman[97005]: 2025-11-23 08:45:14.004873991 +0000 UTC m=+0.189322874 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:45:14 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:45:14 localhost podman[97006]: 2025-11-23 08:45:14.05606197 +0000 UTC m=+0.239509252 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-11-19T00:12:45Z, distribution-scope=public, maintainer=OpenStack TripleO Team) Nov 23 03:45:14 localhost podman[97008]: 2025-11-23 08:45:14.008799671 +0000 UTC m=+0.186064702 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, container_name=logrotate_crond, release=1761123044, version=17.1.12, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=) Nov 23 03:45:14 localhost podman[97008]: 2025-11-23 08:45:14.091464193 +0000 UTC m=+0.268729194 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, version=17.1.12, vcs-type=git, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, container_name=logrotate_crond, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:45:14 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:45:14 localhost podman[97006]: 2025-11-23 08:45:14.110412388 +0000 UTC m=+0.293859690 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 23 03:45:14 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:45:14 localhost podman[97007]: 2025-11-23 08:45:14.27124441 +0000 UTC m=+0.454693352 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-type=git) Nov 23 03:45:14 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:45:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:45:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:45:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:45:17 localhost podman[97097]: 2025-11-23 08:45:17.891176866 +0000 UTC m=+0.077281235 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 23 03:45:17 localhost podman[97098]: 2025-11-23 08:45:17.951057955 +0000 UTC m=+0.134179382 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 03:45:17 localhost podman[97097]: 2025-11-23 08:45:17.958718411 +0000 UTC m=+0.144822810 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, 
build-date=2025-11-18T23:34:05Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, container_name=ovn_controller) Nov 23 03:45:17 localhost podman[97097]: unhealthy Nov 23 03:45:17 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:45:17 localhost podman[97098]: 2025-11-23 08:45:17.973418415 +0000 UTC m=+0.156539812 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Nov 23 
03:45:17 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:45:17 localhost podman[97098]: unhealthy Nov 23 03:45:17 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:45:17 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:45:17 localhost podman[97096]: 2025-11-23 08:45:17.996731614 +0000 UTC m=+0.185871927 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, release=1761123044, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:45:18 localhost podman[97096]: 2025-11-23 08:45:18.18552636 +0000 UTC m=+0.374666613 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=metrics_qdr, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:45:18 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:45:31 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:45:31 localhost recover_tripleo_nova_virtqemud[97165]: 62093 Nov 23 03:45:31 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:45:31 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:45:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:45:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:45:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:45:39 localhost podman[97167]: 2025-11-23 08:45:39.895737829 +0000 UTC m=+0.081772264 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Nov 23 03:45:39 localhost podman[97167]: 2025-11-23 08:45:39.931373969 +0000 UTC m=+0.117408344 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4) Nov 23 03:45:39 localhost systemd[1]: tmp-crun.ccXWKe.mount: Deactivated successfully. Nov 23 03:45:39 localhost podman[97166]: 2025-11-23 08:45:39.950473679 +0000 UTC m=+0.139698423 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Nov 23 03:45:39 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:45:39 localhost podman[97166]: 2025-11-23 08:45:39.964425689 +0000 UTC m=+0.153650343 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3) Nov 23 03:45:39 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:45:40 localhost podman[97168]: 2025-11-23 08:45:40.052424794 +0000 UTC m=+0.234731944 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:45:40 localhost podman[97168]: 2025-11-23 08:45:40.083681699 +0000 UTC m=+0.265988839 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z) Nov 23 03:45:40 localhost systemd[1]: 
bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:45:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:45:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:45:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:45:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:45:44 localhost podman[97232]: 2025-11-23 08:45:44.891917964 +0000 UTC m=+0.073910492 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:45:44 localhost podman[97233]: 2025-11-23 08:45:44.947905372 +0000 UTC m=+0.128283370 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=nova_migration_target, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true) Nov 23 03:45:45 localhost podman[97232]: 2025-11-23 08:45:45.0058571 +0000 UTC m=+0.187849688 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true) Nov 23 03:45:45 localhost systemd[1]: tmp-crun.68s48e.mount: Deactivated successfully. Nov 23 03:45:45 localhost podman[97234]: 2025-11-23 08:45:45.01462127 +0000 UTC m=+0.191912423 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-cron, tcib_managed=true, managed_by=tripleo_ansible) Nov 23 03:45:45 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:45:45 localhost podman[97231]: 2025-11-23 08:45:45.048117134 +0000 UTC m=+0.232955970 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:45:45 localhost podman[97231]: 2025-11-23 08:45:45.076370745 +0000 UTC m=+0.261209621 container exec_died 
131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute) Nov 23 03:45:45 localhost podman[97234]: 2025-11-23 08:45:45.076903093 +0000 UTC m=+0.254194256 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Nov 23 03:45:45 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:45:45 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:45:45 localhost podman[97233]: 2025-11-23 08:45:45.329405224 +0000 UTC m=+0.509783312 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64) Nov 23 03:45:45 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:45:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:45:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:45:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:45:48 localhost systemd[1]: tmp-crun.9F7ylp.mount: Deactivated successfully. 
Nov 23 03:45:48 localhost podman[97323]: 2025-11-23 08:45:48.904081874 +0000 UTC m=+0.087822172 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, container_name=ovn_controller, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Nov 23 03:45:48 localhost podman[97323]: 2025-11-23 08:45:48.9451293 +0000 UTC m=+0.128869548 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.buildah.version=1.41.4, container_name=ovn_controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, batch=17.1_20251118.1) Nov 23 03:45:48 localhost podman[97323]: unhealthy Nov 23 03:45:48 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:45:48 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:45:49 localhost podman[97322]: 2025-11-23 08:45:48.947965407 +0000 UTC m=+0.133302934 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Nov 23 03:45:49 localhost podman[97324]: 2025-11-23 08:45:49.006788703 +0000 UTC m=+0.186982581 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:45:49 localhost podman[97324]: 2025-11-23 08:45:49.049368597 +0000 UTC m=+0.229562445 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, release=1761123044) Nov 23 03:45:49 localhost podman[97324]: unhealthy Nov 23 03:45:49 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:45:49 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
Nov 23 03:45:49 localhost podman[97322]: 2025-11-23 08:45:49.144962457 +0000 UTC m=+0.330300024 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, version=17.1.12) Nov 23 03:45:49 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:46:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:46:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:46:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:46:10 localhost podman[97385]: 2025-11-23 08:46:10.899695308 +0000 UTC m=+0.085059126 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc.) 
Nov 23 03:46:10 localhost podman[97385]: 2025-11-23 08:46:10.934915196 +0000 UTC m=+0.120278974 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Nov 23 03:46:10 localhost podman[97387]: 2025-11-23 08:46:10.947310918 +0000 UTC m=+0.125934117 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute) Nov 23 03:46:10 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:46:11 localhost podman[97387]: 2025-11-23 08:46:11.00340734 +0000 UTC m=+0.182030489 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Nov 23 03:46:11 localhost podman[97386]: 2025-11-23 08:46:11.00082369 +0000 UTC m=+0.180868513 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, 
summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid) Nov 23 03:46:11 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:46:11 localhost podman[97386]: 2025-11-23 08:46:11.083334096 +0000 UTC m=+0.263378919 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, vendor=Red Hat, Inc.) Nov 23 03:46:11 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:46:12 localhost podman[97549]: 2025-11-23 08:46:12.365208123 +0000 UTC m=+0.098930885 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, version=7, GIT_BRANCH=main, architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.component=rhceph-container, release=553, io.openshift.expose-services=, ceph=True, io.buildah.version=1.33.12, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 03:46:12 localhost podman[97549]: 2025-11-23 08:46:12.469313935 +0000 UTC m=+0.203036697 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph) Nov 23 03:46:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:46:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:46:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:46:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:46:15 localhost podman[97696]: 2025-11-23 08:46:15.91714559 +0000 UTC m=+0.097253323 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com) Nov 23 03:46:15 localhost systemd[1]: tmp-crun.kILXhU.mount: Deactivated successfully. 
Nov 23 03:46:15 localhost podman[97697]: 2025-11-23 08:46:15.967108891 +0000 UTC m=+0.145497470 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=nova_migration_target, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:46:15 localhost podman[97696]: 2025-11-23 08:46:15.974548061 +0000 UTC m=+0.154655763 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:46:15 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
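
The three entries above trace one complete podman healthcheck cycle for ceilometer_agent_ipmi: a transient systemd unit invokes the check, podman emits a health_status event followed by exec_died, and the unit is deactivated. A minimal parsing sketch for journal lines shaped like these (the regex and the helper name are illustrative assumptions, not part of podman or TripleO):

    import re

    # Matches the "container health_status <id> (image=..., name=..., health_status=...)"
    # layout seen in the entries above; this is not a general journald parser.
    HEALTH_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
    )

    def parse_health_line(line: str):
        m = HEALTH_RE.search(line)
        if m is None:
            return None
        return {
            "container_id": m.group("cid"),
            "image": m.group("image"),
            "name": m.group("name"),
            "health_status": m.group("status"),
        }

    # For the ceilometer_agent_ipmi entry above this yields health_status == "healthy".
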
Nov 23 03:46:16 localhost podman[97698]: 2025-11-23 08:46:16.036382409 +0000 UTC m=+0.211724474 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond) Nov 23 03:46:16 localhost podman[97695]: 2025-11-23 08:46:16.060122053 +0000 UTC m=+0.241293838 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z) Nov 23 03:46:16 localhost podman[97695]: 2025-11-23 08:46:16.114355226 +0000 UTC m=+0.295527031 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
container_name=ceilometer_agent_compute, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 23 03:46:16 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:46:16 localhost podman[97698]: 2025-11-23 08:46:16.171577011 +0000 UTC m=+0.346919066 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, distribution-scope=public, vcs-type=git, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=) Nov 23 03:46:16 
localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:46:16 localhost podman[97697]: 2025-11-23 08:46:16.359977036 +0000 UTC m=+0.538365585 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc.) Nov 23 03:46:16 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:46:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:46:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:46:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
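
Each of these events also carries a config_data label whose value is a Python-style dict literal (environment, healthcheck test, volumes, net/pid/privileged). A short sketch for pulling that dict out of a line, assuming the literal is well formed as it appears above (parse_config_data is an illustrative name, not an existing tool):

    import ast

    def parse_config_data(line: str) -> dict:
        # Locate the config_data={...} label and evaluate it as a Python
        # literal, matching braces so that trailing labels are not swallowed.
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unterminated config_data literal")

    # For the logrotate_crond entries above:
    # parse_config_data(line)["healthcheck"]["test"]
    # == '/usr/share/openstack-tripleo-common/healthcheck/cron'
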
Nov 23 03:46:19 localhost podman[97792]: 2025-11-23 08:46:19.906059432 +0000 UTC m=+0.086557651 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:14:25Z) Nov 23 03:46:19 localhost podman[97792]: 2025-11-23 08:46:19.946191791 +0000 UTC m=+0.126690010 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=ovn_metadata_agent, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git) Nov 23 03:46:19 localhost podman[97792]: unhealthy Nov 23 03:46:19 localhost systemd[1]: tmp-crun.vpuF8Q.mount: Deactivated successfully. Nov 23 03:46:19 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:46:19 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
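
Here the ovn_metadata_agent check reports health_status=unhealthy, podman prints "unhealthy" and exits non-zero, and the transient unit named after the container ID therefore fails with status=1/FAILURE. The same check can be replayed by hand; a sketch assuming podman is on PATH and the caller is allowed to run it against this container:

    import subprocess

    def run_healthcheck(container: str) -> str:
        # Re-run the container's configured healthcheck the way the transient
        # unit does; exit code 0 is healthy, anything else is what systemd
        # records above as status=1/FAILURE.
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return "healthy" if result.returncode == 0 else "unhealthy"

    # run_healthcheck("ovn_metadata_agent") -> "unhealthy" at the time of the entry above.
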
Nov 23 03:46:19 localhost podman[97791]: 2025-11-23 08:46:19.971357718 +0000 UTC m=+0.153399985 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:46:20 localhost podman[97790]: 2025-11-23 08:46:20.017825301 +0000 UTC m=+0.200526698 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, config_id=tripleo_step1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public) Nov 23 03:46:20 localhost podman[97791]: 2025-11-23 08:46:20.036435176 +0000 UTC m=+0.218477433 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044) Nov 23 03:46:20 localhost podman[97791]: unhealthy Nov 23 03:46:20 localhost systemd[1]: 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:46:20 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:46:20 localhost podman[97790]: 2025-11-23 08:46:20.205888414 +0000 UTC m=+0.388589821 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, release=1761123044) Nov 23 03:46:20 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:46:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:46:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:46:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:46:41 localhost systemd[1]: tmp-crun.ns4zUX.mount: Deactivated successfully. 
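
Each "Started /usr/bin/podman healthcheck run <id>" line corresponds to a transient systemd service named after the full container ID; it ends either with "Deactivated successfully" (healthy) or with "Failed with result 'exit-code'" (unhealthy), as with ovn_controller above. While such a unit is still loaded, its last result can be read back; a sketch using systemctl's Result property (the helper name is illustrative):

    import subprocess

    def last_unit_result(container_id: str) -> str:
        # "success" corresponds to the healthy case logged as "Deactivated
        # successfully"; "exit-code" corresponds to the failures logged above.
        out = subprocess.run(
            ["systemctl", "show", f"{container_id}.service", "--property=Result"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip().split("=", 1)[1]  # e.g. "Result=exit-code" -> "exit-code"
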
Nov 23 03:46:41 localhost podman[97859]: 2025-11-23 08:46:41.951576646 +0000 UTC m=+0.140871058 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:46:41 localhost podman[97859]: 2025-11-23 08:46:41.963443982 +0000 UTC m=+0.152738384 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 
collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, container_name=collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:46:41 localhost podman[97860]: 2025-11-23 08:46:41.912230682 +0000 UTC m=+0.099192362 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:46:41 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:46:42 localhost podman[97860]: 2025-11-23 08:46:42.046500585 +0000 UTC m=+0.233462255 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, version=17.1.12, container_name=iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 23 03:46:42 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:46:42 localhost podman[97861]: 2025-11-23 08:46:42.050504799 +0000 UTC m=+0.233364022 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, 
com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:46:42 localhost podman[97861]: 2025-11-23 08:46:42.145355256 +0000 UTC m=+0.328214459 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=nova_compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=) Nov 23 03:46:42 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
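
Across these entries the config_data labels differ mainly in the healthcheck command (nova_compute passes a port, '/openstack/healthcheck 5672', while ovn_controller passes 6642), in the privileged flag, and in the mounted volumes. Building on the illustrative parse_health_line and parse_config_data sketches above (so this snippet is not standalone), one way to summarize a batch of such journal lines:

    def summarize(lines):
        # Tabulate healthcheck command, privilege flag and network mode per
        # container, skipping exec_died entries that carry no health_status.
        rows = {}
        for line in lines:
            info = parse_health_line(line)
            if info is None or "config_data=" not in line:
                continue
            cfg = parse_config_data(line)
            rows[info["name"]] = {
                "healthcheck": cfg.get("healthcheck", {}).get("test"),
                "privileged": cfg.get("privileged", False),
                "net": cfg.get("net"),
            }
        return rows

    # rows["nova_compute"] -> {'healthcheck': '/openstack/healthcheck 5672',
    #                          'privileged': True, 'net': 'host'}
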
Nov 23 03:46:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:46:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:46:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:46:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:46:46 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:46:46 localhost recover_tripleo_nova_virtqemud[97946]: 62093 Nov 23 03:46:46 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:46:46 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:46:46 localhost podman[97928]: 2025-11-23 08:46:46.908385406 +0000 UTC m=+0.085864090 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Nov 23 03:46:46 localhost podman[97928]: 2025-11-23 08:46:46.945253103 +0000 UTC 
m=+0.122731777 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron) Nov 23 03:46:46 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:46:47 localhost systemd[1]: tmp-crun.u7Vj1V.mount: Deactivated successfully. 
Nov 23 03:46:47 localhost podman[97926]: 2025-11-23 08:46:47.024299283 +0000 UTC m=+0.204794740 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com) Nov 23 03:46:47 localhost podman[97925]: 2025-11-23 08:46:47.088769463 +0000 UTC m=+0.274665017 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, release=1761123044, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute) Nov 23 03:46:47 localhost podman[97926]: 2025-11-23 08:46:47.109700718 +0000 UTC m=+0.290196185 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:46:47 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:46:47 localhost podman[97927]: 2025-11-23 08:46:47.126233019 +0000 UTC m=+0.306041975 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Nov 23 03:46:47 localhost podman[97925]: 2025-11-23 08:46:47.141422388 +0000 UTC m=+0.327317962 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, vcs-type=git, distribution-scope=public) Nov 23 03:46:47 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:46:47 localhost podman[97927]: 2025-11-23 08:46:47.489479968 +0000 UTC m=+0.669288964 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Nov 23 03:46:47 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:46:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:46:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:46:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:46:50 localhost podman[98027]: 2025-11-23 08:46:50.896482264 +0000 UTC m=+0.088266026 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12) Nov 23 03:46:50 localhost systemd[1]: tmp-crun.aLb30c.mount: Deactivated successfully. 
Nov 23 03:46:50 localhost podman[98028]: 2025-11-23 08:46:50.960942563 +0000 UTC m=+0.146377068 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, release=1761123044, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:46:51 localhost podman[98028]: 2025-11-23 08:46:51.007449468 +0000 UTC m=+0.192883953 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, batch=17.1_20251118.1, release=1761123044) Nov 23 03:46:51 localhost podman[98028]: unhealthy Nov 23 03:46:51 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:46:51 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:46:51 localhost podman[98029]: 2025-11-23 08:46:51.008699896 +0000 UTC m=+0.190821089 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible) Nov 23 03:46:51 localhost podman[98027]: 2025-11-23 08:46:51.09241758 +0000 UTC m=+0.284201342 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Nov 23 03:46:51 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:46:51 localhost podman[98029]: 2025-11-23 08:46:51.143237748 +0000 UTC m=+0.325358961 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team) Nov 23 03:46:51 localhost podman[98029]: unhealthy Nov 23 03:46:51 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:46:51 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:47:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:47:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:47:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:47:12 localhost systemd[1]: tmp-crun.q4l05m.mount: Deactivated successfully. Nov 23 03:47:12 localhost podman[98097]: 2025-11-23 08:47:12.920356321 +0000 UTC m=+0.103390081 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step3, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Nov 23 03:47:12 localhost systemd[1]: tmp-crun.5rI3ii.mount: Deactivated successfully. 
Nov 23 03:47:12 localhost podman[98097]: 2025-11-23 08:47:12.961401107 +0000 UTC m=+0.144434887 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, tcib_managed=true, architecture=x86_64, version=17.1.12, config_id=tripleo_step3, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container) Nov 23 03:47:12 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:47:13 localhost podman[98099]: 2025-11-23 08:47:12.96503042 +0000 UTC m=+0.143889732 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step5) Nov 23 03:47:13 localhost podman[98098]: 2025-11-23 08:47:13.019944304 +0000 UTC m=+0.199795927 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Nov 23 03:47:13 localhost podman[98098]: 2025-11-23 08:47:13.033957326 +0000 UTC m=+0.213808959 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, version=17.1.12, release=1761123044, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64) Nov 23 03:47:13 localhost podman[98099]: 2025-11-23 08:47:13.044717058 +0000 UTC m=+0.223576430 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Nov 23 03:47:13 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:47:13 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:47:17 localhost systemd[1]: tmp-crun.DEeNI1.mount: Deactivated successfully. 
Nov 23 03:47:17 localhost podman[98239]: 2025-11-23 08:47:17.913467821 +0000 UTC m=+0.091800654 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_migration_target, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4) Nov 23 03:47:17 localhost podman[98238]: 2025-11-23 08:47:17.945966424 +0000 UTC m=+0.127381632 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4) Nov 23 03:47:17 localhost podman[98238]: 2025-11-23 08:47:17.965167277 +0000 UTC m=+0.146582485 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:47:17 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:47:17 localhost podman[98237]: 2025-11-23 08:47:17.881406232 +0000 UTC m=+0.067755153 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, 
architecture=x86_64, release=1761123044, build-date=2025-11-19T00:11:48Z) Nov 23 03:47:18 localhost podman[98237]: 2025-11-23 08:47:18.013314482 +0000 UTC m=+0.199663433 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:47:18 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:47:18 localhost podman[98244]: 2025-11-23 08:47:17.916136003 +0000 UTC m=+0.091770113 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team) Nov 23 03:47:18 localhost podman[98244]: 2025-11-23 08:47:18.09946905 +0000 UTC m=+0.275103190 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible) Nov 23 03:47:18 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:47:18 localhost podman[98239]: 2025-11-23 08:47:18.219601947 +0000 UTC m=+0.397934750 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:47:18 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:47:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:47:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:47:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:47:21 localhost podman[98330]: 2025-11-23 08:47:21.90127576 +0000 UTC m=+0.089951368 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, version=17.1.12, vcs-type=git, container_name=metrics_qdr, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1) Nov 23 03:47:21 localhost podman[98332]: 2025-11-23 08:47:21.963073887 +0000 UTC m=+0.144215772 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=ovn_metadata_agent) Nov 23 03:47:21 localhost podman[98331]: 2025-11-23 08:47:21.999102358 +0000 UTC m=+0.181785980 container health_status 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, batch=17.1_20251118.1, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container) Nov 23 03:47:22 localhost podman[98332]: 2025-11-23 08:47:22.007661203 +0000 UTC m=+0.188803088 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git) Nov 23 03:47:22 localhost podman[98332]: unhealthy Nov 23 03:47:22 localhost podman[98331]: 2025-11-23 08:47:22.015210345 +0000 UTC m=+0.197893977 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, distribution-scope=public, vcs-type=git, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Nov 23 03:47:22 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:47:22 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:47:22 localhost podman[98331]: unhealthy Nov 23 03:47:22 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:47:22 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:47:22 localhost podman[98330]: 2025-11-23 08:47:22.118334798 +0000 UTC m=+0.307010416 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:47:22 localhost systemd[1]: 
2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:47:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:47:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:47:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:47:43 localhost podman[98397]: 2025-11-23 08:47:43.904651964 +0000 UTC m=+0.093927421 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, version=17.1.12, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, container_name=collectd) Nov 23 03:47:43 localhost podman[98397]: 2025-11-23 08:47:43.91846259 +0000 UTC m=+0.107738087 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z) Nov 23 03:47:43 localhost podman[98398]: 2025-11-23 08:47:43.95507356 +0000 UTC m=+0.139960081 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-iscsid, release=1761123044, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.buildah.version=1.41.4) Nov 23 03:47:43 localhost podman[98398]: 2025-11-23 08:47:43.969629509 +0000 UTC m=+0.154516040 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, 
build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, container_name=iscsid, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:47:43 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:47:44 localhost podman[98399]: 2025-11-23 08:47:44.019761266 +0000 UTC m=+0.198469026 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:47:44 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:47:44 localhost podman[98399]: 2025-11-23 08:47:44.124460036 +0000 UTC m=+0.303167836 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 
nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=nova_compute) Nov 23 03:47:44 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:47:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:47:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:47:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:47:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:47:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:47:48 localhost recover_tripleo_nova_virtqemud[98490]: 62093 Nov 23 03:47:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:47:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:47:48 localhost systemd[1]: tmp-crun.SdNobP.mount: Deactivated successfully. Nov 23 03:47:48 localhost podman[98465]: 2025-11-23 08:47:48.934299591 +0000 UTC m=+0.112394389 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:47:48 localhost podman[98465]: 2025-11-23 08:47:48.962334156 +0000 UTC m=+0.140428904 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:47:48 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:47:49 localhost systemd[1]: tmp-crun.o1xYxh.mount: Deactivated successfully. Nov 23 03:47:49 localhost podman[98468]: 2025-11-23 08:47:49.032061478 +0000 UTC m=+0.200400555 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:47:49 localhost podman[98467]: 2025-11-23 08:47:49.073696263 +0000 UTC m=+0.244878137 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:47:49 localhost podman[98466]: 2025-11-23 08:47:49.130092163 +0000 UTC m=+0.305072995 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64) Nov 23 03:47:49 localhost podman[98468]: 2025-11-23 08:47:49.151789233 +0000 UTC m=+0.320128310 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, version=17.1.12, container_name=logrotate_crond) Nov 23 03:47:49 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:47:49 localhost podman[98466]: 2025-11-23 08:47:49.188419753 +0000 UTC m=+0.363400565 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true) Nov 23 03:47:49 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:47:49 localhost podman[98467]: 2025-11-23 08:47:49.443450943 +0000 UTC m=+0.614632847 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, container_name=nova_migration_target, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Nov 23 03:47:49 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:47:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:47:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:47:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:47:52 localhost systemd[1]: tmp-crun.6RK2cK.mount: Deactivated successfully. 
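The three "Started /usr/bin/podman healthcheck run <ID>" entries above are the periodic checks firing for metrics_qdr, ovn_controller and ovn_metadata_agent. As a rough sketch (assuming root access and the stock podman/systemctl tooling on this host; podman normally drives each check through transient units named after the full container ID, which is how these "Started …" records keep reappearing), the units behind one of these entries could be listed with:

  systemctl list-units --all 'e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736*'   # transient healthcheck unit(s) for ovn_controller
  systemctl list-timers --all | grep e8a40d17                                                      # when the next check is scheduled

The container ID is copied verbatim from the records above; the commands themselves are an illustrative assumption, not something recorded in this log.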
Nov 23 03:47:52 localhost podman[98565]: 2025-11-23 08:47:52.903468894 +0000 UTC m=+0.089887815 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container) Nov 23 03:47:52 localhost podman[98566]: 2025-11-23 08:47:52.959831813 +0000 UTC m=+0.140168356 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 03:47:52 localhost podman[98564]: 2025-11-23 08:47:52.929301211 +0000 UTC m=+0.115475745 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, release=1761123044, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Nov 23 03:47:52 localhost podman[98565]: 2025-11-23 08:47:52.987314371 +0000 UTC m=+0.173733242 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, release=1761123044, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1) Nov 23 03:47:52 localhost podman[98565]: unhealthy Nov 23 03:47:52 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:47:52 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
Nov 23 03:47:52 localhost podman[98566]: 2025-11-23 08:47:52.998253169 +0000 UTC m=+0.178589742 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:47:53 localhost podman[98566]: unhealthy Nov 23 03:47:53 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:47:53 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
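Both ovn_controller and ovn_metadata_agent report health_status=unhealthy above, and their transient units exit with status=1/FAILURE. A minimal way to re-run the same checks by hand, assuming root access on the host and using only the container names and healthcheck commands recorded in the config_data fields above ('/openstack/healthcheck 6642' and '/openstack/healthcheck'):

  sudo podman healthcheck run ovn_controller; echo "exit=$?"                   # re-runs the configured check; non-zero means unhealthy
  sudo podman exec ovn_metadata_agent /openstack/healthcheck; echo "exit=$?"   # invoke the configured check script directly inside the container
  systemctl status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service   # the failed transient unit from the records above

A non-zero exit from the check script is what podman prints as 'unhealthy' and what systemd then records as 'Main process exited, code=exited, status=1/FAILURE'.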
Nov 23 03:47:53 localhost podman[98564]: 2025-11-23 08:47:53.090449374 +0000 UTC m=+0.276623978 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, config_id=tripleo_step1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com) Nov 23 03:47:53 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:47:53 localhost systemd[1]: tmp-crun.IXP597.mount: Deactivated successfully. Nov 23 03:48:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:48:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:48:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:48:14 localhost systemd[1]: tmp-crun.yuvTyR.mount: Deactivated successfully. 
Nov 23 03:48:14 localhost podman[98631]: 2025-11-23 08:48:14.91306387 +0000 UTC m=+0.095157571 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., container_name=collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Nov 23 03:48:14 localhost podman[98631]: 2025-11-23 08:48:14.920936482 +0000 UTC m=+0.103030173 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=collectd, com.redhat.component=openstack-collectd-container) Nov 23 03:48:14 localhost podman[98632]: 2025-11-23 08:48:14.957540569 +0000 UTC m=+0.136859984 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, container_name=iscsid, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container) Nov 23 03:48:14 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:48:14 localhost podman[98632]: 2025-11-23 08:48:14.991577097 +0000 UTC m=+0.170896522 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, release=1761123044, 
distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team) Nov 23 03:48:15 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:48:15 localhost podman[98633]: 2025-11-23 08:48:15.062479099 +0000 UTC m=+0.236578924 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, architecture=x86_64, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-type=git) Nov 23 03:48:15 localhost podman[98633]: 2025-11-23 08:48:15.091791322 +0000 UTC m=+0.265891157 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, config_id=tripleo_step5, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-nova-compute) Nov 23 03:48:15 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
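For the healthy nova_compute cycle above, the configured check and its outcome could be confirmed from the host; this is a sketch that assumes the usual podman container-inspect layout (a Config.Healthcheck block) rather than anything recorded in the log itself, with the container name and unit ID taken from the records above:

  sudo podman inspect nova_compute --format '{{json .Config.Healthcheck}}'   # should show the configured test, e.g. '/openstack/healthcheck 5672', plus interval/retries
  sudo podman healthcheck run nova_compute && echo healthy                   # exit 0 matches the health_status=healthy seen above
  journalctl -u bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service --since '10 minutes ago'   # recent runs of the transient check unit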
Nov 23 03:48:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:48:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:48:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:48:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:48:19 localhost podman[98774]: 2025-11-23 08:48:19.91467247 +0000 UTC m=+0.087340950 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044) Nov 23 03:48:19 localhost podman[98773]: 2025-11-23 08:48:19.964085502 +0000 UTC m=+0.139014241 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:48:20 localhost podman[98775]: 2025-11-23 08:48:20.016663111 +0000 UTC m=+0.186874405 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, version=17.1.12, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team) Nov 23 03:48:20 localhost podman[98775]: 2025-11-23 08:48:20.026198754 +0000 UTC m=+0.196410048 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 23 03:48:20 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:48:20 localhost podman[98773]: 2025-11-23 08:48:20.045163728 +0000 UTC m=+0.220092497 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Nov 23 03:48:20 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:48:20 localhost podman[98772]: 2025-11-23 08:48:20.122786338 +0000 UTC m=+0.296633253 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:11:48Z, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, distribution-scope=public) Nov 23 03:48:20 localhost podman[98772]: 2025-11-23 08:48:20.152677577 +0000 UTC m=+0.326524502 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:48:20 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
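Each podman event in this excerpt follows the same shape: syslog prefix, podman timestamp, event type (health_status or exec_died), the 64-character container ID, then a parenthesised label dump whose first two fields are image= and name=. A small parsing sketch for these lines, assuming that fixed field order holds (it does for every entry shown here):

    import re

    EVENT_RE = re.compile(
        r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+)"
    )
    STATUS_RE = re.compile(r"health_status=(?P<status>healthy|unhealthy)")

    def parse_event(line: str):
        """Extract event type, container ID, image, name and health status (if any)."""
        m = EVENT_RE.search(line)
        if not m:
            return None
        status = STATUS_RE.search(line)
        return {
            "event": m.group("event"),
            "container_id": m.group("cid"),
            "image": m.group("image"),
            "name": m.group("name"),
            "health": status.group("status") if status else None,
        }

    # Shortened example built from the ceilometer_agent_compute entry above.
    line = ("Nov 23 03:48:20 localhost podman[98772]: 2025-11-23 08:48:20.12 +0000 UTC "
            "container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 "
            "(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, "
            "name=ceilometer_agent_compute, health_status=healthy, vcs-type=git)")
    print(parse_event(line))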
Nov 23 03:48:20 localhost podman[98774]: 2025-11-23 08:48:20.291569014 +0000 UTC m=+0.464237484 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:48:20 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:48:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:48:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:48:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:48:23 localhost systemd[1]: tmp-crun.GXHDJl.mount: Deactivated successfully. 
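The "Started /usr/bin/podman healthcheck run <id>" lines show systemd driving each check through a transient unit named after the container ID. podman exits 0 when the configured test passes (the unit then logs "Deactivated successfully") and non-zero when it fails, printing "unhealthy", as the ovn_controller and ovn_metadata_agent entries below show. A sketch of invoking the same check by hand and reading the result; the container name is taken from this excerpt and is only an example:

    import subprocess

    def run_healthcheck(container: str) -> bool:
        """Run the container's configured healthcheck once, like the transient units above."""
        # `podman healthcheck run` exits 0 when the check passes and non-zero otherwise,
        # which systemd records as "Deactivated successfully" vs "status=1/FAILURE".
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            print(f"{container}: unhealthy ({result.stdout.strip() or result.stderr.strip()})")
            return False
        print(f"{container}: healthy")
        return True

    # Example (requires podman on the host and the named container to exist):
    run_healthcheck("ovn_controller")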
Nov 23 03:48:23 localhost podman[98872]: 2025-11-23 08:48:23.885528519 +0000 UTC m=+0.073848894 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-type=git, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12) Nov 23 03:48:23 localhost podman[98873]: 2025-11-23 08:48:23.902953466 +0000 UTC m=+0.085886975 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:48:23 localhost podman[98873]: 2025-11-23 08:48:23.914006396 +0000 UTC m=+0.096939935 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, container_name=ovn_controller, vcs-type=git, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1) Nov 23 03:48:23 localhost podman[98873]: unhealthy Nov 23 03:48:23 localhost systemd[1]: 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:48:23 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:48:24 localhost podman[98874]: 2025-11-23 08:48:24.003772279 +0000 UTC m=+0.184187041 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:48:24 localhost podman[98874]: 2025-11-23 08:48:24.043359508 +0000 UTC m=+0.223774280 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Nov 23 03:48:24 localhost podman[98874]: unhealthy Nov 23 03:48:24 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:48:24 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
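When a check fails, the only identifier in the systemd messages is the 64-character container ID in the unit name (e8a40d17....service and f03c7872....service above). A small sketch for mapping such a unit name back to a container name; --filter id= and --format are standard podman ps options:

    import subprocess

    def container_name_for_unit(unit):
        """Map a transient unit like '<containerid>.service' back to the container name."""
        cid = unit.removesuffix(".service")
        out = subprocess.run(
            ["podman", "ps", "--all", "--filter", f"id={cid}", "--format", "{{.Names}}"],
            capture_output=True,
            text=True,
            check=True,
        ).stdout.strip()
        return out or None

    # Example with the unit that failed above (requires podman on the host):
    print(container_name_for_unit(
        "e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service"))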
Nov 23 03:48:24 localhost podman[98872]: 2025-11-23 08:48:24.1044678 +0000 UTC m=+0.292788215 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044) Nov 23 03:48:24 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:48:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:48:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:48:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:48:45 localhost systemd[1]: tmp-crun.tIRun3.mount: Deactivated successfully. 
Nov 23 03:48:45 localhost podman[98938]: 2025-11-23 08:48:45.911073504 +0000 UTC m=+0.096395948 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, version=17.1.12, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Nov 23 03:48:45 localhost podman[98938]: 2025-11-23 08:48:45.920032479 +0000 UTC m=+0.105354903 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack 
Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044) Nov 23 03:48:45 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
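Reducing a window of these events to a per-container status table makes the pattern in this excerpt easier to see: most checks report healthy, while ovn_controller and ovn_metadata_agent report unhealthy. A minimal latest-wins aggregation over (name, status) pairs as they would come out of the parser sketched earlier; the sample values are taken from the surrounding entries:

    # (container, health) pairs, e.g. as yielded by parse_event() on this log window.
    events = [
        ("ceilometer_agent_compute", "healthy"),
        ("metrics_qdr", "healthy"),
        ("ovn_controller", "unhealthy"),
        ("ovn_metadata_agent", "unhealthy"),
        ("collectd", "healthy"),
    ]

    latest = {}
    for name, status in events:
        latest[name] = status          # later events overwrite earlier ones

    for name in sorted(latest):
        print(f"{name:28s} {latest[name]}")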
Nov 23 03:48:46 localhost podman[98940]: 2025-11-23 08:48:46.003781648 +0000 UTC m=+0.185307166 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=nova_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12) Nov 23 03:48:46 localhost podman[98939]: 2025-11-23 08:48:46.063905829 +0000 UTC m=+0.247114899 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-iscsid-container, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true) Nov 23 03:48:46 localhost podman[98939]: 2025-11-23 08:48:46.072229796 +0000 UTC m=+0.255438816 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 23 03:48:46 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:48:46 localhost podman[98940]: 2025-11-23 08:48:46.084654328 +0000 UTC m=+0.266179836 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step5, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:48:46 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:48:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:48:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:48:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:48:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:48:50 localhost systemd[1]: tmp-crun.l4sG8Y.mount: Deactivated successfully. 
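Several of the healthcheck tests recorded above take a TCP port argument ('/openstack/healthcheck 5672' for nova_compute, '/openstack/healthcheck 6642' for ovn_controller). The script itself is not shown in this log; as a rough Python analogue of that kind of port-based liveness probe (an assumption, not the actual check), one could do:

    import socket

    def port_probe(port, host="127.0.0.1", timeout=5.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Ports taken from the healthcheck entries above (5672: nova_compute, 6642: ovn_controller).
    for port in (5672, 6642):
        state = "healthy" if port_probe(port) else "unhealthy"
        print(f"port {port}: {state}")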
Nov 23 03:48:50 localhost podman[99006]: 2025-11-23 08:48:50.926616424 +0000 UTC m=+0.105326483 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Nov 23 03:48:50 localhost podman[99007]: 2025-11-23 08:48:50.972118966 +0000 UTC m=+0.142760886 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, release=1761123044, build-date=2025-11-18T22:49:32Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 23 03:48:50 localhost podman[99007]: 2025-11-23 08:48:50.984259369 +0000 UTC m=+0.154901299 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-cron, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:48:50 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:48:51 localhost podman[99005]: 2025-11-23 08:48:51.076464048 +0000 UTC m=+0.255148666 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:48:51 localhost podman[99005]: 2025-11-23 08:48:51.121280448 +0000 UTC m=+0.299965096 container exec_died 
21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:48:51 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:48:51 localhost podman[99004]: 2025-11-23 08:48:51.125976793 +0000 UTC m=+0.306750366 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:48:51 localhost podman[99004]: 2025-11-23 08:48:51.205772659 +0000 UTC m=+0.386546192 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Nov 23 03:48:51 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:48:51 localhost podman[99006]: 2025-11-23 08:48:51.3477352 +0000 UTC m=+0.526445289 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, release=1761123044, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute) Nov 23 03:48:51 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:48:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:48:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:48:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:48:54 localhost systemd[1]: tmp-crun.JXBp0n.mount: Deactivated successfully. 
Nov 23 03:48:54 localhost podman[99099]: 2025-11-23 08:48:54.945323645 +0000 UTC m=+0.128401224 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12) Nov 23 03:48:54 localhost podman[99098]: 2025-11-23 08:48:54.914493546 +0000 UTC m=+0.101281259 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step1) Nov 23 03:48:54 localhost podman[99099]: 2025-11-23 08:48:54.988378171 +0000 UTC m=+0.171455770 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4) Nov 23 03:48:54 localhost podman[99099]: unhealthy Nov 23 03:48:55 localhost systemd[1]: 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:48:55 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:48:55 localhost podman[99100]: 2025-11-23 08:48:55.05071238 +0000 UTC m=+0.231012203 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, url=https://www.redhat.com, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:48:55 localhost podman[99100]: 2025-11-23 08:48:55.095375465 +0000 UTC m=+0.275675318 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4) Nov 23 03:48:55 localhost podman[99100]: unhealthy Nov 23 03:48:55 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:48:55 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
Nov 23 03:48:55 localhost podman[99098]: 2025-11-23 08:48:55.146503209 +0000 UTC m=+0.333290912 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container) Nov 23 03:48:55 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:49:03 localhost systemd[1]: session-28.scope: Deactivated successfully. Nov 23 03:49:03 localhost systemd[1]: session-28.scope: Consumed 7min 19.992s CPU time. Nov 23 03:49:03 localhost systemd-logind[760]: Session 28 logged out. Waiting for processes to exit. Nov 23 03:49:03 localhost systemd-logind[760]: Removed session 28. Nov 23 03:49:13 localhost systemd[1]: Stopping User Manager for UID 1003... Nov 23 03:49:13 localhost systemd[35974]: Activating special unit Exit the Session... Nov 23 03:49:13 localhost systemd[35974]: Removed slice User Background Tasks Slice. Nov 23 03:49:13 localhost systemd[35974]: Stopped target Main User Target. Nov 23 03:49:13 localhost systemd[35974]: Stopped target Basic System. Nov 23 03:49:13 localhost systemd[35974]: Stopped target Paths. 
Nov 23 03:49:13 localhost systemd[35974]: Stopped target Sockets.
Nov 23 03:49:13 localhost systemd[35974]: Stopped target Timers.
Nov 23 03:49:13 localhost systemd[35974]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 23 03:49:13 localhost systemd[35974]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 03:49:13 localhost systemd[35974]: Closed D-Bus User Message Bus Socket.
Nov 23 03:49:13 localhost systemd[35974]: Stopped Create User's Volatile Files and Directories.
Nov 23 03:49:13 localhost systemd[35974]: Removed slice User Application Slice.
Nov 23 03:49:13 localhost systemd[35974]: Reached target Shutdown.
Nov 23 03:49:13 localhost systemd[35974]: Finished Exit the Session.
Nov 23 03:49:13 localhost systemd[35974]: Reached target Exit the Session.
Nov 23 03:49:13 localhost systemd[1]: user@1003.service: Deactivated successfully.
Nov 23 03:49:13 localhost systemd[1]: Stopped User Manager for UID 1003.
Nov 23 03:49:13 localhost systemd[1]: user@1003.service: Consumed 4.997s CPU time, read 0B from disk, written 7.0K to disk.
Nov 23 03:49:13 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Nov 23 03:49:13 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Nov 23 03:49:13 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Nov 23 03:49:13 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Nov 23 03:49:13 localhost systemd[1]: Removed slice User Slice of UID 1003.
Nov 23 03:49:13 localhost systemd[1]: user-1003.slice: Consumed 7min 25.019s CPU time.
Nov 23 03:49:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.
Nov 23 03:49:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.
Nov 23 03:49:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.
Nov 23 03:49:16 localhost podman[99168]: 2025-11-23 08:49:16.905388346 +0000 UTC m=+0.090819576 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Nov 23 03:49:16 localhost podman[99168]: 2025-11-23 08:49:16.912538417 +0000 UTC m=+0.097969687 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=collectd, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 23 03:49:16 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:49:16 localhost systemd[1]: tmp-crun.6ilYyh.mount: Deactivated successfully. 
Nov 23 03:49:17 localhost podman[99169]: 2025-11-23 08:49:17.002516037 +0000 UTC m=+0.186244136 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, name=rhosp17/openstack-iscsid, vcs-type=git) Nov 23 03:49:17 localhost podman[99169]: 2025-11-23 08:49:17.014872047 +0000 UTC m=+0.198600096 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, container_name=iscsid, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-iscsid) Nov 23 03:49:17 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:49:17 localhost podman[99170]: 2025-11-23 08:49:17.113961417 +0000 UTC m=+0.293785985 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Nov 23 03:49:17 localhost podman[99170]: 2025-11-23 08:49:17.145455427 +0000 UTC m=+0.325279995 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, 
tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=nova_compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, distribution-scope=public, release=1761123044) Nov 23 03:49:17 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:49:17 localhost systemd[1]: tmp-crun.CyzEpj.mount: Deactivated successfully. Nov 23 03:49:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:49:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. 
Nov 23 03:49:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:49:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:49:21 localhost podman[99312]: 2025-11-23 08:49:21.907956637 +0000 UTC m=+0.086323029 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Nov 23 03:49:21 localhost podman[99310]: 2025-11-23 08:49:21.886706582 +0000 UTC m=+0.076027241 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4) Nov 23 03:49:21 localhost podman[99311]: 2025-11-23 08:49:21.953967253 +0000 UTC m=+0.135248684 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4) Nov 23 03:49:21 localhost podman[99310]: 2025-11-23 08:49:21.97530479 +0000 UTC m=+0.164625429 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:49:21 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:49:22 localhost podman[99318]: 2025-11-23 08:49:22.062936768 +0000 UTC m=+0.241503266 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, container_name=logrotate_crond, release=1761123044, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:49:22 localhost podman[99318]: 2025-11-23 08:49:22.0743668 +0000 UTC m=+0.252933348 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, container_name=logrotate_crond, vendor=Red Hat, Inc., distribution-scope=public, 
description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4) Nov 23 03:49:22 localhost podman[99311]: 2025-11-23 08:49:22.081650834 +0000 UTC m=+0.262932255 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, version=17.1.12) Nov 23 03:49:22 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:49:22 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:49:22 localhost podman[99312]: 2025-11-23 08:49:22.291395262 +0000 UTC m=+0.469761674 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, version=17.1.12, distribution-scope=public, vcs-type=git, architecture=x86_64, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=nova_migration_target) Nov 23 03:49:22 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:49:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:49:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:49:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:49:25 localhost podman[99404]: 2025-11-23 08:49:25.880518047 +0000 UTC m=+0.071310696 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 23 03:49:25 localhost podman[99405]: 2025-11-23 08:49:25.899348277 +0000 UTC m=+0.083117010 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) 
Nov 23 03:49:25 localhost podman[99404]: 2025-11-23 08:49:25.904502435 +0000 UTC m=+0.095294884 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=ovn_controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.buildah.version=1.41.4, release=1761123044) Nov 23 03:49:25 localhost podman[99404]: unhealthy Nov 23 03:49:25 localhost podman[99405]: 2025-11-23 08:49:25.914194564 +0000 UTC m=+0.097963327 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com) Nov 23 03:49:25 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:49:25 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:49:25 localhost podman[99405]: unhealthy Nov 23 03:49:25 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:49:25 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
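Note: the two healthcheck runs above came back unhealthy; podman printed "unhealthy" and systemd recorded the transient units exiting with status=1/FAILURE and "Failed with result 'exit-code'". Below is a minimal sketch (not the TripleO tooling) for confirming the stored health state of those two containers from the host, assuming root access; the container names are taken from the records above, and the JSON key is "Health" on newer podman builds and "Healthcheck" on older 3.x builds, so both are checked.

    #!/usr/bin/env python3
    # Minimal sketch: ask podman for the stored health state of the two
    # containers that just reported unhealthy. Container names come from
    # the journal records above.
    import json
    import subprocess

    def health(name):
        out = subprocess.run(["podman", "inspect", name],
                             check=True, capture_output=True, text=True).stdout
        state = json.loads(out)[0].get("State", {})
        # Newer podman stores this under "Health", older 3.x under "Healthcheck".
        return state.get("Health") or state.get("Healthcheck") or {}

    for name in ("ovn_controller", "ovn_metadata_agent"):
        h = health(name)
        print(name, h.get("Status", "unknown"))
        for entry in h.get("Log", [])[-3:]:   # last few recorded check runs
            print("   exit", entry.get("ExitCode"), (entry.get("Output") or "").strip())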
Nov 23 03:49:25 localhost podman[99403]: 2025-11-23 08:49:25.998028035 +0000 UTC m=+0.186777031 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Nov 23 03:49:26 localhost podman[99403]: 2025-11-23 08:49:26.179518113 +0000 UTC m=+0.368267079 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:49:26 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:49:26 localhost systemd[1]: tmp-crun.AaMFv8.mount: Deactivated successfully. Nov 23 03:49:41 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:49:41 localhost recover_tripleo_nova_virtqemud[99476]: 62093 Nov 23 03:49:41 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:49:41 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:49:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:49:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:49:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:49:47 localhost podman[99477]: 2025-11-23 08:49:47.873378385 +0000 UTC m=+0.065517167 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:49:47 localhost podman[99477]: 2025-11-23 08:49:47.888653607 +0000 UTC m=+0.080792439 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, 
vcs-type=git, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:49:47 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
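Note: each container's healthcheck command is visible in its config_data above ('healthcheck': {'test': '/openstack/healthcheck'}, with a port argument for some services, e.g. '/openstack/healthcheck 6642' for ovn_controller). A sketch of re-running such a check by hand follows; driving it through "podman exec" is an assumed way to reproduce the check for manual triage, not necessarily the exact invocation the transient unit uses.

    #!/usr/bin/env python3
    # Minimal sketch: run a container's own healthcheck test manually.
    # The test paths come from the config_data shown in the log; invoking
    # them via "podman exec" is an assumption for manual triage.
    import subprocess
    import sys

    def run_check(container, test_cmd):
        # Exit code 0 means healthy; anything else is what podman reports
        # as "unhealthy" in the journal.
        return subprocess.run(["podman", "exec", container] + test_cmd).returncode

    if __name__ == "__main__":
        rc = run_check("collectd", ["/openstack/healthcheck"])
        print("collectd healthcheck exit code:", rc)
        rc = run_check("ovn_controller", ["/openstack/healthcheck", "6642"])
        print("ovn_controller healthcheck exit code:", rc)
        sys.exit(rc)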
Nov 23 03:49:47 localhost podman[99479]: 2025-11-23 08:49:47.941086321 +0000 UTC m=+0.128064794 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, config_id=tripleo_step5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Nov 23 03:49:47 localhost podman[99479]: 2025-11-23 08:49:47.966662668 +0000 UTC m=+0.153641141 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, 
architecture=x86_64, container_name=nova_compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git) Nov 23 03:49:47 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
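Note: the "Started /usr/bin/podman healthcheck run <id>" and "<id>.service: Deactivated successfully" pairs above are transient systemd units named after the container ID; when a check fails, as with the ovn_controller unit earlier, the same unit instead ends with status=1/FAILURE. A sketch for reading the last result of one of those units with standard systemd properties; the container ID below is copied from the failed ovn_controller check in this log.

    #!/usr/bin/env python3
    # Minimal sketch: read the outcome of one podman-healthcheck transient
    # unit via systemd. Result/ExecMainStatus are standard systemd
    # properties, nothing TripleO-specific.
    import subprocess

    CID = "e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736"

    out = subprocess.run(
        ["systemctl", "show", CID + ".service",
         "--property=Result", "--property=ExecMainStatus"],
        capture_output=True, text=True).stdout
    print(out, end="")   # e.g. Result=exit-code / ExecMainStatus=1 after a failed run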
Nov 23 03:49:48 localhost podman[99478]: 2025-11-23 08:49:48.027797421 +0000 UTC m=+0.219175700 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, distribution-scope=public) Nov 23 03:49:48 localhost podman[99478]: 2025-11-23 08:49:48.040326086 +0000 UTC m=+0.231704395 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, container_name=iscsid, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Nov 23 03:49:48 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:49:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:49:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:49:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:49:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:49:52 localhost podman[99542]: 2025-11-23 08:49:52.911352268 +0000 UTC m=+0.089554928 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:49:52 localhost systemd[1]: tmp-crun.cDIsUs.mount: Deactivated successfully. 
Nov 23 03:49:52 localhost podman[99540]: 2025-11-23 08:49:52.968555799 +0000 UTC m=+0.152889497 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git) Nov 23 03:49:53 localhost podman[99541]: 2025-11-23 08:49:53.006392784 +0000 UTC m=+0.186232145 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, 
release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git) Nov 23 03:49:53 localhost podman[99540]: 2025-11-23 08:49:53.024521572 +0000 UTC m=+0.208855300 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red 
Hat, Inc., version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:49:53 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:49:53 localhost podman[99541]: 2025-11-23 08:49:53.065312718 +0000 UTC m=+0.245152099 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, 
com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible) Nov 23 03:49:53 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:49:53 localhost podman[99548]: 2025-11-23 08:49:53.116107342 +0000 UTC m=+0.289793713 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:49:53 localhost podman[99548]: 2025-11-23 08:49:53.162154369 +0000 UTC m=+0.335840730 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vcs-type=git, version=17.1.12, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64) Nov 23 03:49:53 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:49:53 localhost podman[99542]: 2025-11-23 08:49:53.237286292 +0000 UTC m=+0.415488922 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Nov 23 03:49:53 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:49:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:49:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:49:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:49:56 localhost podman[99630]: 2025-11-23 08:49:56.902301915 +0000 UTC m=+0.086796082 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, version=17.1.12, vcs-type=git, container_name=metrics_qdr, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Nov 23 03:49:56 localhost podman[99631]: 2025-11-23 08:49:56.959043773 +0000 UTC m=+0.138607179 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, managed_by=tripleo_ansible) Nov 23 03:49:56 localhost podman[99631]: 2025-11-23 08:49:56.978373618 +0000 UTC m=+0.157937004 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4) Nov 23 03:49:56 localhost podman[99631]: unhealthy Nov 23 03:49:56 localhost systemd[1]: 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:49:56 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:49:57 localhost podman[99632]: 2025-11-23 08:49:57.081954117 +0000 UTC m=+0.258314194 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, vcs-type=git) Nov 23 03:49:57 localhost podman[99632]: 2025-11-23 08:49:57.122523375 +0000 UTC m=+0.298883462 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4) Nov 23 03:49:57 localhost podman[99632]: unhealthy Nov 23 03:49:57 localhost podman[99630]: 2025-11-23 08:49:57.135517916 +0000 UTC m=+0.320012083 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container) Nov 23 03:49:57 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:49:57 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:49:57 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:49:57 localhost systemd[1]: tmp-crun.fVuTOZ.mount: Deactivated successfully. Nov 23 03:50:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:50:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:50:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:50:18 localhost podman[99699]: 2025-11-23 08:50:18.898135987 +0000 UTC m=+0.081695516 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, distribution-scope=public, vcs-type=git, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 23 03:50:18 localhost podman[99699]: 2025-11-23 08:50:18.911329204 +0000 UTC m=+0.094888783 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, container_name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Nov 23 03:50:18 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:50:18 localhost podman[99700]: 2025-11-23 08:50:18.96901783 +0000 UTC m=+0.148080090 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=) Nov 23 03:50:19 localhost podman[99701]: 2025-11-23 08:50:19.030642587 +0000 UTC m=+0.205910250 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, release=1761123044, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:36:58Z) Nov 23 03:50:19 localhost podman[99701]: 2025-11-23 08:50:19.061460846 +0000 UTC m=+0.236728559 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, version=17.1.12, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vcs-type=git) Nov 23 03:50:19 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:50:19 localhost podman[99700]: 2025-11-23 08:50:19.082934137 +0000 UTC m=+0.261996377 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 03:50:19 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:50:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:50:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:50:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:50:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:50:23 localhost podman[99843]: 2025-11-23 08:50:23.905080794 +0000 UTC m=+0.086957969 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:36:58Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:50:23 localhost podman[99841]: 2025-11-23 08:50:23.961655896 +0000 UTC m=+0.143555972 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Nov 23 03:50:24 localhost podman[99842]: 2025-11-23 08:50:24.011590593 +0000 UTC m=+0.189866446 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, url=https://www.redhat.com) Nov 23 03:50:24 localhost podman[99841]: 2025-11-23 08:50:24.022384765 +0000 UTC m=+0.204284901 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:50:24 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:50:24 localhost podman[99842]: 2025-11-23 08:50:24.050301014 +0000 UTC m=+0.228576917 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=ceilometer_agent_ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:50:24 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:50:24 localhost podman[99844]: 2025-11-23 08:50:24.115413549 +0000 UTC m=+0.291682841 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:50:24 localhost podman[99844]: 2025-11-23 08:50:24.150433107 +0000 UTC m=+0.326702359 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-cron, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Nov 23 03:50:24 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:50:24 localhost podman[99843]: 2025-11-23 08:50:24.268276785 +0000 UTC m=+0.450153930 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:50:24 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:50:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:50:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:50:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:50:27 localhost podman[99932]: 2025-11-23 08:50:27.908872956 +0000 UTC m=+0.093730408 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, 
architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, name=rhosp17/openstack-qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Nov 23 03:50:27 localhost systemd[1]: tmp-crun.ZAIcIG.mount: Deactivated successfully. Nov 23 03:50:27 localhost podman[99933]: 2025-11-23 08:50:27.984264597 +0000 UTC m=+0.167092676 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, container_name=ovn_controller, config_id=tripleo_step4, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 23 03:50:28 localhost podman[99933]: 2025-11-23 08:50:28.031443639 +0000 UTC m=+0.214271749 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12) Nov 23 03:50:28 localhost podman[99933]: unhealthy Nov 23 03:50:28 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:50:28 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:50:28 localhost podman[99934]: 2025-11-23 08:50:28.112667909 +0000 UTC m=+0.293200007 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-type=git, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 23 03:50:28 localhost podman[99934]: 2025-11-23 08:50:28.13021318 +0000 UTC m=+0.310745278 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 03:50:28 localhost podman[99934]: unhealthy Nov 23 03:50:28 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:50:28 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:50:28 localhost podman[99932]: 2025-11-23 08:50:28.177388573 +0000 UTC m=+0.362246045 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, version=17.1.12, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 
17.1 qdrouterd) Nov 23 03:50:28 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:50:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:50:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:50:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:50:49 localhost podman[99997]: 2025-11-23 08:50:49.908214884 +0000 UTC m=+0.092116567 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, release=1761123044) Nov 23 03:50:49 localhost podman[99997]: 2025-11-23 08:50:49.918449078 +0000 UTC m=+0.102350761 container exec_died 
82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:50:49 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:50:50 localhost systemd[1]: tmp-crun.vb1UHg.mount: Deactivated successfully. 
Nov 23 03:50:50 localhost podman[99999]: 2025-11-23 08:50:50.0156291 +0000 UTC m=+0.193912320 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_compute, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.12, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044) Nov 23 03:50:50 localhost podman[99998]: 2025-11-23 08:50:50.062024359 +0000 UTC m=+0.242484576 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:50:50 localhost podman[99998]: 2025-11-23 08:50:50.071500341 +0000 UTC m=+0.251960528 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com) Nov 23 03:50:50 localhost podman[99999]: 2025-11-23 08:50:50.074585966 +0000 UTC m=+0.252869156 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:50:50 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:50:50 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:50:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:50:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:50:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:50:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:50:54 localhost systemd[1]: tmp-crun.6MJCgF.mount: Deactivated successfully. 
Nov 23 03:50:54 localhost podman[100062]: 2025-11-23 08:50:54.90088925 +0000 UTC m=+0.082065417 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:50:54 localhost podman[100064]: 2025-11-23 08:50:54.910807255 +0000 UTC m=+0.086737071 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:50:54 localhost podman[100064]: 2025-11-23 08:50:54.971675359 +0000 UTC m=+0.147605205 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp17/openstack-cron, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Nov 23 03:50:54 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:50:55 localhost podman[100061]: 2025-11-23 08:50:54.950617111 +0000 UTC m=+0.136721590 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:50:55 localhost podman[100063]: 2025-11-23 08:50:54.973534677 +0000 UTC m=+0.151217717 container health_status 
c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Nov 23 03:50:55 localhost podman[100061]: 2025-11-23 08:50:55.030696186 +0000 UTC m=+0.216800665 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc.) Nov 23 03:50:55 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:50:55 localhost podman[100062]: 2025-11-23 08:50:55.078414035 +0000 UTC m=+0.259590202 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ceilometer_agent_ipmi) Nov 23 03:50:55 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:50:55 localhost podman[100063]: 2025-11-23 08:50:55.398616413 +0000 UTC m=+0.576299423 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:50:55 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:50:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:50:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:50:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:50:58 localhost podman[100153]: 2025-11-23 08:50:58.917227449 +0000 UTC m=+0.097880544 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Nov 23 03:50:58 localhost podman[100154]: 2025-11-23 08:50:58.966671081 +0000 UTC m=+0.145618744 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 23 03:50:59 localhost podman[100154]: 2025-11-23 08:50:59.008473998 +0000 UTC m=+0.187421621 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller) Nov 23 03:50:59 localhost podman[100154]: unhealthy Nov 23 03:50:59 localhost podman[100155]: 2025-11-23 08:50:59.018178487 +0000 UTC 
m=+0.192283711 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vcs-type=git) Nov 23 03:50:59 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:50:59 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
Nov 23 03:50:59 localhost podman[100155]: 2025-11-23 08:50:59.059705166 +0000 UTC m=+0.233810410 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1) Nov 23 03:50:59 localhost podman[100155]: unhealthy Nov 23 03:50:59 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:50:59 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
Nov 23 03:50:59 localhost podman[100153]: 2025-11-23 08:50:59.111469269 +0000 UTC m=+0.292122394 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:50:59 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:51:06 localhost sshd[100223]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:51:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:51:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:51:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:51:20 localhost podman[100225]: 2025-11-23 08:51:20.902925988 +0000 UTC m=+0.090887980 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step3, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-collectd) Nov 23 03:51:20 localhost podman[100225]: 2025-11-23 08:51:20.916426833 +0000 UTC m=+0.104388815 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, vcs-type=git, release=1761123044, com.redhat.component=openstack-collectd-container, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:51:20 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:51:21 localhost podman[100227]: 2025-11-23 08:51:21.000849462 +0000 UTC m=+0.184351336 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step5, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:51:21 localhost podman[100226]: 2025-11-23 08:51:21.052804151 +0000 UTC m=+0.238954037 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public) Nov 23 03:51:21 localhost podman[100226]: 2025-11-23 08:51:21.063207032 +0000 UTC m=+0.249356938 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, container_name=iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git) Nov 23 03:51:21 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:51:21 localhost podman[100227]: 2025-11-23 08:51:21.103316556 +0000 UTC m=+0.286818400 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 23 03:51:21 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:51:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:51:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:51:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:51:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:51:25 localhost podman[100364]: 2025-11-23 08:51:25.910826021 +0000 UTC m=+0.089670652 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, version=17.1.12, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z) Nov 23 03:51:25 localhost podman[100364]: 2025-11-23 08:51:25.944408045 +0000 UTC m=+0.123252636 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:51:25 localhost podman[100365]: 2025-11-23 08:51:25.966485005 +0000 UTC m=+0.142197538 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, release=1761123044, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container) Nov 23 03:51:26 localhost podman[100363]: 2025-11-23 08:51:26.036392607 +0000 UTC m=+0.215512056 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Nov 23 03:51:26 localhost systemd[1]: 
21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:51:26 localhost podman[100366]: 2025-11-23 08:51:26.12221837 +0000 UTC m=+0.296110828 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-cron, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:51:26 localhost podman[100366]: 2025-11-23 08:51:26.135587432 +0000 UTC m=+0.309479920 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
container_name=logrotate_crond, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.openshift.expose-services=) Nov 23 03:51:26 localhost podman[100363]: 2025-11-23 08:51:26.145814946 +0000 UTC m=+0.324934395 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:51:26 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:51:26 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:51:26 localhost podman[100365]: 2025-11-23 08:51:26.362952661 +0000 UTC m=+0.538665134 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:51:26 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:51:26 localhost systemd[1]: tmp-crun.Vde80C.mount: Deactivated successfully. Nov 23 03:51:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:51:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:51:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:51:29 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:51:29 localhost recover_tripleo_nova_virtqemud[100476]: 62093 Nov 23 03:51:29 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:51:29 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:51:29 localhost podman[100460]: 2025-11-23 08:51:29.902727758 +0000 UTC m=+0.087328979 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=metrics_qdr, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Nov 23 03:51:29 localhost systemd[1]: tmp-crun.Ka1CB9.mount: Deactivated successfully. Nov 23 03:51:29 localhost podman[100461]: 2025-11-23 08:51:29.960978271 +0000 UTC m=+0.142776626 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:51:30 localhost podman[100462]: 2025-11-23 08:51:30.006627616 +0000 UTC m=+0.184440178 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:51:30 localhost podman[100461]: 2025-11-23 08:51:30.025934821 +0000 UTC m=+0.207733126 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:51:30 localhost podman[100461]: unhealthy Nov 23 03:51:30 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:51:30 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:51:30 localhost podman[100462]: 2025-11-23 08:51:30.04538689 +0000 UTC m=+0.223199462 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:51:30 localhost podman[100462]: unhealthy Nov 23 03:51:30 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:51:30 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:51:30 localhost podman[100460]: 2025-11-23 08:51:30.105288014 +0000 UTC m=+0.289889215 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, release=1761123044) 
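
The records above show the ovn_controller and ovn_metadata_agent checks printing "unhealthy" and their transient systemd units failing with status=1/FAILURE, while metrics_qdr deactivates cleanly. Below is a minimal sketch of re-running those same checks by hand, assuming podman is available on the host and that the container names and test commands copied from the config_data labels above are still current; the script itself is illustrative and not part of the deployment.

    #!/usr/bin/env python3
    # Re-run the health checks the journal above reports as failing.
    # `podman healthcheck run` executes the check configured on the container
    # and exits non-zero when it is unhealthy, which is what the transient
    # systemd units log as 1/FAILURE.
    import subprocess

    # Container names and check commands copied from the config_data labels
    # in the records above.
    CHECKS = {
        "ovn_controller": ["/openstack/healthcheck", "6642"],
        "ovn_metadata_agent": ["/openstack/healthcheck"],
    }

    for name, test in CHECKS.items():
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'} (exit {rc})")
        if rc != 0:
            # The healthcheck run is silent about *why* it failed, so run the
            # same test command directly to capture its output.
            probe = subprocess.run(["podman", "exec", name, *test],
                                   capture_output=True, text=True)
            print((probe.stdout + probe.stderr).strip())
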
Nov 23 03:51:30 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:51:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:51:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:51:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:51:51 localhost podman[100531]: 2025-11-23 08:51:51.89732373 +0000 UTC m=+0.080151069 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, tcib_managed=true, com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid) Nov 23 03:51:51 localhost podman[100531]: 2025-11-23 08:51:51.931548633 +0000 UTC m=+0.114375932 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, container_name=iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, config_id=tripleo_step3, architecture=x86_64) Nov 23 03:51:51 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
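
Every health_status and exec_died record above carries a config_data label whose value is a Python-literal dict (environment, healthcheck test, volumes, and so on). A minimal sketch of recovering that structure with ast.literal_eval follows, using an abbreviated copy of the iscsid value from the record above; the shortened volumes list is the only departure from the logged data.

    #!/usr/bin/env python3
    # The config_data label on each record is a Python-literal dict, so it can
    # be parsed with ast.literal_eval instead of hand-written string handling.
    import ast

    # Abbreviated copy of the iscsid config_data from the record above; the
    # volumes list is truncated here to keep the example short.
    config_data_label = (
        "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', "
        "'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, "
        "'healthcheck': {'test': '/openstack/healthcheck'}, "
        "'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', "
        "'net': 'host', 'privileged': True, 'restart': 'always', "
        "'start_order': 2, "
        "'volumes': ['/etc/hosts:/etc/hosts:ro', '/dev:/dev', '/run:/run']}"
    )

    cfg = ast.literal_eval(config_data_label)
    print("healthcheck test:", cfg["healthcheck"]["test"])
    print("privileged:", cfg["privileged"])
    for volume in cfg["volumes"]:
        print("volume:", volume)
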
Nov 23 03:51:51 localhost podman[100530]: 2025-11-23 08:51:51.950503017 +0000 UTC m=+0.135930286 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-collectd) Nov 23 03:51:51 localhost podman[100530]: 2025-11-23 08:51:51.963243489 +0000 UTC m=+0.148670738 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat 
OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, container_name=collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Nov 23 03:51:51 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:51:52 localhost podman[100532]: 2025-11-23 08:51:52.048950468 +0000 UTC m=+0.228439144 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:51:52 localhost podman[100532]: 2025-11-23 08:51:52.106415607 +0000 UTC m=+0.285904263 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, 
version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, container_name=nova_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:51:52 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:51:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:51:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:51:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 03:51:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:51:56 localhost podman[100598]: 2025-11-23 08:51:56.921945299 +0000 UTC m=+0.097150421 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, batch=17.1_20251118.1, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:51:56 localhost podman[100595]: 2025-11-23 08:51:56.898327802 +0000 UTC m=+0.083007567 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:51:56 localhost systemd[1]: tmp-crun.hYqDa9.mount: Deactivated successfully. 
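
The same cycle repeats for every container: systemd starts a transient /usr/bin/podman healthcheck run unit, podman logs a container health_status event carrying name= and health_status= fields, then an exec_died event, and the unit deactivates (or fails when the check reports unhealthy). A minimal sketch of tallying the latest reported status per container from journal text on stdin, assuming one record per line as journalctl prints it; the regular expression relies only on the fields visible in these records.

    #!/usr/bin/env python3
    # Summarise the most recent health_status reported for each container.
    # Feed it journal text on stdin, one record per line.
    import re
    import sys

    # Matches the "container health_status <id> (image=..., name=<name>, ...,
    # health_status=<value>, ...)" events visible in the records above.
    EVENT = re.compile(
        r"container health_status \w+ \(image=[^,]+, name=([^,]+),"
        r".*?health_status=([a-z]+)"
    )

    latest = {}
    for line in sys.stdin:
        match = EVENT.search(line)
        if match:
            name, status = match.groups()
            latest[name] = status          # keep only the newest report

    for name, status in sorted(latest.items()):
        print(f"{name}: {status}")
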
Nov 23 03:51:56 localhost podman[100596]: 2025-11-23 08:51:56.961221438 +0000 UTC m=+0.139662521 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi) Nov 23 03:51:56 localhost podman[100595]: 2025-11-23 08:51:56.982443822 +0000 UTC m=+0.167123607 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute) Nov 23 03:51:56 localhost podman[100596]: 2025-11-23 08:51:56.993648577 +0000 UTC m=+0.172089630 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 23 03:51:56 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:51:57 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:51:57 localhost podman[100597]: 2025-11-23 08:51:57.054682676 +0000 UTC m=+0.232860690 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, container_name=nova_migration_target) Nov 23 03:51:57 localhost podman[100598]: 2025-11-23 08:51:57.057218944 +0000 UTC m=+0.232424036 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z) Nov 23 03:51:57 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:51:57 localhost podman[100597]: 2025-11-23 08:51:57.408495108 +0000 UTC m=+0.586673222 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:36:58Z, tcib_managed=true, batch=17.1_20251118.1, container_name=nova_migration_target, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Nov 23 03:51:57 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:52:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:52:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:52:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:52:00 localhost podman[100688]: 2025-11-23 08:52:00.894255672 +0000 UTC m=+0.080867360 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:49:46Z) Nov 23 03:52:00 localhost podman[100690]: 2025-11-23 08:52:00.950254716 +0000 UTC m=+0.131946903 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git) Nov 23 03:52:01 localhost podman[100690]: 2025-11-23 08:52:01.003843936 +0000 UTC m=+0.185536143 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true) Nov 23 03:52:01 localhost podman[100690]: unhealthy Nov 23 03:52:01 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:52:01 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
Nov 23 03:52:01 localhost podman[100689]: 2025-11-23 08:52:01.005168888 +0000 UTC m=+0.191132936 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 23 03:52:01 localhost podman[100688]: 2025-11-23 08:52:01.080365823 +0000 UTC m=+0.266977471 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, release=1761123044, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:52:01 localhost podman[100689]: 2025-11-23 08:52:01.093775715 +0000 UTC m=+0.279739723 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=ovn_controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12) Nov 23 03:52:01 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:52:01 localhost podman[100689]: unhealthy Nov 23 03:52:01 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:52:01 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:52:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:52:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.1 total, 600.0 interval
Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 5370 writes, 735 syncs, 7.31 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:52:01 localhost systemd[1]: tmp-crun.QffNez.mount: Deactivated successfully. Nov 23 03:52:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 03:52:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.1 total, 600.0 interval
Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 5205 writes, 665 syncs, 7.83 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:52:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:52:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:52:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.
Nov 23 03:52:22 localhost podman[100756]: 2025-11-23 08:52:22.904379524 +0000 UTC m=+0.084826772 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, config_id=tripleo_step5, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:52:22 localhost podman[100756]: 2025-11-23 08:52:22.931244382 +0000 UTC m=+0.111691620 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step5, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com) Nov 23 03:52:22 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:52:22 localhost podman[100755]: 2025-11-23 08:52:22.941031973 +0000 UTC m=+0.122481232 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, distribution-scope=public, container_name=iscsid, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 23 03:52:22 localhost podman[100755]: 2025-11-23 08:52:22.974847684 +0000 UTC m=+0.156296943 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, 
tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, container_name=iscsid) Nov 23 03:52:23 localhost podman[100754]: 2025-11-23 08:52:23.004073454 +0000 UTC m=+0.186855354 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, container_name=collectd, name=rhosp17/openstack-collectd, version=17.1.12, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) Nov 23 03:52:23 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:52:23 localhost podman[100754]: 2025-11-23 08:52:23.035418349 +0000 UTC m=+0.218200219 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z) Nov 23 03:52:23 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:52:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:52:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:52:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:52:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:52:27 localhost podman[100900]: 2025-11-23 08:52:27.903288583 +0000 UTC m=+0.081494191 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.) Nov 23 03:52:27 localhost systemd[1]: tmp-crun.V7H9ec.mount: Deactivated successfully. Nov 23 03:52:27 localhost podman[100898]: 2025-11-23 08:52:27.963856347 +0000 UTC m=+0.147254024 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, container_name=ceilometer_agent_compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute) Nov 23 03:52:28 localhost podman[100899]: 2025-11-23 08:52:28.018083307 +0000 UTC m=+0.199020789 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, release=1761123044, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) 
Nov 23 03:52:28 localhost podman[100899]: 2025-11-23 08:52:28.05328525 +0000 UTC m=+0.234222762 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, architecture=x86_64, maintainer=OpenStack TripleO Team) Nov 23 03:52:28 localhost podman[100901]: 2025-11-23 08:52:28.060539844 +0000 UTC m=+0.233740977 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, tcib_managed=true, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Nov 23 03:52:28 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:52:28 localhost podman[100898]: 2025-11-23 08:52:28.071731829 +0000 UTC m=+0.255129546 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1) Nov 23 03:52:28 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:52:28 localhost podman[100901]: 2025-11-23 08:52:28.099506874 +0000 UTC m=+0.272707997 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:52:28 localhost systemd[1]: 
c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:52:28 localhost podman[100900]: 2025-11-23 08:52:28.260618313 +0000 UTC m=+0.438823881 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:36:58Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team) Nov 23 03:52:28 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:52:28 localhost systemd[1]: tmp-crun.v3va8F.mount: Deactivated successfully. Nov 23 03:52:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:52:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:52:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:52:31 localhost podman[100995]: 2025-11-23 08:52:31.896159358 +0000 UTC m=+0.083764700 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=metrics_qdr, vendor=Red Hat, Inc., config_id=tripleo_step1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:52:31 localhost podman[100996]: 2025-11-23 08:52:31.945621601 +0000 UTC m=+0.131179149 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, release=1761123044, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Nov 23 03:52:31 localhost podman[100996]: 2025-11-23 08:52:31.98845553 +0000 UTC m=+0.174013088 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z) Nov 23 03:52:31 localhost systemd[1]: tmp-crun.odNmjA.mount: Deactivated successfully. 
Nov 23 03:52:31 localhost podman[100996]: unhealthy Nov 23 03:52:32 localhost podman[100997]: 2025-11-23 08:52:32.002735269 +0000 UTC m=+0.184962845 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:52:32 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:52:32 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
Nov 23 03:52:32 localhost podman[100997]: 2025-11-23 08:52:32.039914634 +0000 UTC m=+0.222142210 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 03:52:32 localhost podman[100997]: unhealthy Nov 23 03:52:32 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:52:32 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
Nov 23 03:52:32 localhost podman[100995]: 2025-11-23 08:52:32.131642938 +0000 UTC m=+0.319248200 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:52:32 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:52:53 localhost systemd[1]: tmp-crun.b2HOUC.mount: Deactivated successfully. 
Nov 23 03:52:53 localhost podman[101064]: 2025-11-23 08:52:53.907156616 +0000 UTC m=+0.092554680 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, name=rhosp17/openstack-collectd, container_name=collectd, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:52:53 localhost podman[101065]: 2025-11-23 08:52:53.941238385 +0000 UTC m=+0.123452372 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, tcib_managed=true, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step3) Nov 23 03:52:53 localhost podman[101065]: 2025-11-23 08:52:53.979435691 +0000 UTC m=+0.161649638 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step3, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 03:52:53 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:52:53 localhost podman[101064]: 2025-11-23 08:52:53.99531542 +0000 UTC m=+0.180713504 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp17/openstack-collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1) Nov 23 03:52:54 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:52:54 localhost podman[101066]: 2025-11-23 08:52:54.054164091 +0000 UTC m=+0.233368025 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, version=17.1.12, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:52:54 localhost podman[101066]: 2025-11-23 08:52:54.083235727 +0000 UTC m=+0.262439691 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, container_name=nova_compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, release=1761123044, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step5, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:52:54 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:52:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:52:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:52:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:52:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:52:58 localhost podman[101125]: 2025-11-23 08:52:58.899906034 +0000 UTC m=+0.083770480 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12) Nov 23 03:52:58 localhost podman[101128]: 2025-11-23 08:52:58.951895705 +0000 UTC m=+0.128597490 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, 
name=logrotate_crond, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, vcs-type=git, name=rhosp17/openstack-cron) Nov 23 03:52:58 localhost podman[101125]: 2025-11-23 08:52:58.956480716 +0000 UTC m=+0.140345152 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:52:58 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:52:59 localhost podman[101127]: 2025-11-23 08:52:59.005818995 +0000 UTC m=+0.183758898 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., container_name=nova_migration_target, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, name=rhosp17/openstack-nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 
nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:52:59 localhost podman[101128]: 2025-11-23 08:52:59.010463259 +0000 UTC m=+0.187165044 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12) Nov 23 03:52:59 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:52:59 localhost podman[101126]: 2025-11-23 08:52:59.0608573 +0000 UTC m=+0.242951411 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi) Nov 23 03:52:59 localhost podman[101126]: 2025-11-23 08:52:59.082543527 +0000 UTC m=+0.264637638 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_agent_ipmi, 
managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:52:59 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:52:59 localhost podman[101127]: 2025-11-23 08:52:59.367899662 +0000 UTC m=+0.545839555 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git) Nov 23 03:52:59 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:53:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:53:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:53:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:53:02 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:53:02 localhost recover_tripleo_nova_virtqemud[101234]: 62093 Nov 23 03:53:02 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:53:02 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 23 03:53:02 localhost podman[101218]: 2025-11-23 08:53:02.899058234 +0000 UTC m=+0.084008377 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 23 03:53:02 localhost systemd[1]: tmp-crun.3Otv68.mount: Deactivated successfully. 
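Each `container health_status` record above embeds the container's full definition as a `config_data={...}` Python-style literal inside the parenthesized label dump. A minimal sketch for pulling that dict back out of a journal line is shown below; the helper name, the brace-matching scan, and the shortened sample line are my own illustration, not part of podman or TripleO.

```python
# Sketch (my own tooling, not part of podman/TripleO): extract the embedded
# config_data dict from a journal line like the ones above. Assumes the dict
# is a well-formed Python literal, which is how it appears in these entries.
import ast

def extract_config_data(line: str) -> dict:
    """Return the config_data={...} payload from one journal line."""
    start = line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[start:i + 1])
    raise ValueError("unbalanced config_data literal")

# Shortened sample in the same shape as the metrics_qdr entry above:
sample = ("container health_status ... (image=registry.redhat.io/rhosp-rhel9/"
          "openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, "
          "config_data={'healthcheck': {'test': '/openstack/healthcheck'}, "
          "'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', "
          "'net': 'host', 'privileged': False}, config_id=tripleo_step1)")
cfg = extract_config_data(sample)
print(cfg["healthcheck"]["test"])   # -> /openstack/healthcheck
```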
Nov 23 03:53:02 localhost podman[101219]: 2025-11-23 08:53:02.964029724 +0000 UTC m=+0.146434439 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 03:53:02 localhost podman[101220]: 2025-11-23 08:53:02.993507102 +0000 UTC m=+0.172164572 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, architecture=x86_64) Nov 23 03:53:03 localhost podman[101220]: 2025-11-23 08:53:03.03340173 +0000 UTC m=+0.212059190 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., 
name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, version=17.1.12) Nov 23 03:53:03 localhost podman[101220]: unhealthy Nov 23 03:53:03 localhost podman[101219]: 2025-11-23 08:53:03.043843781 +0000 UTC m=+0.226248446 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z) Nov 23 03:53:03 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, 
status=1/FAILURE Nov 23 03:53:03 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:53:03 localhost podman[101219]: unhealthy Nov 23 03:53:03 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:53:03 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:53:03 localhost podman[101218]: 2025-11-23 08:53:03.099482604 +0000 UTC m=+0.284432737 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, architecture=x86_64) Nov 23 03:53:03 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
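The sequence just above shows how a failing check propagates: podman prints "unhealthy", the transient systemd unit named after the container ID exits with status=1/FAILURE, and systemd records "Failed with result 'exit-code'"; a healthy run instead ends with "Deactivated successfully". A small sketch of re-running the same command systemd launches ("/usr/bin/podman healthcheck run <container-id>") and reading the result from the exit code follows; the container ID argument is a placeholder.

```python
# Sketch only: invoke the same healthcheck command the transient units above
# run, and map the exit code the way the journal shows it:
#   0 -> healthy ("Deactivated successfully"), 1 -> unhealthy (status=1/FAILURE).
import subprocess

def check(container_id: str) -> str:
    proc = subprocess.run(
        ["podman", "healthcheck", "run", container_id],
        capture_output=True, text=True,
    )
    # podman also prints "unhealthy" on failure, as seen in the log above
    return "healthy" if proc.returncode == 0 else "unhealthy"

if __name__ == "__main__":
    # placeholder: the ovn_controller container ID from this host's log
    print(check("e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736"))
```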
Nov 23 03:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:53:24 localhost podman[101288]: 2025-11-23 08:53:24.905523061 +0000 UTC m=+0.089196026 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git) Nov 23 03:53:24 localhost podman[101288]: 2025-11-23 08:53:24.918291164 +0000 UTC m=+0.101964199 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc.) Nov 23 03:53:24 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:53:25 localhost podman[101287]: 2025-11-23 08:53:25.006897613 +0000 UTC m=+0.191603631 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:53:25 localhost podman[101287]: 2025-11-23 08:53:25.015274081 +0000 UTC m=+0.199980139 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, version=17.1.12, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible) Nov 23 03:53:25 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
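For orientation, the keys in these config_data blocks (net, pid, user, privileged, memory, cap_add, environment, volumes, restart, image) correspond to ordinary podman run options. The sketch below is a rough, hand-written illustration of that mapping using the collectd entry above; it is not the actual tripleo_ansible/kolla translation and only handles the keys shown.

```python
# Rough illustration only (not the real tripleo_ansible code path): how fields
# from a config_data block like collectd's line up with plain `podman run` flags.
def podman_run_args(name: str, cfg: dict) -> list[str]:
    args = ["podman", "run", "--name", name, "--detach"]
    if cfg.get("net"):
        args += ["--network", cfg["net"]]
    if cfg.get("pid"):
        args += ["--pid", cfg["pid"]]
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    if cfg.get("privileged"):
        args.append("--privileged")
    if cfg.get("memory"):
        args += ["--memory", cfg["memory"]]
    for cap in cfg.get("cap_add", []):
        args += ["--cap-add", cap]
    for env, val in cfg.get("environment", {}).items():
        args += ["--env", f"{env}={val}"]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    if cfg.get("restart"):
        args += ["--restart", cfg["restart"]]
    args.append(cfg["image"])
    return args

# Abbreviated collectd example drawn from the entry above:
print(" ".join(podman_run_args("collectd", {
    "image": "registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1",
    "net": "host", "pid": "host", "user": "root", "memory": "512m",
    "cap_add": ["IPC_LOCK"], "restart": "always",
    "volumes": ["/var/log/containers/collectd:/var/log/collectd:rw,z"],
})))
```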
Nov 23 03:53:25 localhost podman[101289]: 2025-11-23 08:53:25.108122229 +0000 UTC m=+0.288763121 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1) Nov 23 03:53:25 localhost podman[101289]: 2025-11-23 08:53:25.140261068 +0000 UTC m=+0.320901960 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_id=tripleo_step5, batch=17.1_20251118.1, container_name=nova_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044) Nov 23 03:53:25 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:53:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:53:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:53:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 03:53:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:53:29 localhost systemd[1]: tmp-crun.qweUnC.mount: Deactivated successfully. Nov 23 03:53:30 localhost podman[101431]: 2025-11-23 08:53:29.97886126 +0000 UTC m=+0.156144288 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc.) 
Nov 23 03:53:30 localhost podman[101430]: 2025-11-23 08:53:30.035918797 +0000 UTC m=+0.219159929 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, distribution-scope=public) Nov 23 03:53:30 localhost podman[101432]: 2025-11-23 08:53:30.007309767 +0000 UTC m=+0.141084486 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z) Nov 23 03:53:30 localhost podman[101436]: 2025-11-23 08:53:29.949797275 +0000 UTC m=+0.122880004 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, container_name=logrotate_crond) Nov 23 03:53:30 localhost podman[101430]: 2025-11-23 08:53:30.092030515 +0000 UTC m=+0.275271627 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 03:53:30 localhost systemd[1]: 
131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:53:30 localhost podman[101431]: 2025-11-23 08:53:30.115733384 +0000 UTC m=+0.293016422 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12) Nov 23 03:53:30 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:53:30 localhost podman[101436]: 2025-11-23 08:53:30.142202329 +0000 UTC m=+0.315285098 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, com.redhat.component=openstack-cron-container) Nov 23 03:53:30 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:53:30 localhost podman[101432]: 2025-11-23 08:53:30.368301339 +0000 UTC m=+0.502076018 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, release=1761123044, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1) Nov 23 03:53:30 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:53:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:53:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:53:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:53:33 localhost systemd[1]: tmp-crun.SXvDL8.mount: Deactivated successfully. 
Nov 23 03:53:33 localhost podman[101522]: 2025-11-23 08:53:33.905174689 +0000 UTC m=+0.088826765 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4) Nov 23 03:53:33 localhost podman[101523]: 2025-11-23 08:53:33.956769767 +0000 UTC m=+0.139226806 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:53:33 localhost podman[101523]: 2025-11-23 08:53:33.974313168 +0000 UTC m=+0.156770177 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, release=1761123044, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 23 03:53:33 localhost podman[101523]: unhealthy Nov 23 03:53:33 localhost systemd[1]: 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:53:33 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:53:34 localhost podman[101524]: 2025-11-23 08:53:34.062115931 +0000 UTC m=+0.242901779 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Nov 23 03:53:34 localhost podman[101522]: 2025-11-23 08:53:34.097430038 +0000 UTC m=+0.281082124 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Nov 23 03:53:34 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:53:34 localhost podman[101524]: 2025-11-23 08:53:34.15432765 +0000 UTC m=+0.335113518 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 23 03:53:34 localhost podman[101524]: unhealthy Nov 23 03:53:34 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:53:34 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
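
[Editor's note, not part of the log] The entries above show the pattern for a failing check: podman logs `container health_status ... health_status=unhealthy` for ovn_controller and ovn_metadata_agent, prints `unhealthy`, and the transient `/usr/bin/podman healthcheck run <ID>` unit then exits 1, so systemd records `Failed with result 'exit-code'`; healthy containers end with `Deactivated successfully` instead. A minimal sketch follows for pulling those markers out of a saved excerpt like this one; the file path is hypothetical and the regexes are tailored only to the fields that actually appear in these entries.

```python
#!/usr/bin/env python3
"""Sketch: flag failing podman healthchecks in a saved journal excerpt.

Assumptions: the excerpt is plain text (default path below is hypothetical);
matching targets only the markers seen above -- podman's
`container health_status ... health_status=<status>` events and systemd's
`<unit>.service: Failed with result 'exit-code'` lines for the transient
healthcheck units.
"""
import re
import sys
from collections import Counter

# name=<container>, health_status=<status> immediately follow the image ref
HEALTH_RE = re.compile(
    r"container health_status \S+ \(image=[^,]+, name=([\w.-]+), health_status=(\w+)"
)
# 64-hex-char transient unit name, e.g. e8a40d17...aee8736.service
FAILED_RE = re.compile(r"([0-9a-f]{64})\.service: Failed with result 'exit-code'")


def summarize(path: str) -> None:
    statuses: Counter = Counter()      # (container name, reported status) -> count
    failed_units: Counter = Counter()  # healthcheck unit (container ID) -> failure count
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            for name, status in HEALTH_RE.findall(line):
                statuses[(name, status)] += 1
            for unit_id in FAILED_RE.findall(line):
                failed_units[unit_id] += 1

    for (name, status), count in sorted(statuses.items()):
        print(f"{name}: {status} x{count}")
    for unit_id, count in sorted(failed_units.items()):
        print(f"unit {unit_id[:12]}...: healthcheck exited non-zero {count} time(s)")


if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "journal-excerpt.log")
```

On this excerpt the summary would report metrics_qdr, collectd, iscsid, nova_compute and the ceilometer agents as healthy, and ovn_controller plus ovn_metadata_agent as unhealthy, matching the two `Failed with result 'exit-code'` units above.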
Nov 23 03:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:53:55 localhost systemd[1]: tmp-crun.Yq9MP5.mount: Deactivated successfully. Nov 23 03:53:55 localhost podman[101589]: 2025-11-23 08:53:55.957122868 +0000 UTC m=+0.136042740 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:53:55 localhost podman[101587]: 2025-11-23 08:53:55.919125108 +0000 UTC m=+0.102673843 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:53:56 localhost podman[101587]: 2025-11-23 08:53:56.011382508 +0000 UTC m=+0.194931243 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:53:56 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:53:56 localhost podman[101589]: 2025-11-23 08:53:56.032829379 +0000 UTC m=+0.211749241 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:53:56 localhost podman[101588]: 2025-11-23 08:53:56.008355025 +0000 UTC m=+0.190709812 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, container_name=iscsid, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) Nov 23 03:53:56 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:53:56 localhost podman[101588]: 2025-11-23 08:53:56.092440153 +0000 UTC m=+0.274794880 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-iscsid, container_name=iscsid, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true) Nov 23 03:53:56 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:54:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:54:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:54:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:54:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:54:00 localhost podman[101653]: 2025-11-23 08:54:00.910537706 +0000 UTC m=+0.091692115 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, version=17.1.12) Nov 23 03:54:00 localhost podman[101661]: 2025-11-23 08:54:00.958551354 +0000 UTC m=+0.126945389 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, version=17.1.12, 
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:54:00 localhost podman[101653]: 2025-11-23 08:54:00.967916432 +0000 UTC m=+0.149070801 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, 
batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team) Nov 23 03:54:00 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:54:00 localhost podman[101661]: 2025-11-23 08:54:00.994433288 +0000 UTC m=+0.162827343 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, container_name=logrotate_crond, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Nov 23 03:54:01 localhost systemd[1]: 
tmp-crun.oelmCA.mount: Deactivated successfully. Nov 23 03:54:01 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:54:01 localhost podman[101654]: 2025-11-23 08:54:01.011905616 +0000 UTC m=+0.187985158 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team) Nov 23 03:54:01 localhost podman[101655]: 2025-11-23 08:54:01.072846012 +0000 UTC m=+0.247243712 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_migration_target, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git) Nov 23 03:54:01 localhost podman[101654]: 2025-11-23 08:54:01.090483425 +0000 UTC m=+0.266562987 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044) Nov 23 03:54:01 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:54:01 localhost podman[101655]: 2025-11-23 08:54:01.434333571 +0000 UTC m=+0.608731271 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-nova-compute) Nov 23 03:54:01 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:54:04 localhost podman[101746]: 2025-11-23 08:54:04.902144842 +0000 UTC m=+0.087766443 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:54:04 localhost systemd[1]: tmp-crun.B2Dl2q.mount: Deactivated successfully. 
Nov 23 03:54:04 localhost podman[101747]: 2025-11-23 08:54:04.961106648 +0000 UTC m=+0.144248053 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:54:05 localhost podman[101747]: 2025-11-23 08:54:05.004021539 +0000 UTC m=+0.187162934 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1) Nov 23 03:54:05 localhost podman[101747]: unhealthy Nov 23 03:54:05 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:54:05 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:54:05 localhost podman[101748]: 2025-11-23 08:54:05.009450176 +0000 UTC m=+0.186062729 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, release=1761123044, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Nov 23 03:54:05 localhost podman[101748]: 2025-11-23 08:54:05.09240737 +0000 UTC m=+0.269019893 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, 
distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 23 03:54:05 localhost podman[101748]: unhealthy Nov 23 03:54:05 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:54:05 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:54:05 localhost podman[101746]: 2025-11-23 08:54:05.11740612 +0000 UTC m=+0.303027711 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, distribution-scope=public, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Nov 23 03:54:05 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:54:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:54:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
Nov 23 03:54:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:54:26 localhost podman[101815]: 2025-11-23 08:54:26.901177554 +0000 UTC m=+0.086217055 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, container_name=collectd, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Nov 23 03:54:26 localhost podman[101815]: 2025-11-23 08:54:26.909655875 +0000 UTC m=+0.094695366 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.openshift.expose-services=) Nov 23 03:54:26 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:54:26 localhost systemd[1]: tmp-crun.yXvnuM.mount: Deactivated successfully. 
Nov 23 03:54:26 localhost podman[101817]: 2025-11-23 08:54:26.969011152 +0000 UTC m=+0.145820450 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, container_name=nova_compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Nov 23 03:54:27 localhost podman[101817]: 2025-11-23 08:54:27.000529262 +0000 UTC m=+0.177338530 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, release=1761123044, distribution-scope=public, container_name=nova_compute, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true) Nov 23 03:54:27 localhost podman[101816]: 2025-11-23 08:54:27.014134961 +0000 UTC m=+0.194598462 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, tcib_managed=true, version=17.1.12, container_name=iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64) Nov 23 03:54:27 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:54:27 localhost podman[101816]: 2025-11-23 08:54:27.030432293 +0000 UTC m=+0.210895724 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:54:27 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:54:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:54:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:54:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:54:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:54:31 localhost podman[102007]: 2025-11-23 08:54:31.907652905 +0000 UTC m=+0.088388173 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1761123044, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git) Nov 23 03:54:31 localhost podman[102008]: 2025-11-23 08:54:31.978532717 +0000 UTC m=+0.155824029 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:54:32 localhost podman[102010]: 2025-11-23 08:54:32.022288314 +0000 UTC m=+0.196042166 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:54:32 localhost podman[102010]: 2025-11-23 08:54:32.034312974 +0000 UTC m=+0.208066826 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:54:32 localhost podman[102007]: 2025-11-23 08:54:32.04394855 +0000 UTC m=+0.224683818 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:54:32 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:54:32 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:54:32 localhost podman[102009]: 2025-11-23 08:54:32.114312998 +0000 UTC m=+0.293027383 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:54:32 localhost podman[102008]: 2025-11-23 08:54:32.142347731 +0000 UTC m=+0.319639053 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, version=17.1.12) Nov 23 03:54:32 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:54:32 localhost podman[102009]: 2025-11-23 08:54:32.49933384 +0000 UTC m=+0.678048285 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:54:32 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:54:35 localhost systemd[1]: tmp-crun.KqDQkq.mount: Deactivated successfully. 
Nov 23 03:54:35 localhost podman[102105]: 2025-11-23 08:54:35.911787377 +0000 UTC m=+0.095854902 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, config_id=tripleo_step1, container_name=metrics_qdr, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Nov 23 03:54:35 localhost systemd[1]: tmp-crun.ax22Dq.mount: Deactivated successfully. 
Nov 23 03:54:35 localhost podman[102106]: 2025-11-23 08:54:35.966039337 +0000 UTC m=+0.148919975 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=) Nov 23 03:54:36 localhost podman[102107]: 2025-11-23 08:54:36.009029161 +0000 UTC m=+0.189836615 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z) Nov 23 03:54:36 localhost podman[102107]: 2025-11-23 08:54:36.027305643 +0000 UTC m=+0.208113167 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:54:36 localhost podman[102107]: unhealthy Nov 23 03:54:36 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:54:36 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:54:36 localhost podman[102106]: 2025-11-23 08:54:36.06065599 +0000 UTC m=+0.243536628 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 23 03:54:36 localhost podman[102106]: unhealthy Nov 23 03:54:36 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:54:36 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:54:36 localhost podman[102105]: 2025-11-23 08:54:36.13600793 +0000 UTC m=+0.320075375 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Nov 23 03:54:36 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:54:36 localhost systemd[1]: tmp-crun.9EWwOk.mount: Deactivated successfully. Nov 23 03:54:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 03:54:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:54:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:54:57 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:54:57 localhost recover_tripleo_nova_virtqemud[102196]: 62093 Nov 23 03:54:57 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:54:57 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:54:57 localhost podman[102177]: 2025-11-23 08:54:57.903948508 +0000 UTC m=+0.075849487 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step5, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, container_name=nova_compute) Nov 23 03:54:57 localhost systemd[1]: tmp-crun.tOxVc0.mount: Deactivated successfully. Nov 23 03:54:57 localhost podman[102175]: 2025-11-23 08:54:57.953130392 +0000 UTC m=+0.133662626 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, batch=17.1_20251118.1, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-collectd-container) Nov 23 03:54:57 localhost podman[102177]: 2025-11-23 08:54:57.959281881 +0000 UTC m=+0.131182830 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute) Nov 23 03:54:57 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:54:57 localhost podman[102175]: 2025-11-23 08:54:57.990345427 +0000 UTC m=+0.170877651 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc.) Nov 23 03:54:58 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:54:58 localhost podman[102176]: 2025-11-23 08:54:58.010408496 +0000 UTC m=+0.185521183 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 03:54:58 localhost podman[102176]: 2025-11-23 08:54:58.024179249 +0000 UTC m=+0.199291926 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044) Nov 23 03:54:58 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:55:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:55:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:55:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:55:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:55:02 localhost podman[102244]: 2025-11-23 08:55:02.904871857 +0000 UTC m=+0.084495832 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.) 
Nov 23 03:55:02 localhost podman[102243]: 2025-11-23 08:55:02.968691771 +0000 UTC m=+0.151933408 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Nov 23 03:55:03 localhost podman[102243]: 2025-11-23 08:55:03.001507862 +0000 UTC m=+0.184749439 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 
17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:55:03 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:55:03 localhost podman[102245]: 2025-11-23 08:55:03.020160727 +0000 UTC m=+0.193536240 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044) Nov 23 03:55:03 localhost podman[102249]: 2025-11-23 08:55:03.071159736 +0000 UTC m=+0.242306850 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, release=1761123044, 
summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public) Nov 23 03:55:03 localhost podman[102249]: 2025-11-23 08:55:03.078901775 +0000 UTC m=+0.250048859 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:55:03 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:55:03 localhost podman[102244]: 2025-11-23 08:55:03.092237326 +0000 UTC m=+0.271861281 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible) Nov 23 03:55:03 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:55:03 localhost podman[102245]: 2025-11-23 08:55:03.418588502 +0000 UTC m=+0.591964065 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:55:03 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:55:06 localhost systemd[1]: tmp-crun.WHGYgK.mount: Deactivated successfully. 
Nov 23 03:55:06 localhost podman[102337]: 2025-11-23 08:55:06.918219503 +0000 UTC m=+0.096611015 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, release=1761123044) Nov 23 03:55:06 localhost podman[102338]: 2025-11-23 08:55:06.963468246 +0000 UTC m=+0.138404441 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:55:07 localhost podman[102338]: 2025-11-23 08:55:07.003659744 +0000 UTC m=+0.178595869 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:55:07 localhost podman[102338]: unhealthy Nov 
23 03:55:07 localhost podman[102339]: 2025-11-23 08:55:07.018841441 +0000 UTC m=+0.191476666 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, release=1761123044, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible) Nov 23 03:55:07 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:55:07 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
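Each "Started /usr/bin/podman healthcheck run <id>" entry above is a transient systemd unit wrapping a single probe: when the probe fails, podman prints "unhealthy" and exits non-zero, and systemd records the unit as "Main process exited, code=exited, status=1/FAILURE", exactly as shown for the ovn_controller check. A minimal sketch of that exit-code contract, assuming podman is on PATH and that a container named ovn_controller (the name taken from the records above) exists on this host:

import subprocess

# Run the same kind of probe the transient unit runs and map podman's exit
# code back to the states seen in this journal: exit 0 means healthy, while a
# non-zero exit (with "unhealthy" on stdout) is what systemd then reports as
# status=1/FAILURE.
def healthcheck(container: str) -> bool:
    result = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print(f"{container}: healthy")
        return True
    print(f"{container}: unhealthy ({result.stdout.strip() or result.stderr.strip()})")
    return False

if __name__ == "__main__":
    healthcheck("ovn_controller")  # container name taken from the log above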
Nov 23 03:55:07 localhost podman[102339]: 2025-11-23 08:55:07.033529173 +0000 UTC m=+0.206164428 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true) Nov 23 03:55:07 localhost podman[102339]: unhealthy Nov 23 03:55:07 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:55:07 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
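The ovn_controller entry configures its probe as '/openstack/healthcheck 6642', i.e. the shared healthcheck script with a port argument (6642 is the OVN southbound database port); nova_compute below uses the same script with 5672 (AMQP). The script itself ships inside the image and is not reproduced in this journal, so the following is only a hypothetical illustration of a port-argument probe that exits 0 when the port answers and 1 otherwise:

import socket
import sys

# Hypothetical stand-in for a port-argument probe such as
# '/openstack/healthcheck 6642': attempt a TCP connection to the given port on
# the local host and report the outcome through the exit code.
def probe(port: int, host: str = "127.0.0.1", timeout: float = 5.0) -> int:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 0
    except OSError:
        return 1

if __name__ == "__main__":
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 6642  # port from the log
    sys.exit(probe(port))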
Nov 23 03:55:07 localhost podman[102337]: 2025-11-23 08:55:07.18769436 +0000 UTC m=+0.366085832 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:55:07 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:55:07 localhost systemd[1]: tmp-crun.rIpAlO.mount: Deactivated successfully. Nov 23 03:55:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:55:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:55:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:55:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:55:28 localhost recover_tripleo_nova_virtqemud[102420]: 62093 Nov 23 03:55:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Nov 23 03:55:28 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:55:28 localhost systemd[1]: tmp-crun.Br7M0M.mount: Deactivated successfully. Nov 23 03:55:28 localhost podman[102407]: 2025-11-23 08:55:28.964197509 +0000 UTC m=+0.142843098 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, container_name=iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:55:29 localhost podman[102407]: 2025-11-23 08:55:29.003375345 +0000 UTC m=+0.182020924 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, build-date=2025-11-18T23:44:13Z, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, maintainer=OpenStack TripleO Team) Nov 23 03:55:29 localhost podman[102408]: 2025-11-23 08:55:29.012573388 +0000 UTC m=+0.186872043 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-type=git) Nov 23 03:55:29 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:55:29 localhost podman[102406]: 2025-11-23 08:55:28.933054551 +0000 UTC m=+0.117026835 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Nov 23 03:55:29 localhost podman[102406]: 2025-11-23 08:55:29.068347135 +0000 UTC m=+0.252319369 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:51:28Z, version=17.1.12, com.redhat.component=openstack-collectd-container, container_name=collectd, architecture=x86_64, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Nov 23 03:55:29 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 03:55:29 localhost podman[102408]: 2025-11-23 08:55:29.118576642 +0000 UTC m=+0.292875317 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5) Nov 23 03:55:29 localhost 
systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:55:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:55:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:55:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:55:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:55:33 localhost podman[102548]: 2025-11-23 08:55:33.910205429 +0000 UTC m=+0.088331641 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, container_name=nova_migration_target) Nov 23 03:55:33 localhost podman[102547]: 2025-11-23 08:55:33.962042484 +0000 UTC m=+0.141953201 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:55:34 localhost systemd[1]: tmp-crun.9QibEA.mount: Deactivated successfully. 
Nov 23 03:55:34 localhost podman[102546]: 2025-11-23 08:55:34.010597759 +0000 UTC m=+0.191136175 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:55:34 localhost podman[102547]: 2025-11-23 08:55:34.018368648 +0000 UTC m=+0.198279315 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=) Nov 23 03:55:34 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:55:34 localhost podman[102549]: 2025-11-23 08:55:34.061826597 +0000 UTC m=+0.236977867 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:55:34 localhost podman[102546]: 2025-11-23 08:55:34.065962144 +0000 UTC m=+0.246500480 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:55:34 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:55:34 localhost podman[102549]: 2025-11-23 08:55:34.093684347 +0000 UTC m=+0.268835597 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044) Nov 23 03:55:34 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
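The transient healthcheck units in this journal are named after full 64-character container IDs (for example the c5a1c5188735... unit, which the surrounding podman events identify as logrotate_crond), while the podman records carry the human-readable name= field. When only a unit name is at hand, the ID can be mapped back to a container name with podman inspect; a minimal sketch, assuming podman is installed and the ID is taken from one of the unit names above:

import subprocess

# Resolve a container ID (as used in the transient *.service unit names) to
# its container name via podman inspect's Go-template output.
def container_name(container_id: str) -> str:
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.Name}}", container_id],
        capture_output=True,
        text=True,
        check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    # ID of the logrotate_crond container from the records above.
    cid = "c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5"
    print(container_name(cid))  # expected: logrotate_crond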
Nov 23 03:55:34 localhost podman[102548]: 2025-11-23 08:55:34.324528824 +0000 UTC m=+0.502654986 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Nov 23 03:55:34 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:55:37 localhost systemd[1]: tmp-crun.tvyD3k.mount: Deactivated successfully. 
Nov 23 03:55:37 localhost podman[102641]: 2025-11-23 08:55:37.928039564 +0000 UTC m=+0.102641322 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4) Nov 23 03:55:37 localhost podman[102641]: 2025-11-23 08:55:37.972782191 +0000 UTC m=+0.147383959 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
release=1761123044, architecture=x86_64, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc.) Nov 23 03:55:37 localhost podman[102641]: unhealthy Nov 23 03:55:38 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:55:38 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
Nov 23 03:55:38 localhost podman[102639]: 2025-11-23 08:55:38.070641904 +0000 UTC m=+0.249388949 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044) Nov 23 03:55:38 localhost podman[102640]: 2025-11-23 08:55:38.110505681 +0000 UTC m=+0.287064609 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container) Nov 23 03:55:38 localhost podman[102640]: 2025-11-23 08:55:38.132356604 +0000 UTC m=+0.308915522 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, version=17.1.12) Nov 23 03:55:38 localhost podman[102640]: unhealthy Nov 23 03:55:38 localhost systemd[1]: 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:55:38 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:55:38 localhost podman[102639]: 2025-11-23 08:55:38.335409465 +0000 UTC m=+0.514156470 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:55:38 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:55:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:55:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:55:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:55:59 localhost systemd[1]: tmp-crun.ROmFEd.mount: Deactivated successfully. 
Nov 23 03:55:59 localhost podman[102703]: 2025-11-23 08:55:59.918050245 +0000 UTC m=+0.099774662 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Nov 23 03:55:59 localhost podman[102703]: 2025-11-23 08:55:59.957468808 +0000 UTC m=+0.139193255 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, 
maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=collectd) Nov 23 03:55:59 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:56:00 localhost podman[102704]: 2025-11-23 08:55:59.95915541 +0000 UTC m=+0.137392410 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:56:00 localhost podman[102704]: 2025-11-23 08:56:00.044471688 +0000 UTC m=+0.222708668 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:56:00 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:56:00 localhost podman[102705]: 2025-11-23 08:56:00.013503593 +0000 UTC m=+0.188329868 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:56:00 localhost podman[102705]: 2025-11-23 08:56:00.09752806 +0000 UTC m=+0.272354295 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, version=17.1.12, url=https://www.redhat.com, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:56:00 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:56:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:56:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:56:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:56:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:56:04 localhost systemd[1]: tmp-crun.nK7weQ.mount: Deactivated successfully. 
Nov 23 03:56:04 localhost podman[102767]: 2025-11-23 08:56:04.908671179 +0000 UTC m=+0.088135245 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:56:04 localhost podman[102766]: 2025-11-23 08:56:04.96363161 +0000 UTC m=+0.147810771 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, 
name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 23 03:56:04 localhost podman[102767]: 2025-11-23 08:56:04.967363786 +0000 UTC m=+0.146827812 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:56:04 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:56:05 localhost podman[102771]: 2025-11-23 08:56:05.014129006 +0000 UTC m=+0.186644338 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=logrotate_crond, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git) Nov 23 03:56:05 localhost podman[102771]: 
2025-11-23 08:56:05.02435327 +0000 UTC m=+0.196868652 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4) Nov 23 03:56:05 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:56:05 localhost podman[102768]: 2025-11-23 08:56:05.071648946 +0000 UTC m=+0.246443718 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=nova_migration_target, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git) Nov 23 03:56:05 localhost podman[102766]: 2025-11-23 08:56:05.075173064 +0000 UTC m=+0.259352215 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:56:05 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
Nov 23 03:56:05 localhost podman[102768]: 2025-11-23 08:56:05.388837001 +0000 UTC m=+0.563631753 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:56:05 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:56:08 localhost podman[102861]: 2025-11-23 08:56:08.911392458 +0000 UTC m=+0.093693186 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Nov 23 03:56:08 localhost podman[102862]: 2025-11-23 08:56:08.961038136 +0000 UTC m=+0.139713691 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12) Nov 23 03:56:09 localhost podman[102862]: 2025-11-23 08:56:09.002481002 +0000 UTC m=+0.181156537 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4) Nov 23 03:56:09 localhost podman[102862]: unhealthy Nov 23 03:56:09 localhost podman[102863]: 2025-11-23 08:56:09.013419469 +0000 UTC 
m=+0.190359201 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 23 03:56:09 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:56:09 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
Nov 23 03:56:09 localhost podman[102863]: 2025-11-23 08:56:09.06053834 +0000 UTC m=+0.237478052 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, io.buildah.version=1.41.4) Nov 23 03:56:09 localhost podman[102863]: unhealthy Nov 23 03:56:09 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:56:09 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
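The healthcheck entries above follow a fixed shape: systemd starts a transient "/usr/bin/podman healthcheck run <id>" unit, podman logs "container health_status <id> (image=..., name=<container>, health_status=<state>, ...)", and when the check exits non-zero (as for ovn_controller and ovn_metadata_agent here) the unit ends with "Failed with result 'exit-code'". A minimal Python sketch for summarizing a saved copy of this journal; the file name and helper are illustrative, not part of the deployment:

import re

# Match the podman health events in the label order seen above:
# "container health_status <id> (image=..., name=<container>, health_status=<state>, ..."
HEALTH_EVENT = re.compile(
    r"container health_status \S+ \(image=[^,]*, name=([^,]+), health_status=([^,)]+)"
)

def latest_health(path="journal.txt"):
    """Return the most recent health state reported for each container name."""
    latest = {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in HEALTH_EVENT.finditer(line):  # a line may carry several events
                latest[m.group(1)] = m.group(2)
    return latest

# For the entries shown so far this would report, e.g.:
# {'metrics_qdr': 'healthy', 'ovn_controller': 'unhealthy', 'ovn_metadata_agent': 'unhealthy', ...}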
Nov 23 03:56:09 localhost podman[102861]: 2025-11-23 08:56:09.127701628 +0000 UTC m=+0.310002316 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, config_id=tripleo_step1, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=) Nov 23 03:56:09 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:56:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:56:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:56:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:56:30 localhost systemd[1]: tmp-crun.APvGs3.mount: Deactivated successfully. 
Nov 23 03:56:30 localhost podman[102928]: 2025-11-23 08:56:30.907803477 +0000 UTC m=+0.096481552 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, container_name=collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:56:30 localhost podman[102928]: 2025-11-23 08:56:30.949497211 +0000 UTC m=+0.138175256 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, 
com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-collectd) Nov 23 03:56:30 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:56:30 localhost podman[102930]: 2025-11-23 08:56:30.921162767 +0000 UTC m=+0.101579158 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, container_name=nova_compute, version=17.1.12, config_id=tripleo_step5, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 03:56:31 localhost podman[102929]: 2025-11-23 08:56:30.950229093 +0000 UTC m=+0.132790830 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, release=1761123044, version=17.1.12, vcs-type=git) Nov 23 03:56:31 localhost podman[102930]: 2025-11-23 08:56:31.00440614 +0000 UTC m=+0.184822521 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container) Nov 23 03:56:31 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
Nov 23 03:56:31 localhost podman[102929]: 2025-11-23 08:56:31.034380523 +0000 UTC m=+0.216942280 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, architecture=x86_64) Nov 23 03:56:31 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:56:33 localhost systemd[1]: tmp-crun.pvvB8H.mount: Deactivated successfully. 
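The config_data=... label attached to each of these events is printed as a Python dict literal (environment, healthcheck, volumes, start_order, and so on), so it can be parsed back directly. A small sketch, assuming one line of this journal is available as a string; extract_config_data is a hypothetical helper, and it assumes the braces after "config_data=" are balanced and do not occur inside quoted values, which holds for the entries shown here:

import ast

def extract_config_data(journal_line):
    """Return the first config_data dict found on a journal line, or None."""
    start = journal_line.find("config_data=")
    if start == -1:
        return None
    i = journal_line.find("{", start)
    if i == -1:
        return None
    depth = 0
    for j in range(i, len(journal_line)):
        if journal_line[j] == "{":
            depth += 1
        elif journal_line[j] == "}":
            depth -= 1
            if depth == 0:
                # The label is a valid Python literal (nested dicts/lists,
                # 'privileged': True), so literal_eval reconstructs it safely.
                return ast.literal_eval(journal_line[i:j + 1])
    return None  # truncated line, braces never balanced

# For the nova_compute event above, extract_config_data(line)['healthcheck']['test']
# is '/openstack/healthcheck 5672', and ['volumes'] lists the bind mounts,
# including '/var/lib/nova:/var/lib/nova:shared'.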
Nov 23 03:56:33 localhost podman[103091]: 2025-11-23 08:56:33.21673245 +0000 UTC m=+0.112337019 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, version=7, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, architecture=x86_64) Nov 23 03:56:33 localhost podman[103091]: 2025-11-23 08:56:33.352573332 +0000 UTC m=+0.248177891 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, architecture=x86_64, GIT_BRANCH=main, release=553, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, version=7) Nov 23 03:56:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:56:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:56:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:56:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:56:35 localhost podman[103234]: 2025-11-23 08:56:35.926955478 +0000 UTC m=+0.100514886 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4) Nov 23 03:56:35 localhost podman[103237]: 2025-11-23 08:56:35.973841791 +0000 UTC m=+0.140585899 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-cron, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:56:35 localhost podman[103234]: 2025-11-23 08:56:35.981343453 +0000 UTC m=+0.154902851 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, vcs-type=git, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Nov 23 03:56:35 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:56:36 localhost podman[103237]: 2025-11-23 08:56:36.01117361 +0000 UTC m=+0.177917678 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible) Nov 23 03:56:36 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:56:36 localhost podman[103235]: 2025-11-23 08:56:36.084480428 +0000 UTC m=+0.256538079 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, release=1761123044) Nov 23 03:56:36 localhost podman[103235]: 2025-11-23 08:56:36.116693909 +0000 UTC m=+0.288751580 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:56:36 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 03:56:36 localhost podman[103236]: 2025-11-23 08:56:36.132068993 +0000 UTC m=+0.301569336 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:56:36 localhost podman[103236]: 2025-11-23 08:56:36.498589186 +0000 UTC m=+0.668089519 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, 
io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, container_name=nova_migration_target, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:56:36 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:56:39 localhost podman[103330]: 2025-11-23 08:56:39.915305234 +0000 UTC m=+0.096637315 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044) Nov 23 03:56:39 localhost systemd[1]: tmp-crun.CiU5K0.mount: Deactivated successfully. 
Nov 23 03:56:39 localhost podman[103331]: 2025-11-23 08:56:39.968206403 +0000 UTC m=+0.145142290 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller) Nov 23 03:56:40 localhost podman[103332]: 2025-11-23 08:56:40.028416546 +0000 UTC m=+0.199473952 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:14:25Z, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:56:40 localhost podman[103331]: 2025-11-23 08:56:40.040719505 +0000 UTC m=+0.217655402 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 
17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Nov 23 03:56:40 localhost podman[103331]: unhealthy Nov 23 03:56:40 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:56:40 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:56:40 localhost podman[103332]: 2025-11-23 08:56:40.069748959 +0000 UTC m=+0.240806345 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, 
url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4) Nov 23 03:56:40 localhost podman[103332]: unhealthy Nov 23 03:56:40 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:56:40 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:56:40 localhost podman[103330]: 2025-11-23 08:56:40.142973133 +0000 UTC m=+0.324305204 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., container_name=metrics_qdr, config_id=tripleo_step1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true) Nov 23 03:56:40 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:56:54 localhost sshd[103399]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:56:55 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Nov 23 03:56:55 localhost recover_tripleo_nova_virtqemud[103402]: 62093 Nov 23 03:56:55 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 03:56:55 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:57:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:57:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:57:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:57:01 localhost podman[103405]: 2025-11-23 08:57:01.904215161 +0000 UTC m=+0.081217261 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, container_name=iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3) Nov 23 03:57:01 localhost podman[103405]: 2025-11-23 08:57:01.941699706 +0000 UTC m=+0.118701766 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, 
build-date=2025-11-18T23:44:13Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, release=1761123044, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, distribution-scope=public) Nov 23 03:57:01 localhost systemd[1]: tmp-crun.jBRAyI.mount: Deactivated successfully. Nov 23 03:57:01 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:57:01 localhost podman[103404]: 2025-11-23 08:57:01.960220135 +0000 UTC m=+0.138433282 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:57:01 localhost podman[103404]: 2025-11-23 08:57:01.998446662 +0000 UTC m=+0.176659819 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:57:02 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:57:02 localhost podman[103406]: 2025-11-23 08:57:02.017059046 +0000 UTC m=+0.191420555 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, name=rhosp17/openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:57:02 localhost podman[103406]: 2025-11-23 08:57:02.049349359 +0000 UTC m=+0.223710868 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, container_name=nova_compute, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:57:02 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. Nov 23 03:57:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:57:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:57:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 03:57:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:57:06 localhost podman[103471]: 2025-11-23 08:57:06.903883543 +0000 UTC m=+0.083232703 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Nov 23 03:57:06 localhost podman[103470]: 2025-11-23 08:57:06.95640471 +0000 UTC m=+0.140851808 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container) Nov 23 03:57:06 localhost podman[103471]: 2025-11-23 08:57:06.98336584 +0000 UTC m=+0.162715050 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Nov 23 03:57:07 localhost podman[103470]: 2025-11-23 08:57:07.018305786 +0000 UTC m=+0.202752884 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, version=17.1.12, vcs-type=git) Nov 23 03:57:07 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. Nov 23 03:57:07 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:57:07 localhost podman[103473]: 2025-11-23 08:57:07.022915827 +0000 UTC m=+0.196069077 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:57:07 localhost podman[103473]: 2025-11-23 08:57:07.106574273 +0000 UTC m=+0.279727533 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, 
com.redhat.component=openstack-cron-container, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 23 03:57:07 localhost podman[103472]: 2025-11-23 08:57:07.122803983 +0000 UTC m=+0.299283315 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1) Nov 23 03:57:07 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:57:07 localhost podman[103472]: 2025-11-23 08:57:07.526316005 +0000 UTC m=+0.702795347 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, 
container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044) Nov 23 03:57:07 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:57:10 localhost podman[103565]: 2025-11-23 08:57:10.905167297 +0000 UTC m=+0.091953372 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:57:10 localhost podman[103566]: 
2025-11-23 08:57:10.956149928 +0000 UTC m=+0.139448265 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:57:10 localhost podman[103566]: 2025-11-23 08:57:10.99877922 +0000 UTC m=+0.182077597 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Nov 23 03:57:11 localhost podman[103566]: unhealthy Nov 23 03:57:11 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:57:11 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:57:11 localhost podman[103567]: 2025-11-23 08:57:11.01180102 +0000 UTC m=+0.191513746 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, vcs-type=git, container_name=ovn_metadata_agent, vendor=Red Hat, Inc.) Nov 23 03:57:11 localhost podman[103567]: 2025-11-23 08:57:11.050320666 +0000 UTC m=+0.230033362 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-type=git) Nov 23 03:57:11 localhost podman[103567]: unhealthy Nov 23 03:57:11 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:57:11 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:57:11 localhost podman[103565]: 2025-11-23 08:57:11.093389352 +0000 UTC m=+0.280175407 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:57:11 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:57:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:57:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
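The records above show the full life cycle of one podman healthcheck: systemd starts a transient "/usr/bin/podman healthcheck run" unit, podman logs a "container health_status" event whose label dump carries name=... and health_status=..., an unhealthy result makes podman print "unhealthy" and the transient unit fail with status=1/FAILURE, while a healthy result ends with the unit deactivating cleanly. A minimal Python sketch for pulling those health results out of a plain-text journal export (the file name journal.txt is hypothetical, one record per line is assumed, and the regex is tailored to the label order image=, name=, health_status= seen in these records):

    import re

    # Container name and health result from a "container health_status" event.
    EVENT = re.compile(
        r"container health_status [0-9a-f]{64} "
        r"\(image=[^,]+, name=(?P<name>[\w-]+), health_status=(?P<status>\w+)"
    )

    def health_events(path="journal.txt"):      # hypothetical journal export
        """Yield (container_name, health_status) for every healthcheck event."""
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                m = EVENT.search(line)
                if m:
                    yield m["name"], m["status"]

    if __name__ == "__main__":
        for name, status in health_events():
            if status != "healthy":
                print(f"{name}: {status}")      # e.g. ovn_controller: unhealthy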
Nov 23 03:57:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:57:32 localhost systemd[1]: tmp-crun.ns28xa.mount: Deactivated successfully. Nov 23 03:57:32 localhost podman[103632]: 2025-11-23 08:57:32.902754684 +0000 UTC m=+0.089799335 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible) Nov 23 03:57:32 localhost podman[103632]: 2025-11-23 08:57:32.942441526 +0000 UTC m=+0.129486167 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, tcib_managed=true, container_name=collectd, batch=17.1_20251118.1, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:57:32 localhost podman[103634]: 2025-11-23 08:57:32.953010572 +0000 UTC m=+0.131411467 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, release=1761123044, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Nov 23 03:57:32 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
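The config_data={...} label attached to each of these events appears to be the container definition rendered as a Python-style dict literal (single quotes, True/False), so it can be recovered programmatically by cutting the balanced-brace span out of the record and feeding it to ast.literal_eval. An illustrative sketch under that assumption; the sample record below is trimmed to the same shape as the collectd record above, and brace counting is safe here only because none of the string values contain braces:

    import ast

    # Trimmed sample in the same shape as the collectd health_status record.
    record_text = (
        "container health_status (image=registry.redhat.io/rhosp-rhel9/"
        "openstack-collectd:17.1, name=collectd, config_data={'environment': "
        "{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': "
        "{'test': '/openstack/healthcheck'}, 'memory': '512m', 'net': 'host', "
        "'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': "
        "['/etc/hosts:/etc/hosts:ro', '/run:/run:rw']}, release=1761123044)"
    )

    def extract_config_data(record: str) -> dict:
        """Cut the balanced {...} after 'config_data=' and parse it as a literal."""
        start = record.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(record[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(record[start:i + 1])
        raise ValueError("unterminated config_data literal")

    cfg = extract_config_data(record_text)
    print(cfg["healthcheck"]["test"])           # /openstack/healthcheck
    print(cfg["memory"], cfg["net"])            # 512m host
    print([v for v in cfg["volumes"] if v.endswith(":ro")])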
Nov 23 03:57:33 localhost podman[103634]: 2025-11-23 08:57:33.005304061 +0000 UTC m=+0.183705006 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:57:33 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Deactivated successfully. 
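Each healthcheck also produces a pair of events from the same podman process (same bracketed PID): the "container health_status" entry and, shortly after, a "container exec_died" entry. For nova_compute above the pair is 08:57:32.953 and 08:57:33.005, so the probe took roughly 50 ms. A sketch that pairs the most recent health_status/exec_died seen per container ID and reports the elapsed wall-clock time (journal.txt is again a hypothetical export; the nanosecond timestamps podman logs are truncated to microseconds before parsing):

    import re
    from collections import defaultdict
    from datetime import datetime

    EVENT = re.compile(
        r"podman\[(?P<pid>\d+)\]: "
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.(?P<frac>\d+) .*? container "
        r"(?P<kind>health_status|exec_died) (?P<cid>[0-9a-f]{64})"
    )

    def stamp(ts: str, frac: str) -> datetime:
        # strptime accepts at most microseconds, so drop the extra digits.
        return datetime.strptime(f"{ts}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")

    events = defaultdict(dict)                  # cid -> {kind: datetime}
    with open("journal.txt", encoding="utf-8") as fh:
        for line in fh:
            m = EVENT.search(line)
            if m:
                events[m["cid"]][m["kind"]] = stamp(m["ts"], m["frac"])

    for cid, seen in events.items():
        if {"health_status", "exec_died"} <= seen.keys():
            delta = seen["exec_died"] - seen["health_status"]
            print(cid[:12], f"{delta.total_seconds() * 1000:.0f} ms")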
Nov 23 03:57:33 localhost podman[103633]: 2025-11-23 08:57:33.008650094 +0000 UTC m=+0.188213095 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step3) Nov 23 03:57:33 localhost podman[103633]: 2025-11-23 08:57:33.091433453 +0000 UTC m=+0.270996414 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:57:33 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:57:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:57:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:57:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:57:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:57:37 localhost systemd[1]: tmp-crun.encZZ8.mount: Deactivated successfully. 
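The systemd messages in between ("Started /usr/bin/podman healthcheck run <id>", "<id>.service: Deactivated successfully", "<id>.service: Failed with result 'exit-code'") identify the container only by its 64-character ID, while the friendly name (iscsid, nova_compute, logrotate_crond, ...) appears only inside the podman label dumps. A small sketch that builds an ID-to-name map from the podman events and uses it to annotate the bare <id>.service messages (journal.txt hypothetical; the first regex assumes the image=, name= label order shown in these records):

    import re

    PODMAN_EVENT = re.compile(
        r"container \w+ (?P<cid>[0-9a-f]{64}) \(image=[^,]+, name=(?P<name>[\w-]+)"
    )
    SYSTEMD_UNIT = re.compile(
        r"systemd\[1\]: (?P<cid>[0-9a-f]{64})\.service: (?P<msg>.+)"
    )

    id_to_name = {}
    annotated = []
    with open("journal.txt", encoding="utf-8") as fh:
        for line in fh:
            if (m := PODMAN_EVENT.search(line)):
                id_to_name[m["cid"]] = m["name"]
            elif (m := SYSTEMD_UNIT.search(line)):
                name = id_to_name.get(m["cid"], m["cid"][:12])
                annotated.append(f"{name}: {m['msg'].strip()}")

    print("\n".join(annotated[-5:]))    # e.g. "iscsid: Deactivated successfully."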
Nov 23 03:57:37 localhost podman[103771]: 2025-11-23 08:57:37.906406369 +0000 UTC m=+0.088117195 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-19T00:11:48Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12) Nov 23 03:57:37 localhost podman[103772]: 2025-11-23 08:57:37.962082563 +0000 UTC m=+0.141573160 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, tcib_managed=true, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git) Nov 23 03:57:37 localhost podman[103772]: 2025-11-23 08:57:37.996582305 +0000 UTC m=+0.176072872 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true) Nov 23 03:57:38 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:57:38 localhost podman[103774]: 2025-11-23 08:57:38.014118885 +0000 UTC m=+0.188270838 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Nov 23 03:57:38 localhost 
podman[103774]: 2025-11-23 08:57:38.024376041 +0000 UTC m=+0.198527934 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_id=tripleo_step4, name=rhosp17/openstack-cron, distribution-scope=public, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1761123044, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 23 03:57:38 localhost podman[103771]: 2025-11-23 08:57:38.037099892 +0000 UTC m=+0.218810708 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, build-date=2025-11-19T00:11:48Z, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:57:38 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:57:38 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Deactivated successfully. 
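Every label dump also carries a config_id=tripleo_stepN field recording which TripleO deployment step defined the container: in the records above metrics_qdr belongs to tripleo_step1, collectd and iscsid to tripleo_step3, the ceilometer agents, logrotate_crond, nova_migration_target, ovn_controller and ovn_metadata_agent to tripleo_step4, and nova_compute to tripleo_step5. A sketch that groups container names by that label (same hypothetical journal.txt, one record per line):

    import re
    from collections import defaultdict

    NAME = re.compile(r"container_name=([\w-]+)")
    STEP = re.compile(r"config_id=(tripleo_step\d+)")

    by_step = defaultdict(set)
    with open("journal.txt", encoding="utf-8") as fh:
        for line in fh:
            name, step = NAME.search(line), STEP.search(line)
            if name and step:
                by_step[step.group(1)].add(name.group(1))

    for step in sorted(by_step):
        print(step, "->", ", ".join(sorted(by_step[step])))
    # tripleo_step1 -> metrics_qdr
    # tripleo_step3 -> collectd, iscsid
    # tripleo_step4 -> ceilometer_agent_compute, ..., ovn_controller
    # tripleo_step5 -> nova_compute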
Nov 23 03:57:38 localhost podman[103773]: 2025-11-23 08:57:37.998728971 +0000 UTC m=+0.177969130 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:57:38 localhost podman[103773]: 2025-11-23 08:57:38.382433384 +0000 UTC m=+0.561673583 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1) Nov 23 03:57:38 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
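In this window ovn_controller and ovn_metadata_agent report health_status=unhealthy at 03:57:10-11 and ovn_controller is unhealthy again at 03:57:41, while every other container stays healthy, so a natural post-processing step is to alert only on containers whose recent probes are consistently unhealthy rather than on a single failed check. A sketch of that filter, reusing the same event pattern as the earlier extraction sketch (journal.txt hypothetical; the window size of three is an arbitrary illustration):

    import re
    from collections import defaultdict, deque

    EVENT = re.compile(
        r"container health_status [0-9a-f]{64} "
        r"\(image=[^,]+, name=(?P<name>[\w-]+), health_status=(?P<status>\w+)"
    )

    history = defaultdict(lambda: deque(maxlen=3))   # last 3 results per container
    with open("journal.txt", encoding="utf-8") as fh:
        for line in fh:
            if (m := EVENT.search(line)):
                history[m["name"]].append(m["status"])

    for name, results in sorted(history.items()):
        if len(results) == results.maxlen and set(results) == {"unhealthy"}:
            print(f"{name}: last {results.maxlen} healthchecks unhealthy")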
Nov 23 03:57:41 localhost podman[103865]: 2025-11-23 08:57:41.906942871 +0000 UTC m=+0.088449365 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, release=1761123044, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:57:41 localhost podman[103865]: 2025-11-23 08:57:41.95141895 +0000 UTC m=+0.132925474 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, 
managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true) Nov 23 03:57:41 localhost podman[103865]: unhealthy Nov 23 03:57:41 localhost podman[103864]: 2025-11-23 08:57:41.965473903 +0000 UTC m=+0.148367199 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:57:41 localhost systemd[1]: 
e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:57:41 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:57:42 localhost podman[103866]: 2025-11-23 08:57:42.02063884 +0000 UTC m=+0.197312425 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 23 03:57:42 localhost podman[103866]: 2025-11-23 08:57:42.041248486 +0000 UTC m=+0.217922051 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Nov 23 03:57:42 localhost podman[103866]: unhealthy Nov 23 03:57:42 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:57:42 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
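The entries above show the ovn_controller and ovn_metadata_agent checks coming back unhealthy: podman prints "unhealthy", exits non-zero, and systemd marks the transient <container-id>.service healthcheck unit failed (status=1/FAILURE). To spot which containers flap between healthy and unhealthy across a capture like this one, a minimal Python sketch follows; it only relies on the name= and health_status= fields visible in these lines, and the journal.txt path is an assumption (e.g. a journalctl export saved to a file):

# tally_health.py -- count healthy/unhealthy healthcheck results per container
# in a plain-text journal export (path "journal.txt" is illustrative).
import re
from collections import Counter, defaultdict

# Matches the podman events seen above, e.g.
#   container health_status <id> (image=..., name=ovn_controller, health_status=unhealthy, ...)
EVENT = re.compile(
    r"container health_status [0-9a-f]+ \(image=.*?"
    r"name=([A-Za-z0-9_]+),.*?health_status=(healthy|unhealthy)"
)

def tally(path="journal.txt"):
    counts = defaultdict(Counter)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for name, status in EVENT.findall(line):
                counts[name][status] += 1
    return counts

if __name__ == "__main__":
    for name, c in sorted(tally().items()):
        print(f"{name:24s} healthy={c['healthy']:3d} unhealthy={c['unhealthy']:3d}")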
Nov 23 03:57:42 localhost podman[103864]: 2025-11-23 08:57:42.175976783 +0000 UTC m=+0.358870079 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12) Nov 23 03:57:42 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:57:42 localhost systemd[1]: tmp-crun.v1fzMv.mount: Deactivated successfully. Nov 23 03:58:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:58:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:58:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:58:03 localhost systemd[1]: tmp-crun.2YsglL.mount: Deactivated successfully. 
Nov 23 03:58:03 localhost podman[103930]: 2025-11-23 08:58:03.914474454 +0000 UTC m=+0.099426063 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-iscsid-container) Nov 23 03:58:03 localhost systemd[1]: tmp-crun.ZAlXF4.mount: Deactivated successfully. 
Nov 23 03:58:03 localhost podman[103930]: 2025-11-23 08:58:03.946456978 +0000 UTC m=+0.131408567 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:58:03 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
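Every health_status and exec_died event above carries the full TripleO container definition in its config_data label, written as a Python-style literal (single quotes, privileged: True, and so on). A small sketch, assuming you have pasted one of those {...} blobs into a string, that recovers the healthcheck command and the bind mounts with ast.literal_eval; the blob below is an abridged copy of the iscsid entry above, not the complete label:

# inspect_config_data.py -- pull the healthcheck test and volumes out of a
# config_data label as logged above (blob abridged for brevity; in practice
# paste the full literal copied from the journal entry).
import ast

config_data_blob = """{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
 'healthcheck': {'test': '/openstack/healthcheck'},
 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1',
 'net': 'host', 'privileged': True,
 'volumes': ['/etc/hosts:/etc/hosts:ro', '/run:/run', '/dev:/dev']}"""

cfg = ast.literal_eval(config_data_blob)   # literals only, nothing is executed
print("healthcheck test:", cfg["healthcheck"]["test"])
print("network mode:    ", cfg["net"], "| privileged:", cfg["privileged"])
for mount in cfg["volumes"]:
    src, dest, *opts = mount.split(":")
    print(f"  {src} -> {dest} {' '.join(opts)}")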
Nov 23 03:58:04 localhost podman[103929]: 2025-11-23 08:58:03.950962517 +0000 UTC m=+0.139713972 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:58:04 localhost podman[103931]: 2025-11-23 08:58:04.028827935 +0000 UTC m=+0.211076551 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step5, distribution-scope=public, version=17.1.12, container_name=nova_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:58:04 localhost podman[103929]: 2025-11-23 08:58:04.038405219 +0000 UTC m=+0.227156704 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, tcib_managed=true, container_name=collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:58:04 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:58:04 localhost podman[103931]: 2025-11-23 08:58:04.05857308 +0000 UTC m=+0.240821696 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team) Nov 23 03:58:04 localhost podman[103931]: unhealthy Nov 23 03:58:04 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:58:04 localhost systemd[1]: 
bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 03:58:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:58:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:58:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:58:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:58:08 localhost podman[103990]: 2025-11-23 08:58:08.909313858 +0000 UTC m=+0.090142876 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:58:08 localhost podman[103990]: 2025-11-23 08:58:08.940472487 +0000 UTC m=+0.121301495 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, 
name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 03:58:08 localhost systemd[1]: tmp-crun.0s2KjW.mount: Deactivated successfully. Nov 23 03:58:08 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
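The transient units systemd reports on (for example e8a40d17...736.service earlier) are named after the full container ID, while the podman events carry the human-readable container_name label. A sketch, again against a plain-text journal export with an assumed journal.txt path, that builds the ID-to-name map so a "Failed with result 'exit-code'" line can be read back to the container it belongs to:

# map_units.py -- relate <container-id>.service units to container names,
# using only the fields visible in the journal lines above.
import re

HEALTH = re.compile(r"container \w+ ([0-9a-f]{64}) \(image=.*?container_name=([A-Za-z0-9_]+)")
FAILED = re.compile(r"([0-9a-f]{64})\.service: Failed with result")

def correlate(path="journal.txt"):          # assumed export location
    names, failures = {}, []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for cid, cname in HEALTH.findall(line):
                names[cid] = cname          # same ID may repeat; mapping is stable
            failures += FAILED.findall(line)
    for cid in failures:
        print(f"{cid[:12]}  ->  {names.get(cid, 'unknown container')}")

if __name__ == "__main__":
    correlate()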
Nov 23 03:58:08 localhost podman[103991]: 2025-11-23 08:58:08.958640906 +0000 UTC m=+0.136839683 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Nov 23 03:58:09 localhost podman[103992]: 2025-11-23 08:58:09.014230907 +0000 UTC m=+0.185790810 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.component=openstack-cron-container) Nov 23 03:58:09 localhost podman[103989]: 2025-11-23 08:58:09.098649836 +0000 UTC m=+0.283727226 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team) Nov 23 03:58:09 localhost podman[103992]: 2025-11-23 08:58:09.108891992 +0000 UTC m=+0.280451885 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, release=1761123044, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20251118.1, io.buildah.version=1.41.4) Nov 23 03:58:09 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:58:09 localhost podman[103989]: 2025-11-23 08:58:09.131301672 +0000 UTC m=+0.316379102 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Nov 23 03:58:09 localhost podman[103989]: unhealthy Nov 23 03:58:09 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:58:09 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Failed with result 'exit-code'. 
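As with ovn_controller earlier, the ceilometer_agent_compute check above comes back unhealthy and its <container-id>.service unit exits with status=1/FAILURE: "/usr/bin/podman healthcheck run" returns 0 when the container's configured test passes and non-zero otherwise, and systemd records that exit code. A sketch that re-runs a check by hand the same way the transient unit does (container name taken from the log; running it requires appropriate privileges on the compute node):

# recheck.py -- re-run a container healthcheck manually, mirroring the
# "/usr/bin/podman healthcheck run <container>" invocation seen in the journal.
import subprocess
import sys

def recheck(container="ceilometer_agent_compute"):
    proc = subprocess.run(["podman", "healthcheck", "run", container],
                          capture_output=True, text=True)
    status = "healthy" if proc.returncode == 0 else "unhealthy"
    print(f"{container}: {status} (exit code {proc.returncode})")
    if proc.stdout.strip():
        print(proc.stdout.strip())
    return proc.returncode

if __name__ == "__main__":
    name = sys.argv[1] if len(sys.argv) > 1 else "ceilometer_agent_compute"
    sys.exit(0 if recheck(name) == 0 else 1)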
Nov 23 03:58:09 localhost podman[103991]: 2025-11-23 08:58:09.54149803 +0000 UTC m=+0.719696827 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Nov 23 03:58:09 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:58:12 localhost podman[104084]: 2025-11-23 08:58:12.906155355 +0000 UTC m=+0.090451095 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc.) 
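Each podman record above carries two clocks: the syslog prefix in the host's local time (Nov 23 03:58:12) and podman's own stamp in UTC (2025-11-23 08:58:12.906155355 +0000 UTC). A small sketch, assuming the year 2025 from podman's stamp and using a shortened stand-in for the metrics_qdr record, that recovers the host's UTC offset from one such record:

    #!/usr/bin/env python3
    # Illustrative sketch: derive the host's UTC offset from a single record
    # that contains both the syslog-style local timestamp and podman's UTC stamp.
    import re
    from datetime import datetime

    # Shortened copy of the metrics_qdr record above (label payload omitted).
    record = ("Nov 23 03:58:12 localhost podman[104084]: "
              "2025-11-23 08:58:12.906155355 +0000 UTC m=+0.090451095 "
              "container health_status 2f9868e7... (image=..., name=metrics_qdr)")

    local = datetime.strptime(
        "2025 " + re.match(r"\w{3} +\d+ \d{2}:\d{2}:\d{2}", record).group(),
        "%Y %b %d %H:%M:%S")
    utc = datetime.strptime(
        re.search(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", record).group(),
        "%Y-%m-%d %H:%M:%S")
    print("offset:", local - utc)  # prints "-1 day, 19:00:00", i.e. UTC-05:00

Keeping that five-hour offset in mind helps when correlating these entries with UTC-stamped logs collected elsewhere.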
Nov 23 03:58:12 localhost podman[104085]: 2025-11-23 08:58:12.961182809 +0000 UTC m=+0.143538099 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 23 03:58:13 localhost podman[104086]: 2025-11-23 08:58:13.009094354 +0000 UTC m=+0.189296748 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4) Nov 23 03:58:13 localhost podman[104085]: 2025-11-23 08:58:13.028680247 +0000 UTC m=+0.211035567 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, 
io.openshift.expose-services=, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team) Nov 23 03:58:13 localhost podman[104085]: unhealthy Nov 23 03:58:13 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:58:13 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:58:13 localhost podman[104086]: 2025-11-23 08:58:13.048533768 +0000 UTC m=+0.228736192 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 23 03:58:13 localhost podman[104086]: unhealthy Nov 23 03:58:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:58:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:58:13 localhost podman[104084]: 2025-11-23 08:58:13.101284923 +0000 UTC m=+0.285580683 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:58:13 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:58:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:58:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. 
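The config_data={...} payload embedded in every record above is printed in Python literal syntax (single-quoted strings, nested dicts and lists), so it can be decoded mechanically. A sketch, using a shortened hypothetical sample based on the iscsid record rather than the full payload, that extracts the balanced-brace span and parses it with ast.literal_eval:

    #!/usr/bin/env python3
    # Illustrative sketch: pull config_data={...} out of a record and decode it.
    # The sample below is a shortened, hypothetical stand-in for the much longer
    # iscsid record above. Assumes, as in these records, that quoted values
    # contain no brace characters.
    import ast

    def extract_config_data(record):
        start = record.find("config_data={")
        if start == -1:
            return None
        i = record.index("{", start)
        depth = 0
        for j in range(i, len(record)):
            if record[j] == "{":
                depth += 1
            elif record[j] == "}":
                depth -= 1
                if depth == 0:
                    # the payload is a Python dict literal, so literal_eval can read it
                    return ast.literal_eval(record[i:j + 1])
        return None  # truncated record: braces never balanced

    sample = ("container health_status a36875... (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, "
              "name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
              "'healthcheck': {'test': '/openstack/healthcheck'}, 'net': 'host', "
              "'privileged': True, 'start_order': 2}, config_id=tripleo_step3)")

    cfg = extract_config_data(sample)
    print(cfg["healthcheck"]["test"])         # -> /openstack/healthcheck
    print(cfg["privileged"], cfg["start_order"])  # -> True 2

Decoding the payload this way is one option for comparing the healthcheck command or volume list that podman actually recorded against what you expect from the generated configuration.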
Nov 23 03:58:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:58:34 localhost systemd[1]: tmp-crun.jEAcED.mount: Deactivated successfully. Nov 23 03:58:34 localhost podman[104152]: 2025-11-23 08:58:34.914089979 +0000 UTC m=+0.100606709 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 03:58:34 localhost podman[104154]: 2025-11-23 08:58:34.952427548 +0000 UTC m=+0.130636162 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1) Nov 23 03:58:34 localhost podman[104152]: 2025-11-23 08:58:34.977349656 +0000 UTC m=+0.163866466 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step3, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-collectd-container) Nov 23 03:58:34 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
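The "Started /usr/bin/podman healthcheck run <id>" lines give the check cadence directly: for the metrics_qdr container (2f9868e7…) the starts in this excerpt are about 30 seconds apart (03:58:12, then 03:58:43), consistent with podman's usual default interval when the healthcheck config only sets a test command. A sketch, assuming the year 2025 and a placeholder journal.txt, that computes those gaps per container:

    #!/usr/bin/env python3
    # Illustrative sketch: measure seconds between successive healthcheck starts
    # per container, based on the "Started /usr/bin/podman healthcheck run <id>"
    # lines above. The year is assumed; "journal.txt" is a placeholder path.
    import re
    from collections import defaultdict
    from datetime import datetime

    START = re.compile(
        r"(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}) \S+ systemd\[1\]: "
        r"Started /usr/bin/podman healthcheck run (?P<cid>[0-9a-f]{64})")

    def check_intervals(text, year=2025):
        starts = defaultdict(list)
        for m in START.finditer(text):
            ts = datetime.strptime(f"{year} {m.group('ts')}", "%Y %b %d %H:%M:%S")
            starts[m.group("cid")[:12]].append(ts)
        return {cid: [int((b - a).total_seconds()) for a, b in zip(stamps, stamps[1:])]
                for cid, stamps in starts.items()}

    if __name__ == "__main__":
        print(check_intervals(open("journal.txt", errors="replace").read()))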
Nov 23 03:58:34 localhost podman[104153]: 2025-11-23 08:58:34.995954669 +0000 UTC m=+0.179662642 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, container_name=iscsid, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1) Nov 23 03:58:35 localhost podman[104154]: 2025-11-23 08:58:35.001572772 +0000 UTC m=+0.179781386 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_compute, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 23 03:58:35 localhost podman[104154]: unhealthy Nov 23 03:58:35 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:58:35 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. 
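The transient units that fail above are named after the full 64-character container ID (for example bcdf0c06….service), and the podman records carry both that ID and the container name, so the exit-code failures can be joined back to service names. A sketch under the same assumptions as the earlier ones (field layout taken from these records, journal.txt as a placeholder):

    #!/usr/bin/env python3
    # Illustrative sketch: map "<container-id>.service: Failed with result 'exit-code'"
    # lines back to container names via the podman records that carry both the
    # 64-character ID and name=<container_name>.
    import re
    from collections import Counter

    ID_TO_NAME = re.compile(
        r"container \w+ (?P<cid>[0-9a-f]{64}) \(image=[^,]+, name=(?P<name>[\w-]+)")
    FAILED = re.compile(r"(?P<cid>[0-9a-f]{64})\.service: Failed with result 'exit-code'")

    def failed_units(text):
        names = {m.group("cid"): m.group("name") for m in ID_TO_NAME.finditer(text)}
        failures = Counter()
        for m in FAILED.finditer(text):
            failures[names.get(m.group("cid"), m.group("cid")[:12])] += 1
        return dict(failures)

    if __name__ == "__main__":
        print(failed_units(open("journal.txt", errors="replace").read()))

Applied to this excerpt it would attribute the exit-code failures to ceilometer_agent_compute, ovn_controller, ovn_metadata_agent and nova_compute.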
Nov 23 03:58:35 localhost podman[104153]: 2025-11-23 08:58:35.03335896 +0000 UTC m=+0.217066923 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public) Nov 23 03:58:35 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:58:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:58:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:58:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:58:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 03:58:39 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 03:58:39 localhost recover_tripleo_nova_virtqemud[104314]: 62093 Nov 23 03:58:39 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Nov 23 03:58:39 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 03:58:39 localhost podman[104289]: 2025-11-23 08:58:39.908153626 +0000 UTC m=+0.093967144 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute) Nov 23 03:58:39 localhost podman[104290]: 2025-11-23 08:58:39.959707393 +0000 UTC m=+0.145022976 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 23 03:58:39 localhost podman[104289]: 2025-11-23 08:58:39.9881831 +0000 UTC m=+0.173996588 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1) Nov 23 03:58:39 localhost podman[104289]: unhealthy Nov 23 03:58:40 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:58:40 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Failed with result 'exit-code'. Nov 23 03:58:40 localhost podman[104290]: 2025-11-23 08:58:40.022428684 +0000 UTC m=+0.207744257 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git) Nov 23 03:58:40 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:58:40 localhost podman[104292]: 2025-11-23 08:58:40.059968149 +0000 UTC m=+0.239908886 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, version=17.1.12, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com) Nov 23 03:58:40 localhost podman[104291]: 2025-11-23 08:58:40.112123245 +0000 UTC m=+0.293409914 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64) Nov 23 03:58:40 localhost podman[104292]: 2025-11-23 08:58:40.142253093 +0000 UTC m=+0.322193860 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 23 03:58:40 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:58:40 localhost podman[104291]: 2025-11-23 08:58:40.485471839 +0000 UTC m=+0.666758518 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, 
batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_migration_target, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Nov 23 03:58:40 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:58:40 localhost systemd[1]: tmp-crun.LlLFpX.mount: Deactivated successfully. Nov 23 03:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 03:58:43 localhost podman[104386]: 2025-11-23 08:58:43.901305821 +0000 UTC m=+0.082622485 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 03:58:43 localhost podman[104386]: 2025-11-23 08:58:43.941270081 +0000 UTC m=+0.122586725 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, release=1761123044, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 23 03:58:43 localhost podman[104386]: unhealthy Nov 23 03:58:43 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:58:43 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 03:58:43 localhost podman[104384]: 2025-11-23 08:58:43.957884132 +0000 UTC m=+0.142982012 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 03:58:44 localhost podman[104385]: 2025-11-23 08:58:44.009615595 +0000 UTC m=+0.192437045 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, 
io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64) Nov 23 03:58:44 localhost podman[104385]: 2025-11-23 08:58:44.052089432 +0000 UTC m=+0.234910922 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, architecture=x86_64, config_id=tripleo_step4) Nov 23 03:58:44 localhost podman[104385]: unhealthy Nov 23 03:58:44 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:58:44 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:58:44 localhost podman[104384]: 2025-11-23 08:58:44.166956868 +0000 UTC m=+0.352054778 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1) Nov 23 03:58:44 localhost systemd[1]: 
2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:59:03 localhost sshd[104454]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:59:03 localhost systemd-logind[760]: New session 35 of user zuul. Nov 23 03:59:03 localhost systemd[1]: Started Session 35 of User zuul. Nov 23 03:59:04 localhost python3.9[104549]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 03:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 03:59:05 localhost systemd[1]: tmp-crun.O3SLJg.mount: Deactivated successfully. Nov 23 03:59:05 localhost systemd[1]: tmp-crun.5btlYs.mount: Deactivated successfully. Nov 23 03:59:05 localhost podman[104645]: 2025-11-23 08:59:05.546649941 +0000 UTC m=+0.082708047 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, vcs-type=git, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, 
version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com) Nov 23 03:59:05 localhost podman[104645]: 2025-11-23 08:59:05.555180154 +0000 UTC m=+0.091238300 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044) Nov 23 03:59:05 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. 
Nov 23 03:59:05 localhost podman[104643]: 2025-11-23 08:59:05.612641033 +0000 UTC m=+0.148741020 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public) Nov 23 03:59:05 localhost podman[104643]: 2025-11-23 08:59:05.619669209 +0000 UTC m=+0.155769166 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1) Nov 23 03:59:05 localhost podman[104646]: 2025-11-23 08:59:05.529371219 +0000 UTC m=+0.067766667 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team) Nov 23 03:59:05 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:59:05 localhost python3.9[104644]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 03:59:05 localhost podman[104646]: 2025-11-23 08:59:05.666303295 +0000 UTC m=+0.204698773 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:59:05 localhost podman[104646]: unhealthy Nov 23 03:59:05 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:59:05 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 03:59:06 localhost python3.9[104797]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 03:59:07 localhost python3.9[104891]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 03:59:07 localhost python3.9[104984]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 03:59:08 localhost python3.9[105075]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Nov 23 03:59:10 localhost python3.9[105165]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 03:59:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:59:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:59:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:59:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:59:10 localhost podman[105168]: 2025-11-23 08:59:10.91251006 +0000 UTC m=+0.092081306 container health_status 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:59:10 localhost podman[105170]: 2025-11-23 08:59:10.887242992 +0000 UTC m=+0.064538508 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 03:59:10 localhost podman[105169]: 2025-11-23 08:59:10.955804083 +0000 UTC m=+0.133106219 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12) Nov 23 03:59:10 localhost podman[105168]: 2025-11-23 08:59:10.971556888 +0000 UTC m=+0.151128134 container exec_died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, io.openshift.expose-services=, 
com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 03:59:10 localhost podman[105168]: unhealthy Nov 23 03:59:10 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:59:10 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Failed with result 'exit-code'. Nov 23 03:59:11 localhost podman[105172]: 2025-11-23 08:59:11.012386234 +0000 UTC m=+0.182977404 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 03:59:11 localhost podman[105169]: 2025-11-23 08:59:11.042286745 +0000 UTC m=+0.219588821 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 23 03:59:11 localhost podman[105172]: 2025-11-23 08:59:11.050255561 +0000 UTC m=+0.220846731 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 23 03:59:11 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:59:11 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 03:59:11 localhost podman[105170]: 2025-11-23 08:59:11.237395162 +0000 UTC m=+0.414690738 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, version=17.1.12, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, name=rhosp17/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Nov 23 03:59:11 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:59:11 localhost python3.9[105350]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Nov 23 03:59:12 localhost python3.9[105440]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 03:59:13 localhost python3.9[105488]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 03:59:13 localhost systemd[1]: session-35.scope: Deactivated successfully. Nov 23 03:59:13 localhost systemd[1]: session-35.scope: Consumed 4.677s CPU time. Nov 23 03:59:13 localhost systemd-logind[760]: Session 35 logged out. Waiting for processes to exit. Nov 23 03:59:13 localhost systemd-logind[760]: Removed session 35. Nov 23 03:59:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 03:59:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:59:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:59:14 localhost podman[105505]: 2025-11-23 08:59:14.895641506 +0000 UTC m=+0.081568253 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Nov 23 03:59:14 localhost podman[105505]: 2025-11-23 08:59:14.936534424 +0000 UTC m=+0.122461161 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, distribution-scope=public, container_name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:59:14 localhost podman[105505]: unhealthy Nov 23 03:59:14 localhost podman[105504]: 2025-11-23 08:59:14.946159641 +0000 UTC m=+0.134850373 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, url=https://www.redhat.com, vcs-type=git, version=17.1.12, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4) Nov 
23 03:59:14 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:59:14 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 03:59:15 localhost podman[105506]: 2025-11-23 08:59:15.001214325 +0000 UTC m=+0.184024255 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, distribution-scope=public) Nov 23 03:59:15 localhost podman[105506]: 2025-11-23 08:59:15.013807574 +0000 UTC m=+0.196617494 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc.) Nov 23 03:59:15 localhost podman[105506]: unhealthy Nov 23 03:59:15 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:59:15 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
Nov 23 03:59:15 localhost podman[105504]: 2025-11-23 08:59:15.135403547 +0000 UTC m=+0.324094239 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:59:15 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 03:59:20 localhost sshd[105573]: main: sshd: ssh-rsa algorithm is disabled Nov 23 03:59:20 localhost systemd-logind[760]: New session 36 of user zuul. Nov 23 03:59:20 localhost systemd[1]: Started Session 36 of User zuul. Nov 23 03:59:22 localhost python3.9[105668]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 03:59:22 localhost systemd[1]: Reloading. Nov 23 03:59:22 localhost systemd-rc-local-generator[105691]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:59:22 localhost systemd-sysv-generator[105698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:59:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:59:23 localhost python3.9[105794]: ansible-ansible.builtin.service_facts Invoked Nov 23 03:59:23 localhost network[105811]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 03:59:23 localhost network[105812]: 'network-scripts' will be removed from distribution in near future. Nov 23 03:59:23 localhost network[105813]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 03:59:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:59:27 localhost python3.9[106011]: ansible-ansible.builtin.service_facts Invoked Nov 23 03:59:27 localhost network[106028]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 03:59:27 localhost network[106029]: 'network-scripts' will be removed from distribution in near future. Nov 23 03:59:27 localhost network[106030]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 03:59:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:59:31 localhost python3.9[106230]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 03:59:32 localhost systemd[1]: Reloading. Nov 23 03:59:32 localhost systemd-rc-local-generator[106256]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 03:59:32 localhost systemd-sysv-generator[106261]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 03:59:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 03:59:32 localhost systemd[1]: Stopping ceilometer_agent_compute container... Nov 23 03:59:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 03:59:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 03:59:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 03:59:35 localhost podman[106285]: 2025-11-23 08:59:35.90445355 +0000 UTC m=+0.083948765 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z) Nov 23 03:59:35 localhost podman[106285]: 2025-11-23 08:59:35.939237921 +0000 UTC m=+0.118733156 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Nov 23 03:59:35 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. 
Nov 23 03:59:36 localhost podman[106286]: 2025-11-23 08:59:35.951791918 +0000 UTC m=+0.127857678 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:59:36 localhost podman[106287]: 2025-11-23 08:59:36.006809452 +0000 UTC m=+0.179794247 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public) Nov 23 03:59:36 localhost podman[106286]: 2025-11-23 08:59:36.031351597 +0000 UTC m=+0.207417347 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, release=1761123044, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 03:59:36 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 03:59:36 localhost podman[106287]: 2025-11-23 08:59:36.047532735 +0000 UTC m=+0.220517500 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 03:59:36 localhost podman[106287]: unhealthy Nov 23 03:59:36 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:59:36 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 03:59:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12794 DF PROTO=TCP SPT=46266 DPT=9105 SEQ=2001224173 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5E72E70000000001030307) Nov 23 03:59:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12795 DF PROTO=TCP SPT=46266 DPT=9105 SEQ=2001224173 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5E77100000000001030307) Nov 23 03:59:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12796 DF PROTO=TCP SPT=46266 DPT=9105 SEQ=2001224173 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5E7F0F0000000001030307) Nov 23 03:59:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 03:59:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 03:59:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 03:59:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 03:59:41 localhost podman[106422]: Error: container 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 is not running Nov 23 03:59:41 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Main process exited, code=exited, status=125/n/a Nov 23 03:59:41 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Failed with result 'exit-code'. Nov 23 03:59:41 localhost systemd[1]: tmp-crun.NUFw9v.mount: Deactivated successfully. Nov 23 03:59:41 localhost podman[106424]: 2025-11-23 08:59:41.47447478 +0000 UTC m=+0.152909378 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Nov 23 03:59:41 localhost podman[106423]: 2025-11-23 08:59:41.449824661 +0000 UTC m=+0.132493470 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, build-date=2025-11-19T00:12:45Z, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4) Nov 23 03:59:41 localhost podman[106426]: 2025-11-23 08:59:41.426671658 +0000 UTC m=+0.102590749 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12) Nov 23 03:59:41 localhost podman[106423]: 2025-11-23 08:59:41.533348382 +0000 UTC m=+0.216017191 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Nov 23 03:59:41 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. Nov 23 03:59:41 localhost podman[106426]: 2025-11-23 08:59:41.55568044 +0000 UTC m=+0.231599541 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, container_name=logrotate_crond, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, release=1761123044, url=https://www.redhat.com) Nov 23 03:59:41 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. 
Nov 23 03:59:41 localhost podman[106424]: 2025-11-23 08:59:41.829074457 +0000 UTC m=+0.507509075 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1761123044, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 03:59:41 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 03:59:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12797 DF PROTO=TCP SPT=46266 DPT=9105 SEQ=2001224173 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5E8ECF0000000001030307) Nov 23 03:59:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 03:59:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 03:59:45 localhost podman[106497]: 2025-11-23 08:59:45.153108782 +0000 UTC m=+0.087406532 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:59:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
Nov 23 03:59:45 localhost podman[106497]: 2025-11-23 08:59:45.195400995 +0000 UTC m=+0.129698705 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 03:59:45 localhost podman[106497]: unhealthy Nov 23 03:59:45 localhost systemd[1]: tmp-crun.tVSmQ1.mount: Deactivated successfully. Nov 23 03:59:45 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:59:45 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
Nov 23 03:59:45 localhost podman[106498]: 2025-11-23 08:59:45.220358392 +0000 UTC m=+0.153539407 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64) Nov 23 03:59:45 localhost podman[106498]: 2025-11-23 08:59:45.239380628 +0000 UTC m=+0.172561683 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 23 03:59:45 localhost podman[106498]: unhealthy Nov 23 03:59:45 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 03:59:45 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
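The two "unhealthy" results above come from the healthcheck commands configured on the containers themselves: the config_data shown in these entries lists 'healthcheck': {'test': '/openstack/healthcheck 6642'} for ovn_controller and '/openstack/healthcheck' for ovn_metadata_agent, and when that test exits non-zero the transient podman-healthcheck unit fails with status=1/FAILURE as logged. A minimal sketch (assuming the podman CLI on the host and the container names shown in the log) of re-running the same tests by hand to see their output:

    # Sketch: manually re-run the healthcheck commands podman runs for the two
    # containers reported unhealthy above. The test commands are copied from the
    # config_data in the log entries; the podman CLI is assumed to be available.
    import subprocess

    CHECKS = {
        "ovn_controller": ["/openstack/healthcheck", "6642"],
        "ovn_metadata_agent": ["/openstack/healthcheck"],
    }

    for name, test in CHECKS.items():
        # podman exec runs the same command inside the running container.
        result = subprocess.run(["podman", "exec", name, *test],
                                capture_output=True, text=True)
        print(name, "exit code", result.returncode)
        if result.returncode != 0:
            print(result.stdout.strip(), result.stderr.strip())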
Nov 23 03:59:45 localhost podman[106529]: 2025-11-23 08:59:45.334889828 +0000 UTC m=+0.145334195 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, release=1761123044, architecture=x86_64, tcib_managed=true, vcs-type=git, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Nov 23 03:59:45 localhost podman[106529]: 2025-11-23 08:59:45.56361504 +0000 UTC m=+0.374059407 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1) Nov 23 03:59:45 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 03:59:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39871 DF PROTO=TCP SPT=56304 DPT=9882 SEQ=3371040507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5E9B620000000001030307) Nov 23 03:59:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39872 DF PROTO=TCP SPT=56304 DPT=9882 SEQ=3371040507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5E9F4F0000000001030307) Nov 23 03:59:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17235 DF PROTO=TCP SPT=43624 DPT=9102 SEQ=1401487438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EA4430000000001030307) Nov 23 03:59:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39873 DF PROTO=TCP SPT=56304 DPT=9882 SEQ=3371040507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EA74F0000000001030307) Nov 23 03:59:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17236 DF PROTO=TCP SPT=43624 DPT=9102 SEQ=1401487438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EA84F0000000001030307) Nov 23 03:59:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30750 DF PROTO=TCP SPT=52296 DPT=9101 SEQ=1613090999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EA8D80000000001030307) Nov 23 03:59:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30751 DF PROTO=TCP SPT=52296 DPT=9101 SEQ=1613090999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EACCF0000000001030307) Nov 23 03:59:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12798 DF PROTO=TCP SPT=46266 DPT=9105 SEQ=2001224173 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EB0100000000001030307) Nov 23 03:59:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17237 DF PROTO=TCP SPT=43624 DPT=9102 SEQ=1401487438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EB04F0000000001030307) Nov 23 03:59:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30752 DF PROTO=TCP SPT=52296 DPT=9101 SEQ=1613090999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EB4CF0000000001030307) Nov 23 03:59:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=39874 DF PROTO=TCP SPT=56304 DPT=9882 SEQ=3371040507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EB7100000000001030307) Nov 23 03:59:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17238 DF PROTO=TCP SPT=43624 DPT=9102 SEQ=1401487438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EC00F0000000001030307) Nov 23 03:59:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30753 DF PROTO=TCP SPT=52296 DPT=9101 SEQ=1613090999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EC4900000000001030307) Nov 23 04:00:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43486 DF PROTO=TCP SPT=58914 DPT=9100 SEQ=165449635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5ECE8D0000000001030307) Nov 23 04:00:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43487 DF PROTO=TCP SPT=58914 DPT=9100 SEQ=165449635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5ED28F0000000001030307) Nov 23 04:00:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39875 DF PROTO=TCP SPT=56304 DPT=9882 SEQ=3371040507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5ED80F0000000001030307) Nov 23 04:00:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43488 DF PROTO=TCP SPT=58914 DPT=9100 SEQ=165449635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EDA8F0000000001030307) Nov 23 04:00:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17239 DF PROTO=TCP SPT=43624 DPT=9102 SEQ=1401487438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EE0100000000001030307) Nov 23 04:00:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30754 DF PROTO=TCP SPT=52296 DPT=9101 SEQ=1613090999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EE40F0000000001030307) Nov 23 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
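The kernel "DROPPING:" entries interleaved here are netfilter log output (the split MACSRC/MACDST/MACPROTO fields are typical of nftables log rules) for repeated TCP SYNs from 192.168.122.10 into br-ex on ports 9100-9105 and 9882, ports commonly used by metrics exporters; the unchanged SEQ values with incrementing IP IDs indicate the same connections being retried and dropped. A small parser sketch that tallies which source/port pairs are being dropped; the field names are taken from the lines above and the input path is illustrative, since these entries normally land in the journal or /var/log/messages:

    # Sketch: summarise the destination ports hit by "DROPPING:" firewall log
    # entries. Field names (SRC, DST, SPT, DPT, ...) follow the format seen above.
    import re
    from collections import Counter

    FIELD = re.compile(r"(\w+)=(\S+)")

    def parse_drop_line(line):
        """Return a dict of KEY=VALUE fields from one DROPPING: log line."""
        if "DROPPING:" not in line:
            return None
        return dict(FIELD.findall(line.split("DROPPING:", 1)[1]))

    ports = Counter()
    with open("/var/log/messages") as log:      # illustrative path
        for line in log:
            fields = parse_drop_line(line)
            if fields and "DPT" in fields:
                ports[(fields["SRC"], fields["DPT"])] += 1

    # e.g. [(('192.168.122.10', '9102'), 5), ...]
    print(ports.most_common())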
Nov 23 04:00:06 localhost podman[106571]: 2025-11-23 09:00:06.407444765 +0000 UTC m=+0.093100067 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, name=rhosp17/openstack-collectd, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 04:00:06 localhost podman[106572]: 2025-11-23 09:00:06.453189744 +0000 UTC m=+0.137475763 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, architecture=x86_64, container_name=iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 04:00:06 localhost podman[106572]: 2025-11-23 09:00:06.486748787 +0000 UTC m=+0.171034786 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Nov 23 04:00:06 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 04:00:06 localhost podman[106573]: 2025-11-23 09:00:06.503205724 +0000 UTC m=+0.181997244 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, container_name=nova_compute, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Nov 23 04:00:06 localhost podman[106571]: 2025-11-23 09:00:06.522729795 +0000 UTC m=+0.208385087 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, tcib_managed=true, managed_by=tripleo_ansible, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.buildah.version=1.41.4) Nov 23 04:00:06 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 04:00:06 localhost podman[106573]: 2025-11-23 09:00:06.575698356 +0000 UTC m=+0.254489926 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 04:00:06 localhost podman[106573]: unhealthy Nov 23 04:00:06 localhost systemd[1]: 
bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:00:06 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 04:00:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56185 DF PROTO=TCP SPT=46604 DPT=9105 SEQ=3925028919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EE8170000000001030307) Nov 23 04:00:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43489 DF PROTO=TCP SPT=58914 DPT=9100 SEQ=165449635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EEA500000000001030307) Nov 23 04:00:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56187 DF PROTO=TCP SPT=46604 DPT=9105 SEQ=3925028919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5EF40F0000000001030307) Nov 23 04:00:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 04:00:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 04:00:11 localhost podman[106632]: Error: container 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 is not running Nov 23 04:00:11 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Main process exited, code=exited, status=125/n/a Nov 23 04:00:11 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Failed with result 'exit-code'. Nov 23 04:00:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
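Note the two different failure shapes in this block: status=1/FAILURE (nova_compute above) means the container is running but its healthcheck command returned non-zero, while status=125/n/a together with "Error: container ... is not running" means podman could not run the check at all because the ceilometer_agent_compute container (131fb75a...) was already being stopped, as the SIGTERM/SIGKILL and "container died" entries that follow confirm. A sketch of telling the two cases apart from the host, assuming the podman CLI and the container names logged here:

    # Sketch: distinguish a failing healthcheck (container running, check exits 1)
    # from a check that cannot run (podman exits 125, container not running).
    import subprocess

    def container_state(name):
        # .State.Status is podman's container state, e.g. "running" or "exited".
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Status}}", name],
            capture_output=True, text=True)
        return out.stdout.strip() if out.returncode == 0 else None

    for name in ("nova_compute", "ceilometer_agent_compute"):
        state = container_state(name)
        if state == "running":
            # Same command the transient systemd units run; 0 = healthy, 1 = unhealthy.
            rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
            print(name, state, "healthcheck exit", rc)
        else:
            print(name, "not running:", state)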
Nov 23 04:00:11 localhost podman[106656]: 2025-11-23 09:00:11.737172148 +0000 UTC m=+0.082915295 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12) Nov 23 04:00:11 localhost podman[106633]: 2025-11-23 09:00:11.705335178 +0000 UTC m=+0.137266987 container health_status 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 23 04:00:11 localhost podman[106656]: 2025-11-23 09:00:11.772374982 +0000 UTC m=+0.118118139 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 23 04:00:11 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 04:00:11 localhost podman[106633]: 2025-11-23 09:00:11.78858084 +0000 UTC m=+0.220512639 container exec_died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 23 04:00:11 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Deactivated successfully. 
Nov 23 04:00:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 04:00:12 localhost systemd[1]: tmp-crun.4hyCTm.mount: Deactivated successfully. Nov 23 04:00:12 localhost podman[106689]: 2025-11-23 09:00:12.906297341 +0000 UTC m=+0.084742950 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044) Nov 23 04:00:13 localhost podman[106689]: 2025-11-23 09:00:13.291204151 +0000 UTC m=+0.469649750 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, vendor=Red 
Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 04:00:13 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 04:00:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56188 DF PROTO=TCP SPT=46604 DPT=9105 SEQ=3925028919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F03CF0000000001030307) Nov 23 04:00:14 localhost podman[106271]: time="2025-11-23T09:00:14Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Nov 23 04:00:14 localhost systemd[1]: libpod-131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.scope: Deactivated successfully. Nov 23 04:00:14 localhost systemd[1]: libpod-131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.scope: Consumed 6.002s CPU time. 
Nov 23 04:00:14 localhost podman[106271]: 2025-11-23 09:00:14.949224835 +0000 UTC m=+42.099377685 container died 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, container_name=ceilometer_agent_compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4) Nov 23 04:00:14 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.timer: Deactivated successfully. Nov 23 04:00:14 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182. Nov 23 04:00:14 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Failed to open /run/systemd/transient/131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: No such file or directory Nov 23 04:00:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182-userdata-shm.mount: Deactivated successfully. 
Nov 23 04:00:15 localhost podman[106271]: 2025-11-23 09:00:15.008163959 +0000 UTC m=+42.158316729 container cleanup 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true) Nov 23 04:00:15 localhost podman[106271]: ceilometer_agent_compute Nov 23 04:00:15 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.timer: Failed to open /run/systemd/transient/131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.timer: No such file or directory Nov 23 04:00:15 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Failed to open /run/systemd/transient/131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: No such file or directory Nov 23 04:00:15 localhost podman[106713]: 2025-11-23 09:00:15.081652051 +0000 UTC m=+0.121397248 container cleanup 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git) Nov 23 04:00:15 localhost systemd[1]: libpod-conmon-131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.scope: Deactivated successfully. 
Nov 23 04:00:15 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.timer: Failed to open /run/systemd/transient/131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.timer: No such file or directory Nov 23 04:00:15 localhost systemd[1]: 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: Failed to open /run/systemd/transient/131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182.service: No such file or directory Nov 23 04:00:15 localhost podman[106726]: 2025-11-23 09:00:15.179483983 +0000 UTC m=+0.068218912 container cleanup 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 04:00:15 localhost podman[106726]: ceilometer_agent_compute Nov 23 04:00:15 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Nov 23 04:00:15 localhost systemd[1]: Stopped ceilometer_agent_compute container. 
Nov 23 04:00:15 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.085s CPU time, no IO. Nov 23 04:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 04:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 04:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 04:00:15 localhost podman[106831]: 2025-11-23 09:00:15.749142291 +0000 UTC m=+0.088259208 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=metrics_qdr, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 04:00:15 localhost podman[106832]: 2025-11-23 09:00:15.805923538 +0000 UTC m=+0.141644981 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, tcib_managed=true, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64) Nov 23 04:00:15 localhost podman[106833]: 2025-11-23 09:00:15.845544729 +0000 UTC m=+0.177197797 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=ovn_metadata_agent, config_id=tripleo_step4, managed_by=tripleo_ansible) Nov 23 04:00:15 localhost podman[106832]: 2025-11-23 09:00:15.872938662 +0000 UTC m=+0.208660105 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) 
Nov 23 04:00:15 localhost podman[106832]: unhealthy Nov 23 04:00:15 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:00:15 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 04:00:15 localhost podman[106833]: 2025-11-23 09:00:15.895164456 +0000 UTC m=+0.226817494 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 04:00:15 localhost podman[106833]: unhealthy Nov 23 04:00:15 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, 
code=exited, status=1/FAILURE Nov 23 04:00:15 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 04:00:15 localhost systemd[1]: var-lib-containers-storage-overlay-b0121206d5651924911ca1f3a8713f12deef5f18ac2e527773cfa01e24243f70-merged.mount: Deactivated successfully. Nov 23 04:00:15 localhost python3.9[106830]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:00:15 localhost podman[106831]: 2025-11-23 09:00:15.963430868 +0000 UTC m=+0.302547775 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, release=1761123044, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git) Nov 23 04:00:15 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. Nov 23 04:00:17 localhost systemd[1]: Reloading. 
Nov 23 04:00:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44073 DF PROTO=TCP SPT=50832 DPT=9882 SEQ=1583581281 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F10930000000001030307) Nov 23 04:00:17 localhost systemd-rc-local-generator[106926]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:00:17 localhost systemd-sysv-generator[106929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:00:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:00:17 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... Nov 23 04:00:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39876 DF PROTO=TCP SPT=56304 DPT=9882 SEQ=3371040507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F180F0000000001030307) Nov 23 04:00:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30755 DF PROTO=TCP SPT=52296 DPT=9101 SEQ=1613090999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F240F0000000001030307) Nov 23 04:00:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61612 DF PROTO=TCP SPT=54738 DPT=9102 SEQ=2409803928 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F35500000000001030307) Nov 23 04:00:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1276 DF PROTO=TCP SPT=58368 DPT=9100 SEQ=481273203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F43BE0000000001030307) Nov 23 04:00:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1277 DF PROTO=TCP SPT=58368 DPT=9100 SEQ=481273203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F47CF0000000001030307) Nov 23 04:00:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 04:00:32 localhost recover_tripleo_nova_virtqemud[106952]: 62093 Nov 23 04:00:32 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 04:00:32 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 04:00:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61613 DF PROTO=TCP SPT=54738 DPT=9102 SEQ=2409803928 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F560F0000000001030307) Nov 23 04:00:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. 
Nov 23 04:00:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 04:00:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 04:00:36 localhost podman[106953]: 2025-11-23 09:00:36.909380767 +0000 UTC m=+0.086299348 container health_status 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, container_name=collectd, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd) Nov 23 04:00:36 localhost podman[106953]: 2025-11-23 09:00:36.919619701 +0000 UTC m=+0.096538232 container exec_died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 23 04:00:36 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Deactivated successfully. Nov 23 04:00:36 localhost systemd[1]: tmp-crun.KQt4x4.mount: Deactivated successfully. 
Nov 23 04:00:36 localhost podman[106955]: 2025-11-23 09:00:36.979130623 +0000 UTC m=+0.149196184 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5) Nov 23 04:00:37 localhost podman[106955]: 2025-11-23 09:00:37.001371028 +0000 UTC m=+0.171436599 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, managed_by=tripleo_ansible, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Nov 23 04:00:37 localhost podman[106955]: unhealthy Nov 23 04:00:37 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:00:37 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. 
Nov 23 04:00:37 localhost podman[106954]: 2025-11-23 09:00:37.066340559 +0000 UTC m=+0.241988832 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, architecture=x86_64, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 04:00:37 localhost podman[106954]: 2025-11-23 09:00:37.10340207 +0000 UTC m=+0.279050333 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, url=https://www.redhat.com) Nov 23 04:00:37 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 04:00:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1279 DF PROTO=TCP SPT=58368 DPT=9100 SEQ=481273203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F5F8F0000000001030307) Nov 23 04:00:37 localhost systemd[1]: tmp-crun.OdtP8x.mount: Deactivated successfully. Nov 23 04:00:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48989 DF PROTO=TCP SPT=52178 DPT=9105 SEQ=1210821248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F694F0000000001030307) Nov 23 04:00:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. Nov 23 04:00:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. 
Nov 23 04:00:41 localhost podman[107094]: 2025-11-23 09:00:41.892036984 +0000 UTC m=+0.073382320 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 04:00:41 localhost podman[107094]: 2025-11-23 09:00:41.904329753 +0000 UTC m=+0.085675109 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 23 04:00:41 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 04:00:41 localhost podman[107095]: Error: container 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d is not running Nov 23 04:00:41 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Main process exited, code=exited, status=125/n/a Nov 23 04:00:41 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Failed with result 'exit-code'. Nov 23 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 04:00:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48990 DF PROTO=TCP SPT=52178 DPT=9105 SEQ=1210821248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F79100000000001030307) Nov 23 04:00:43 localhost podman[107124]: 2025-11-23 09:00:43.873195527 +0000 UTC m=+0.067253091 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12) Nov 23 04:00:44 localhost podman[107124]: 2025-11-23 09:00:44.227739133 +0000 UTC m=+0.421796667 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=nova_migration_target, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 04:00:44 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 04:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 04:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 04:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 04:00:46 localhost podman[107148]: 2025-11-23 09:00:46.151471458 +0000 UTC m=+0.085437512 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, release=1761123044, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12) Nov 23 04:00:46 localhost podman[107150]: 2025-11-23 09:00:46.219043208 +0000 UTC m=+0.144383917 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com) Nov 23 04:00:46 localhost podman[107150]: 2025-11-23 09:00:46.236595968 +0000 UTC m=+0.161936717 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 04:00:46 localhost podman[107150]: unhealthy Nov 23 04:00:46 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:00:46 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
Nov 23 04:00:46 localhost podman[107149]: 2025-11-23 09:00:46.195825654 +0000 UTC m=+0.126507827 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, container_name=ovn_controller, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1761123044, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4) Nov 23 04:00:46 localhost podman[107149]: 2025-11-23 09:00:46.281406148 +0000 UTC m=+0.212088311 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 23 04:00:46 localhost podman[107149]: unhealthy Nov 23 04:00:46 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:00:46 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 04:00:46 localhost podman[107148]: 2025-11-23 09:00:46.36947862 +0000 UTC m=+0.303444654 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Nov 23 04:00:46 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully.
Nov 23 04:00:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51277 DF PROTO=TCP SPT=56110 DPT=9882 SEQ=3240422757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F85C30000000001030307)
Nov 23 04:00:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26662 DF PROTO=TCP SPT=50880 DPT=9102 SEQ=2688875489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F8EA30000000001030307)
Nov 23 04:00:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59910 DF PROTO=TCP SPT=42860 DPT=9101 SEQ=2706510319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5F9A0F0000000001030307)
Nov 23 04:00:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26665 DF PROTO=TCP SPT=50880 DPT=9102 SEQ=2688875489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5FAA4F0000000001030307)
Nov 23 04:00:59 localhost podman[106937]: time="2025-11-23T09:00:59Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL"
Nov 23 04:00:59 localhost systemd[1]: libpod-21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.scope: Deactivated successfully.
Nov 23 04:00:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:5b:6f:0e MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=37212 SEQ=3532432091 ACK=0 WINDOW=0 RES=0x00 RST URGP=0
Nov 23 04:00:59 localhost systemd[1]: libpod-21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.scope: Consumed 6.689s CPU time.
Nov 23 04:00:59 localhost podman[106937]: 2025-11-23 09:00:59.485920958 +0000 UTC m=+42.101222936 container died 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Nov 23 04:00:59 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.timer: Deactivated successfully. Nov 23 04:00:59 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d. Nov 23 04:00:59 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Failed to open /run/systemd/transient/21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: No such file or directory Nov 23 04:00:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d-userdata-shm.mount: Deactivated successfully. Nov 23 04:00:59 localhost systemd[1]: var-lib-containers-storage-overlay-3a88026e8435ee6e4a9cdaa4ab5e7c8d8b76dc6fc1517ed344c4771e775bf72d-merged.mount: Deactivated successfully. 
Nov 23 04:00:59 localhost podman[106937]: 2025-11-23 09:00:59.547901496 +0000 UTC m=+42.163203444 container cleanup 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git) Nov 23 04:00:59 localhost podman[106937]: ceilometer_agent_ipmi Nov 23 04:00:59 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.timer: Failed to open /run/systemd/transient/21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.timer: No such file or directory Nov 23 04:00:59 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Failed to open /run/systemd/transient/21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: No such file or directory Nov 23 04:00:59 localhost podman[107218]: 2025-11-23 09:00:59.579588702 +0000 UTC m=+0.077900070 container cleanup 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com) Nov 23 04:00:59 localhost systemd[1]: libpod-conmon-21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.scope: Deactivated successfully. 
Nov 23 04:00:59 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.timer: Failed to open /run/systemd/transient/21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.timer: No such file or directory Nov 23 04:00:59 localhost systemd[1]: 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: Failed to open /run/systemd/transient/21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d.service: No such file or directory Nov 23 04:00:59 localhost podman[107234]: 2025-11-23 09:00:59.683346916 +0000 UTC m=+0.073119202 container cleanup 21b6a6db7be46537de5da140341578b07b48221ca1381a65d202f146bd272f2d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, tcib_managed=true, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1) Nov 23 04:00:59 localhost podman[107234]: ceilometer_agent_ipmi Nov 23 04:00:59 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully. Nov 23 04:00:59 localhost systemd[1]: Stopped ceilometer_agent_ipmi container. 
Nov 23 04:01:00 localhost python3.9[107337]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:01:00 localhost systemd[1]: Reloading.
Nov 23 04:01:00 localhost systemd-sysv-generator[107370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:01:00 localhost systemd-rc-local-generator[107366]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:01:00 localhost sshd[107376]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:01:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:01:00 localhost systemd[1]: Stopping collectd container...
Nov 23 04:01:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:5b:6f:0e MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=37212 SEQ=3532432091 ACK=0 WINDOW=0 RES=0x00 RST URGP=0
Nov 23 04:01:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43492 DF PROTO=TCP SPT=58914 DPT=9100 SEQ=165449635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5FC80F0000000001030307)
Nov 23 04:01:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.
Nov 23 04:01:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.
Nov 23 04:01:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.
Nov 23 04:01:07 localhost podman[107419]: 2025-11-23 09:01:07.172749148 +0000 UTC m=+0.097944876 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-nova-compute, version=17.1.12, container_name=nova_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step5) Nov 23 04:01:07 localhost podman[107418]: Error: container 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc is not running Nov 23 04:01:07 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Main process exited, code=exited, 
status=125/n/a Nov 23 04:01:07 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Failed with result 'exit-code'. Nov 23 04:01:07 localhost podman[107419]: 2025-11-23 09:01:07.258376634 +0000 UTC m=+0.183572362 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1) Nov 23 04:01:07 localhost podman[107419]: unhealthy Nov 23 04:01:07 localhost podman[107447]: 2025-11-23 09:01:07.271398995 +0000 UTC m=+0.086230236 container 
health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 04:01:07 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:01:07 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. 
Nov 23 04:01:07 localhost podman[107447]: 2025-11-23 09:01:07.28226497 +0000 UTC m=+0.097096121 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, architecture=x86_64, container_name=iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 23 04:01:07 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 04:01:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31999 DF PROTO=TCP SPT=39204 DPT=9100 SEQ=3346742454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5FD4D00000000001030307) Nov 23 04:01:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48731 DF PROTO=TCP SPT=59166 DPT=9105 SEQ=3352086361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5FDE8F0000000001030307) Nov 23 04:01:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 04:01:12 localhost podman[107470]: 2025-11-23 09:01:12.392616068 +0000 UTC m=+0.079074085 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.expose-services=, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 04:01:12 localhost podman[107470]: 2025-11-23 09:01:12.40080079 +0000 UTC m=+0.087258787 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack 
Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1) Nov 23 04:01:12 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 04:01:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48732 DF PROTO=TCP SPT=59166 DPT=9105 SEQ=3352086361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5FEE4F0000000001030307) Nov 23 04:01:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. 
Nov 23 04:01:14 localhost podman[107489]: 2025-11-23 09:01:14.636826199 +0000 UTC m=+0.074292449 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=nova_migration_target, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z) Nov 23 04:01:15 localhost podman[107489]: 2025-11-23 09:01:15.009477301 +0000 UTC m=+0.446943511 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044) Nov 23 04:01:15 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 04:01:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32000 DF PROTO=TCP SPT=39204 DPT=9100 SEQ=3346742454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B5FF60F0000000001030307) Nov 23 04:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 04:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 04:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
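Each podman event above carries the container's TripleO definition inline as config_data=..., written as a Python-style dict literal ('privileged': True, single-quoted strings) rather than JSON, so the healthcheck test, network mode and volume list can be recovered straight from the journal. A minimal editor-added sketch of that extraction follows; the trimmed event_line fragment reuses values from the logrotate_crond entry above, and the brace counting assumes no braces occur inside the quoted values, which holds for the entries in this excerpt.

    # Sketch: pull the config_data={...} literal out of a podman event line and
    # parse it with ast.literal_eval (the value is a Python literal, not JSON).
    import ast

    def extract_config_data(event_line: str) -> dict:
        start = event_line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(event_line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(event_line[start:i + 1])
        raise ValueError("unbalanced config_data value")

    # Trimmed-down fragment; values taken from the logrotate_crond event above.
    event_line = ("container health_status ... (name=logrotate_crond, "
                  "config_data={'net': 'none', 'pid': 'host', 'privileged': True, "
                  "'restart': 'always'}, tcib_managed=true)")
    cfg = extract_config_data(event_line)
    print(cfg["net"], cfg["privileged"])  # -> none True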
Nov 23 04:01:16 localhost podman[107512]: 2025-11-23 09:01:16.922068964 +0000 UTC m=+0.102906509 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 04:01:16 localhost podman[107513]: 2025-11-23 09:01:16.965503541 +0000 UTC m=+0.140734324 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true) Nov 23 04:01:17 localhost podman[107513]: 2025-11-23 09:01:17.010482205 +0000 UTC m=+0.185713048 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z) Nov 23 04:01:17 localhost podman[107513]: 
unhealthy Nov 23 04:01:17 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:01:17 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 04:01:17 localhost podman[107514]: 2025-11-23 09:01:17.027742277 +0000 UTC m=+0.197470000 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z) Nov 23 04:01:17 localhost podman[107514]: 2025-11-23 09:01:17.047499655 +0000 UTC m=+0.217227418 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, vcs-type=git) Nov 23 04:01:17 localhost podman[107514]: unhealthy Nov 23 04:01:17 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:01:17 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
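The two unhealthy results above (ovn_controller and ovn_metadata_agent) come from the transient per-container .service units that wrap /usr/bin/podman healthcheck run: the unit exits with status=1/FAILURE when the check command inside the container returns non-zero. As a rough, editor-added illustration (not part of this log), the sketch below re-runs the same checks by hand and maps the exit code the way these entries do; the container names are taken from the events above, the 0/1 meanings are inferred from the healthy/unhealthy statuses in this excerpt, and any other code is treated as a podman-level error (compare the status=125 "container ... is not running" case further down).

    # Sketch: re-run a container's podman healthcheck and classify the result.
    import subprocess

    def probe(container: str) -> str:
        # Same command the transient systemd units in this log execute.
        proc = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        if proc.returncode == 0:
            return "healthy"
        if proc.returncode == 1:
            return "unhealthy"
        # e.g. container not running; podman itself reports the error.
        return f"error (exit {proc.returncode}): {proc.stderr.strip()}"

    for name in ("ovn_controller", "ovn_metadata_agent"):
        print(name, probe(name))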
Nov 23 04:01:17 localhost podman[107512]: 2025-11-23 09:01:17.151493737 +0000 UTC m=+0.332331312 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 04:01:17 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Deactivated successfully. 
Nov 23 04:01:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51282 DF PROTO=TCP SPT=56110 DPT=9882 SEQ=3240422757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60020F0000000001030307) Nov 23 04:01:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48733 DF PROTO=TCP SPT=59166 DPT=9105 SEQ=3352086361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B600E0F0000000001030307) Nov 23 04:01:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:5b:6f:0e MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=37212 SEQ=3532432091 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Nov 23 04:01:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36065 DF PROTO=TCP SPT=57554 DPT=9100 SEQ=2947091217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B602E1E0000000001030307) Nov 23 04:01:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36066 DF PROTO=TCP SPT=57554 DPT=9100 SEQ=2947091217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60320F0000000001030307) Nov 23 04:01:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1282 DF PROTO=TCP SPT=58368 DPT=9100 SEQ=481273203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B603E0F0000000001030307) Nov 23 04:01:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36068 DF PROTO=TCP SPT=57554 DPT=9100 SEQ=2947091217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6049CF0000000001030307) Nov 23 04:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 04:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 04:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. Nov 23 04:01:37 localhost podman[107583]: Error: container 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc is not running Nov 23 04:01:37 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Main process exited, code=exited, status=125/n/a Nov 23 04:01:37 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Failed with result 'exit-code'. 
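The kernel DROPPING entries above record blocked TCP SYNs from 192.168.122.10 toward local ports 9100, 9105 and 9882, plus one RST from 192.168.122.110 with source port 6379; the key=value layout (SRC=, DST=, PROTO=, DPT=) is fixed, which makes the drops easy to tally. A minimal editor-added sketch follows, assuming the lines can be pulled with journalctl -k (that invocation, not the parsing, is the assumption here).

    # Sketch: count kernel "DROPPING:" firewall entries by source, protocol
    # and destination port, using the key=value fields shown in the log above.
    import re
    import subprocess
    from collections import Counter

    FIELD = re.compile(r"(\w+)=(\S+)")

    def summarize(lines):
        drops = Counter()
        for line in lines:
            if "DROPPING:" not in line:
                continue
            kv = dict(FIELD.findall(line.split("DROPPING:", 1)[1]))
            drops[(kv.get("SRC"), kv.get("PROTO"), kv.get("DPT"))] += 1
        return drops

    out = subprocess.run(["journalctl", "-k", "--no-pager"],
                         capture_output=True, text=True).stdout
    for (src, proto, dpt), count in summarize(out.splitlines()).most_common():
        print(f"{count:4d}  {src} -> {proto}/{dpt}")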
Nov 23 04:01:37 localhost podman[107584]: 2025-11-23 09:01:37.704421617 +0000 UTC m=+0.136763782 container health_status a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z) Nov 23 04:01:37 localhost podman[107584]: 2025-11-23 09:01:37.716256782 +0000 UTC m=+0.148598937 container exec_died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, release=1761123044, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 23 04:01:37 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Deactivated successfully. Nov 23 04:01:37 localhost podman[107585]: 2025-11-23 09:01:37.80263075 +0000 UTC m=+0.230698873 container health_status bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Nov 23 04:01:37 localhost podman[107585]: 2025-11-23 09:01:37.852411803 +0000 UTC m=+0.280479916 container exec_died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, io.openshift.expose-services=, version=17.1.12) Nov 23 04:01:37 localhost podman[107585]: unhealthy Nov 23 04:01:37 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:01:37 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 04:01:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63355 DF PROTO=TCP SPT=40920 DPT=9105 SEQ=2365801846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6053CF0000000001030307) Nov 23 04:01:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 04:01:42 localhost podman[107700]: 2025-11-23 09:01:42.905801038 +0000 UTC m=+0.085330757 container health_status c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, container_name=logrotate_crond, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron) Nov 23 04:01:42 localhost podman[107700]: 2025-11-23 09:01:42.920374727 +0000 UTC m=+0.099904456 container exec_died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step4, 
managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 23 04:01:42 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Deactivated successfully. Nov 23 04:01:43 localhost podman[107380]: time="2025-11-23T09:01:43Z" level=warning msg="StopSignal SIGTERM failed to stop container collectd in 42 seconds, resorting to SIGKILL" Nov 23 04:01:43 localhost podman[107380]: 2025-11-23 09:01:43.053177016 +0000 UTC m=+42.084341915 container stop 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 04:01:43 localhost systemd[1]: libpod-82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.scope: Deactivated successfully. Nov 23 04:01:43 localhost systemd[1]: libpod-82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.scope: Consumed 2.161s CPU time. Nov 23 04:01:43 localhost podman[107380]: 2025-11-23 09:01:43.087344358 +0000 UTC m=+42.118509257 container died 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, 
batch=17.1_20251118.1, container_name=collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-collectd-container) Nov 23 04:01:43 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.timer: Deactivated successfully. Nov 23 04:01:43 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc. Nov 23 04:01:43 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Failed to open /run/systemd/transient/82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: No such file or directory Nov 23 04:01:43 localhost podman[107380]: 2025-11-23 09:01:43.185592162 +0000 UTC m=+42.216757051 container cleanup 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, container_name=collectd, 
name=rhosp17/openstack-collectd, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Nov 23 04:01:43 localhost podman[107380]: collectd Nov 23 04:01:43 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.timer: Failed to open /run/systemd/transient/82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.timer: No such file or directory Nov 23 04:01:43 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Failed to open /run/systemd/transient/82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: No such file or directory Nov 23 04:01:43 localhost podman[107726]: 2025-11-23 09:01:43.210126908 +0000 UTC m=+0.141913991 container cleanup 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, container_name=collectd, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Nov 23 04:01:43 localhost systemd[1]: libpod-conmon-82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.scope: Deactivated successfully. Nov 23 04:01:43 localhost podman[107761]: error opening file `/run/crun/82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc/status`: No such file or directory Nov 23 04:01:43 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.timer: Failed to open /run/systemd/transient/82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.timer: No such file or directory Nov 23 04:01:43 localhost systemd[1]: 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: Failed to open /run/systemd/transient/82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc.service: No such file or directory Nov 23 04:01:43 localhost podman[107750]: 2025-11-23 09:01:43.329276145 +0000 UTC m=+0.086961058 container cleanup 82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=collectd, name=rhosp17/openstack-collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Nov 23 04:01:43 localhost podman[107750]: collectd Nov 23 04:01:43 localhost systemd[1]: tripleo_collectd.service: Deactivated successfully. Nov 23 04:01:43 localhost systemd[1]: Stopped collectd container. Nov 23 04:01:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63356 DF PROTO=TCP SPT=40920 DPT=9105 SEQ=2365801846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6063900000000001030307) Nov 23 04:01:43 localhost systemd[1]: var-lib-containers-storage-overlay-8562d6b9d1c21090ac473dd301ec4770b1bd3ad140de381119a1325c7a15d2fa-merged.mount: Deactivated successfully. Nov 23 04:01:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82704bc9324f7759b00ce4a760e79c19101683d6fceae8cf263ee00107cd70bc-userdata-shm.mount: Deactivated successfully. Nov 23 04:01:44 localhost python3.9[107856]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:01:44 localhost systemd[1]: Reloading. Nov 23 04:01:44 localhost systemd-rc-local-generator[107880]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:01:44 localhost systemd-sysv-generator[107885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:01:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:01:44 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 04:01:44 localhost systemd[1]: Stopping iscsid container... Nov 23 04:01:44 localhost recover_tripleo_nova_virtqemud[107897]: 62093 Nov 23 04:01:44 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 04:01:44 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 04:01:44 localhost systemd[1]: libpod-a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.scope: Deactivated successfully. Nov 23 04:01:44 localhost systemd[1]: libpod-a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.scope: Consumed 1.155s CPU time. 
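The kernel "DROPPING:" entries in this log are emitted by a firewall logging rule on the host (the prefix and key=value layout are typical of an iptables/nftables LOG target); the exact rule is not shown here. As a minimal sketch, the fields named in the line itself (IN, SRC, DST, PROTO, SPT, DPT) can be pulled out with the standard library; the sample string is abridged from the entry above and everything else is illustrative.

# Sketch: parse KEY=VALUE fields out of a kernel firewall log line like the
# "DROPPING:" entries above. Bare flags such as SYN or DF carry no '=' and
# are simply skipped; tokens like "OUT=" parse to an empty value.
import re

sample = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 "
          "SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TTL=62 "
          "PROTO=TCP SPT=40920 DPT=9105 SYN")

def parse_firewall_line(line):
    fields = dict(re.findall(r"(\w+)=(\S*)", line))
    return {
        "in_iface": fields.get("IN"),
        "src": fields.get("SRC"),
        "dst": fields.get("DST"),
        "proto": fields.get("PROTO"),
        "sport": fields.get("SPT"),
        "dport": fields.get("DPT"),
    }

print(parse_firewall_line(sample))
# e.g. {'in_iface': 'br-ex', 'src': '192.168.122.10', ..., 'dport': '9105'}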
Nov 23 04:01:44 localhost podman[107899]: 2025-11-23 09:01:44.638236383 +0000 UTC m=+0.082595743 container died a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 23 04:01:44 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.timer: Deactivated successfully. Nov 23 04:01:44 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a. Nov 23 04:01:44 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Failed to open /run/systemd/transient/a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: No such file or directory Nov 23 04:01:44 localhost systemd[1]: tmp-crun.FqGlFp.mount: Deactivated successfully. Nov 23 04:01:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a-userdata-shm.mount: Deactivated successfully. 
Nov 23 04:01:44 localhost podman[107899]: 2025-11-23 09:01:44.68651908 +0000 UTC m=+0.130878420 container cleanup a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Nov 23 04:01:44 localhost podman[107899]: iscsid Nov 23 04:01:44 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.timer: Failed to open /run/systemd/transient/a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.timer: No such file or directory Nov 23 04:01:44 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Failed to open /run/systemd/transient/a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: No such file or directory Nov 23 04:01:44 localhost podman[107912]: 2025-11-23 09:01:44.724102027 +0000 UTC m=+0.073124482 container cleanup a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, build-date=2025-11-18T23:44:13Z, summary=Red 
Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, version=17.1.12, distribution-scope=public, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 04:01:44 localhost systemd[1]: libpod-conmon-a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.scope: Deactivated successfully. 
Nov 23 04:01:44 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.timer: Failed to open /run/systemd/transient/a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.timer: No such file or directory Nov 23 04:01:44 localhost systemd[1]: a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: Failed to open /run/systemd/transient/a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a.service: No such file or directory Nov 23 04:01:44 localhost podman[107926]: 2025-11-23 09:01:44.830598876 +0000 UTC m=+0.071775551 container cleanup a36875cf25112fdeab78bbe4d6d566c848b08329c0ef8b1fc2aea3179b48e63a (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, version=17.1.12) Nov 23 04:01:44 localhost podman[107926]: iscsid Nov 23 04:01:44 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully. Nov 23 04:01:44 localhost systemd[1]: Stopped iscsid container. Nov 23 04:01:44 localhost systemd[1]: var-lib-containers-storage-overlay-575692a885e4e5d5a8b1e76315957cc96af13a896db846450cad3752e5067ba2-merged.mount: Deactivated successfully. 
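Each cleanup record above embeds the TripleO container definition in a config_data label written as a Python literal (single quotes, True/False), not JSON. Assuming the container still exists and podman is on PATH, the label can be read back and parsed with the standard library alone; the container name iscsid is taken from the log, the helper name is illustrative.

# Sketch: recover the 'config_data' label TripleO stores on a container and
# turn it back into a dict. ast.literal_eval is used because the label is a
# Python literal rather than JSON.
import ast
import subprocess

def read_config_data(container="iscsid"):
    out = subprocess.run(
        ["podman", "inspect", "--format",
         '{{ index .Config.Labels "config_data" }}', container],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return ast.literal_eval(out)

cfg = read_config_data()
print(cfg.get("image"), cfg.get("net"), len(cfg.get("volumes", [])))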
Nov 23 04:01:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 04:01:45 localhost podman[108029]: 2025-11-23 09:01:45.414875184 +0000 UTC m=+0.091537690 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z) Nov 23 04:01:45 localhost python3.9[108028]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:01:45 localhost systemd[1]: Reloading. Nov 23 04:01:45 localhost systemd-rc-local-generator[108076]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:01:45 localhost systemd-sysv-generator[108081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
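The ansible-ansible.builtin.systemd_service invocations in this stretch of the log (tripleo_iscsid.service above, tripleo_logrotate_crond.service here) all pass enabled=False and state=stopped, which on the host amounts to disabling and stopping each unit. A minimal sketch of the same operation done directly with systemctl, assuming it is run as root; the unit names come from the log.

# Sketch: what 'enabled=False, state=stopped' amounts to for a unit, done
# directly with systemctl instead of the Ansible systemd_service module.
# 'disable --now' removes the enablement symlinks and stops the unit.
import subprocess

def disable_and_stop(unit):
    subprocess.run(["systemctl", "disable", "--now", unit], check=True)

for unit in ("tripleo_iscsid.service", "tripleo_logrotate_crond.service"):
    disable_and_stop(unit)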
Nov 23 04:01:45 localhost podman[108029]: 2025-11-23 09:01:45.79349468 +0000 UTC m=+0.470157166 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Nov 23 04:01:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:01:45 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. Nov 23 04:01:45 localhost systemd[1]: Stopping logrotate_crond container... Nov 23 04:01:46 localhost systemd[1]: libpod-c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.scope: Deactivated successfully. Nov 23 04:01:46 localhost systemd[1]: libpod-c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.scope: Consumed 1.072s CPU time. 
Nov 23 04:01:46 localhost podman[108090]: 2025-11-23 09:01:46.083833478 +0000 UTC m=+0.077800626 container died c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Nov 23 04:01:46 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.timer: Deactivated successfully. Nov 23 04:01:46 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5. 
Nov 23 04:01:46 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Failed to open /run/systemd/transient/c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: No such file or directory Nov 23 04:01:46 localhost podman[108090]: 2025-11-23 09:01:46.138881923 +0000 UTC m=+0.132849071 container cleanup c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, distribution-scope=public, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4) Nov 23 04:01:46 localhost podman[108090]: logrotate_crond Nov 23 04:01:46 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.timer: Failed to open /run/systemd/transient/c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.timer: No such file or directory Nov 23 04:01:46 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Failed to open /run/systemd/transient/c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: No such file or directory Nov 23 04:01:46 localhost podman[108102]: 2025-11-23 09:01:46.192126102 +0000 UTC m=+0.095952115 container cleanup c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, 
name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=logrotate_crond, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, release=1761123044) Nov 23 04:01:46 localhost systemd[1]: libpod-conmon-c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.scope: Deactivated successfully. 
Nov 23 04:01:46 localhost podman[108134]: error opening file `/run/crun/c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5/status`: No such file or directory Nov 23 04:01:46 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.timer: Failed to open /run/systemd/transient/c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.timer: No such file or directory Nov 23 04:01:46 localhost systemd[1]: c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: Failed to open /run/systemd/transient/c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5.service: No such file or directory Nov 23 04:01:46 localhost podman[108123]: 2025-11-23 09:01:46.30054939 +0000 UTC m=+0.075176475 container cleanup c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, version=17.1.12, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 04:01:46 localhost podman[108123]: logrotate_crond Nov 23 04:01:46 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully. Nov 23 04:01:46 localhost systemd[1]: Stopped logrotate_crond container. Nov 23 04:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. 
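The repeated "Failed to open /run/systemd/transient/<id>.timer/.service: No such file or directory" messages refer to the per-container transient healthcheck units podman creates; once a container has been torn down, their unit files under /run/systemd/transient are gone, which is exactly what systemd reports here. A small sketch, assuming that directory layout, that lists whichever transient units still remain for a given container ID prefix; the example prefix is the collectd container ID seen earlier in this log.

# Sketch: list transient unit files still present for a container ID prefix.
# The directory is the one named in the systemd messages above.
from pathlib import Path

TRANSIENT_DIR = Path("/run/systemd/transient")

def remaining_transient_units(container_id_prefix):
    if not TRANSIENT_DIR.is_dir():
        return []
    return sorted(p.name for p in TRANSIENT_DIR.iterdir()
                  if p.name.startswith(container_id_prefix))

print(remaining_transient_units("82704bc9324f"))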
Nov 23 04:01:47 localhost python3.9[108229]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:01:47 localhost systemd[1]: var-lib-containers-storage-overlay-1c555bb6d05f3d1ef69b807da8d7b417226dccb2e4af3d5892e31108d455684e-merged.mount: Deactivated successfully. Nov 23 04:01:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5a1c5188735cd6ad30ba6593d8a56756e4c53c326ca2c5208f8f920df85d4c5-userdata-shm.mount: Deactivated successfully. Nov 23 04:01:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28962 DF PROTO=TCP SPT=52196 DPT=9882 SEQ=2223011036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6070230000000001030307) Nov 23 04:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 04:01:47 localhost systemd[1]: Reloading. Nov 23 04:01:47 localhost podman[108230]: 2025-11-23 09:01:47.164121706 +0000 UTC m=+0.090993712 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, release=1761123044, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1) Nov 23 04:01:47 localhost podman[108230]: 2025-11-23 09:01:47.214554989 +0000 UTC m=+0.141426955 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, container_name=ovn_controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 23 04:01:47 localhost podman[108230]: unhealthy Nov 23 04:01:47 localhost systemd-rc-local-generator[108294]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 23 04:01:47 localhost podman[108237]: 2025-11-23 09:01:47.218111448 +0000 UTC m=+0.127884787 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 23 04:01:47 localhost systemd-sysv-generator[108299]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 23 04:01:47 localhost podman[108237]: 2025-11-23 09:01:47.303538589 +0000 UTC m=+0.213311908 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 23 04:01:47 localhost podman[108237]: unhealthy Nov 23 04:01:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
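Both OVN healthchecks above report health_status=unhealthy, and the bare "unhealthy" line podman prints corresponds to a non-zero exit from the healthcheck run, which systemd then records as the transient unit failing with status=1 in the entries that follow. As a hedged sketch, the same check can be driven by hand with podman healthcheck run, whose exit code distinguishes healthy, unhealthy, and error cases; the container name ovn_controller is taken from the log.

# Sketch: run a container healthcheck by hand and map the exit code.
# podman healthcheck run is documented to exit 0 (healthy), 1 (unhealthy),
# and 125 when it cannot run the check at all.
import subprocess

def healthcheck(container="ovn_controller"):
    proc = subprocess.run(["podman", "healthcheck", "run", container],
                          capture_output=True, text=True)
    status = {0: "healthy", 1: "unhealthy", 125: "error"}.get(
        proc.returncode, f"unknown ({proc.returncode})")
    return status, proc.stdout.strip()

print(healthcheck())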
Nov 23 04:01:47 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:01:47 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 04:01:47 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:01:47 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 04:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. Nov 23 04:01:47 localhost systemd[1]: Stopping metrics_qdr container... Nov 23 04:01:47 localhost podman[108310]: 2025-11-23 09:01:47.594271689 +0000 UTC m=+0.088013491 container health_status 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:49:46Z) Nov 23 04:01:47 localhost kernel: qdrouterd[54809]: segfault at 0 ip 00007fe8cc19f7cb sp 00007fff103dacd0 error 4 in 
libc.so.6[7fe8cc13c000+175000] Nov 23 04:01:47 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9 Nov 23 04:01:47 localhost systemd[1]: Created slice Slice /system/systemd-coredump. Nov 23 04:01:47 localhost systemd[1]: Started Process Core Dump (PID 108351/UID 0). Nov 23 04:01:47 localhost podman[108310]: 2025-11-23 09:01:47.752604043 +0000 UTC m=+0.246345835 container exec_died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 04:01:47 localhost podman[108310]: unhealthy Nov 23 04:01:47 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:01:47 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Failed with result 'exit-code'. Nov 23 04:01:47 localhost systemd-coredump[108352]: Resource limits disable core dumping for process 54809 (qdrouterd). 
Nov 23 04:01:47 localhost systemd-coredump[108352]: Process 54809 (qdrouterd) of user 42465 dumped core. Nov 23 04:01:47 localhost systemd[1]: systemd-coredump@0-108351-0.service: Deactivated successfully. Nov 23 04:01:47 localhost systemd[1]: libpod-2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.scope: Deactivated successfully. Nov 23 04:01:47 localhost systemd[1]: libpod-2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.scope: Consumed 29.136s CPU time. Nov 23 04:01:47 localhost podman[108311]: 2025-11-23 09:01:47.795215635 +0000 UTC m=+0.282242439 container died 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Nov 23 04:01:47 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.timer: Deactivated successfully. Nov 23 04:01:47 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893. 
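The qdrouterd process (PID 54809) hit a segfault in libc.so.6 and systemd-coredump was invoked for it; the earlier "Resource limits disable core dumping" line suggests no core file may actually have been stored, so coredumpctl may only show crash metadata. A hedged sketch of checking from Python; the PID is the one in the log.

# Sketch: ask systemd-coredump what it recorded for the crashed qdrouterd.
# If RLIMIT_CORE was 0 (as the log suggests), there may be a journal entry
# for the crash without a stored core file.
import subprocess

def coredump_info(pid=54809):
    proc = subprocess.run(["coredumpctl", "info", str(pid)],
                          capture_output=True, text=True)
    return proc.stdout or proc.stderr

print(coredump_info())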
Nov 23 04:01:47 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Failed to open /run/systemd/transient/2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: No such file or directory Nov 23 04:01:47 localhost podman[108311]: 2025-11-23 09:01:47.838747216 +0000 UTC m=+0.325774060 container cleanup 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 23 04:01:47 localhost podman[108311]: metrics_qdr Nov 23 04:01:47 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.timer: Failed to open /run/systemd/transient/2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.timer: No such file or directory Nov 23 04:01:47 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Failed to open /run/systemd/transient/2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: No such file or directory Nov 23 04:01:47 localhost podman[108354]: 2025-11-23 09:01:47.870671678 +0000 UTC m=+0.063320150 container cleanup 
2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, build-date=2025-11-18T22:49:46Z) Nov 23 04:01:47 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a Nov 23 04:01:47 localhost systemd[1]: libpod-conmon-2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.scope: Deactivated successfully. 
Nov 23 04:01:47 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.timer: Failed to open /run/systemd/transient/2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.timer: No such file or directory Nov 23 04:01:47 localhost systemd[1]: 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: Failed to open /run/systemd/transient/2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893.service: No such file or directory Nov 23 04:01:47 localhost podman[108370]: 2025-11-23 09:01:47.975333431 +0000 UTC m=+0.073528426 container cleanup 2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4786e46dc7f8a50dc71419c2225b2915'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., container_name=metrics_qdr, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 23 04:01:47 localhost podman[108370]: metrics_qdr Nov 23 04:01:47 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'. Nov 23 04:01:47 localhost systemd[1]: Stopped metrics_qdr container. Nov 23 04:01:48 localhost systemd[1]: var-lib-containers-storage-overlay-ab41077e04905cd2ed47da0e447cf096133dba9a29e9494f8fcc86ce48952daa-merged.mount: Deactivated successfully. 
Nov 23 04:01:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893-userdata-shm.mount: Deactivated successfully. Nov 23 04:01:48 localhost python3.9[108474]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:01:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15952 DF PROTO=TCP SPT=41696 DPT=9882 SEQ=2845863647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6078100000000001030307) Nov 23 04:01:50 localhost python3.9[108567]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:01:51 localhost python3.9[108660]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:01:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:5b:6f:0e MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.106 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=37212 SEQ=3532432091 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Nov 23 04:01:52 localhost python3.9[108753]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:01:52 localhost systemd[1]: Reloading. Nov 23 04:01:52 localhost systemd-sysv-generator[108786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:01:52 localhost systemd-rc-local-generator[108783]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:01:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:01:52 localhost systemd[1]: Stopping nova_compute container... Nov 23 04:01:52 localhost systemd[1]: tmp-crun.yqgi1w.mount: Deactivated successfully. 
Nov 23 04:01:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2376 DF PROTO=TCP SPT=55952 DPT=9101 SEQ=3832928116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B608E0F0000000001030307) Nov 23 04:02:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35974 DF PROTO=TCP SPT=43196 DPT=9100 SEQ=291516519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60A34E0000000001030307) Nov 23 04:02:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35975 DF PROTO=TCP SPT=43196 DPT=9100 SEQ=291516519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60A7500000000001030307) Nov 23 04:02:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:02:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5370 writes, 735 syncs, 7.31 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 04:02:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32002 DF PROTO=TCP SPT=39204 DPT=9100 SEQ=3346742454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60B4100000000001030307) Nov 23 04:02:05 localhost sshd[108809]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:02:05 localhost sshd[108810]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:02:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:02:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5205 writes, 665 syncs, 7.83 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 04:02:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35977 DF PROTO=TCP SPT=43196 DPT=9100 SEQ=291516519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60BF0F0000000001030307) Nov 23 04:02:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 04:02:08 localhost podman[108811]: Error: container bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 is not running Nov 23 04:02:08 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Main process exited, code=exited, status=125/n/a Nov 23 04:02:08 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed with result 'exit-code'. Nov 23 04:02:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18746 DF PROTO=TCP SPT=53462 DPT=9105 SEQ=4013185026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60C8D00000000001030307) Nov 23 04:02:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18747 DF PROTO=TCP SPT=53462 DPT=9105 SEQ=4013185026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60D88F0000000001030307) Nov 23 04:02:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35978 DF PROTO=TCP SPT=43196 DPT=9100 SEQ=291516519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60E00F0000000001030307) Nov 23 04:02:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 04:02:16 localhost podman[108824]: 2025-11-23 09:02:16.155679542 +0000 UTC m=+0.091822108 container health_status c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12) Nov 23 04:02:16 localhost podman[108824]: 2025-11-23 09:02:16.545627127 +0000 UTC m=+0.481769673 container exec_died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git) Nov 23 04:02:16 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Deactivated successfully. 
Nov 23 04:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 04:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 04:02:17 localhost podman[108847]: 2025-11-23 09:02:17.905613617 +0000 UTC m=+0.085020060 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com) Nov 23 04:02:17 localhost podman[108847]: 2025-11-23 09:02:17.919233856 +0000 UTC m=+0.098640349 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 04:02:17 localhost podman[108847]: unhealthy Nov 23 04:02:17 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:02:17 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 04:02:18 localhost podman[108848]: 2025-11-23 09:02:18.012381293 +0000 UTC m=+0.189561447 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible) Nov 23 04:02:18 localhost podman[108848]: 2025-11-23 09:02:18.054130679 +0000 UTC m=+0.231310853 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 23 04:02:18 localhost podman[108848]: unhealthy Nov 23 04:02:18 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:02:18 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 04:02:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28967 DF PROTO=TCP SPT=52196 DPT=9882 SEQ=2223011036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60EC100000000001030307) Nov 23 04:02:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18748 DF PROTO=TCP SPT=53462 DPT=9105 SEQ=4013185026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B60F8100000000001030307) Nov 23 04:02:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28126 DF PROTO=TCP SPT=34338 DPT=9102 SEQ=3059584192 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B610A100000000001030307) Nov 23 04:02:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29298 DF PROTO=TCP SPT=35284 DPT=9100 SEQ=1232478828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61187F0000000001030307) Nov 23 04:02:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29299 DF PROTO=TCP SPT=35284 DPT=9100 SEQ=1232478828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B611C8F0000000001030307) Nov 23 04:02:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36071 DF PROTO=TCP SPT=57554 DPT=9100 SEQ=2947091217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61280F0000000001030307) Nov 23 04:02:34 localhost podman[108795]: time="2025-11-23T09:02:34Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Nov 23 04:02:34 localhost systemd[1]: libpod-bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.scope: Deactivated successfully. Nov 23 04:02:34 localhost systemd[1]: libpod-bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.scope: Consumed 28.469s CPU time. 
Nov 23 04:02:34 localhost podman[108795]: 2025-11-23 09:02:34.694948686 +0000 UTC m=+42.106102442 container died bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, version=17.1.12, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 04:02:34 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.timer: Deactivated successfully. Nov 23 04:02:34 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5. 
Nov 23 04:02:34 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed to open /run/systemd/transient/bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: No such file or directory Nov 23 04:02:34 localhost systemd[1]: var-lib-containers-storage-overlay-0d4c91e21a4f422a0f3d35f00c131b332e5afd08cf0cad9281d59f1b3acbd4cf-merged.mount: Deactivated successfully. Nov 23 04:02:34 localhost podman[108795]: 2025-11-23 09:02:34.765101936 +0000 UTC m=+42.176255632 container cleanup bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 23 04:02:34 localhost podman[108795]: nova_compute Nov 23 04:02:34 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.timer: Failed to open /run/systemd/transient/bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.timer: No such file or directory Nov 23 04:02:34 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed to open /run/systemd/transient/bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: No such file or directory Nov 23 04:02:34 localhost podman[108891]: 2025-11-23 09:02:34.787460664 +0000 UTC m=+0.085225845 container cleanup bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-type=git) Nov 23 04:02:34 localhost systemd[1]: libpod-conmon-bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.scope: Deactivated successfully. Nov 23 04:02:34 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.timer: Failed to open /run/systemd/transient/bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.timer: No such file or directory Nov 23 04:02:34 localhost systemd[1]: bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: Failed to open /run/systemd/transient/bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5.service: No such file or directory Nov 23 04:02:34 localhost podman[108904]: 2025-11-23 09:02:34.889144304 +0000 UTC m=+0.070520722 container cleanup bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64) Nov 23 04:02:34 localhost podman[108904]: nova_compute Nov 23 04:02:34 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully. Nov 23 04:02:34 localhost systemd[1]: Stopped nova_compute container. Nov 23 04:02:34 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.195s CPU time, no IO. Nov 23 04:02:35 localhost python3.9[109007]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:02:35 localhost systemd[1]: Reloading. Nov 23 04:02:35 localhost systemd-rc-local-generator[109033]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:02:35 localhost systemd-sysv-generator[109039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:02:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:02:36 localhost systemd[1]: Stopping nova_migration_target container... Nov 23 04:02:36 localhost systemd[1]: libpod-c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.scope: Deactivated successfully. Nov 23 04:02:36 localhost systemd[1]: libpod-c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.scope: Consumed 35.248s CPU time. 
Nov 23 04:02:36 localhost podman[109048]: 2025-11-23 09:02:36.155398307 +0000 UTC m=+0.072178913 container died c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64) Nov 23 04:02:36 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.timer: Deactivated successfully. Nov 23 04:02:36 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30. Nov 23 04:02:36 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Failed to open /run/systemd/transient/c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: No such file or directory Nov 23 04:02:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30-userdata-shm.mount: Deactivated successfully. Nov 23 04:02:36 localhost systemd[1]: var-lib-containers-storage-overlay-11e303e2a487b3de65e20e02c06253184ba4537ed64f53b2bdbdf3a08756ea60-merged.mount: Deactivated successfully. 
Nov 23 04:02:36 localhost podman[109048]: 2025-11-23 09:02:36.206697127 +0000 UTC m=+0.123477653 container cleanup c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container) Nov 23 04:02:36 localhost podman[109048]: nova_migration_target Nov 23 04:02:36 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.timer: Failed to open /run/systemd/transient/c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.timer: No such file or directory Nov 23 04:02:36 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Failed to open /run/systemd/transient/c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: No such file or directory Nov 23 04:02:36 localhost podman[109061]: 2025-11-23 09:02:36.246514113 +0000 UTC m=+0.075980630 container cleanup c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., 
tcib_managed=true, container_name=nova_migration_target, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Nov 23 04:02:36 localhost systemd[1]: libpod-conmon-c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.scope: Deactivated successfully. 
Nov 23 04:02:36 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.timer: Failed to open /run/systemd/transient/c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.timer: No such file or directory Nov 23 04:02:36 localhost systemd[1]: c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: Failed to open /run/systemd/transient/c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30.service: No such file or directory Nov 23 04:02:36 localhost podman[109074]: 2025-11-23 09:02:36.353364992 +0000 UTC m=+0.075651430 container cleanup c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 04:02:36 localhost podman[109074]: nova_migration_target Nov 23 04:02:36 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully. Nov 23 04:02:36 localhost systemd[1]: Stopped nova_migration_target container. 
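The stop sequence for nova_migration_target ends above with systemd reporting tripleo_nova_migration_target.service deactivated; the "Failed to open /run/systemd/transient/<id>.service: No such file or directory" messages appear to be cleanup of the container's transient healthcheck timer/service units that systemd has already removed. A minimal sketch of verifying the stop from the host, assuming the podman and systemctl CLIs seen in the journal are on PATH (container and service names are taken from the log above):

    #!/usr/bin/env python3
    """Minimal sketch: confirm a TripleO-managed podman container really stopped.

    Assumes the podman and systemctl binaries from the journal are on PATH; the
    container and service names below are the ones that appear in the log above.
    """
    import subprocess

    CONTAINER = "nova_migration_target"
    SERVICE = "tripleo_nova_migration_target.service"

    def run(cmd):
        # check=False because a non-zero exit (e.g. "inactive") is an expected
        # answer here, not an error.
        return subprocess.run(cmd, capture_output=True, text=True, check=False)

    # systemd's view: "inactive" is expected after "Stopped nova_migration_target container."
    print(SERVICE, run(["systemctl", "is-active", SERVICE]).stdout.strip())

    # podman's view: list the container (including stopped ones) and its status.
    ps = run(["podman", "ps", "--all", "--filter", f"name={CONTAINER}",
              "--format", "{{.Names}} {{.Status}}"])
    print(ps.stdout.strip() or f"no container named {CONTAINER}")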
Nov 23 04:02:37 localhost python3.9[109179]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:02:37 localhost systemd[1]: Reloading. Nov 23 04:02:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29301 DF PROTO=TCP SPT=35284 DPT=9100 SEQ=1232478828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61344F0000000001030307) Nov 23 04:02:37 localhost systemd-rc-local-generator[109204]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:02:37 localhost systemd-sysv-generator[109211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:02:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:02:37 localhost systemd[1]: Stopping nova_virtlogd_wrapper container... Nov 23 04:02:37 localhost systemd[1]: libpod-376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926.scope: Deactivated successfully. Nov 23 04:02:37 localhost podman[109220]: 2025-11-23 09:02:37.685594286 +0000 UTC m=+0.081011494 container died 376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 23 04:02:37 localhost systemd[1]: tmp-crun.QOFGbq.mount: Deactivated successfully. Nov 23 04:02:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926-userdata-shm.mount: Deactivated successfully. Nov 23 04:02:37 localhost podman[109220]: 2025-11-23 09:02:37.733603585 +0000 UTC m=+0.129020753 container cleanup 376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=nova_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 23 04:02:37 localhost podman[109220]: nova_virtlogd_wrapper Nov 23 04:02:37 localhost podman[109232]: 2025-11-23 09:02:37.761738511 +0000 UTC m=+0.072066630 container cleanup 376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, config_id=tripleo_step3, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, release=1761123044, vendor=Red Hat, Inc.) Nov 23 04:02:38 localhost systemd[1]: var-lib-containers-storage-overlay-6c7788a714f783bcd626d46acd079248804e4957a18f46d9597110fed4f235e9-merged.mount: Deactivated successfully. Nov 23 04:02:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3192 DF PROTO=TCP SPT=58218 DPT=9105 SEQ=1271984325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B613E0F0000000001030307) Nov 23 04:02:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3193 DF PROTO=TCP SPT=58218 DPT=9105 SEQ=1271984325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B614DCF0000000001030307) Nov 23 04:02:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44614 DF PROTO=TCP SPT=52966 DPT=9882 SEQ=2178160848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B615A830000000001030307) Nov 23 04:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 04:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 04:02:48 localhost podman[109324]: 2025-11-23 09:02:48.40832649 +0000 UTC m=+0.090369443 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, io.buildah.version=1.41.4) Nov 23 04:02:48 localhost podman[109324]: 2025-11-23 09:02:48.456440971 +0000 UTC m=+0.138483944 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, 
url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 23 04:02:48 localhost podman[109324]: unhealthy Nov 23 04:02:48 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:02:48 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
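Just above, the transient healthcheck unit for ovn_metadata_agent records health_status=unhealthy and exits with status=1/FAILURE; the same pattern repeats for ovn_controller immediately below. The check being run is the one listed in config_data ('test': '/openstack/healthcheck'). A minimal sketch of re-running that check by hand with the podman CLI seen in the journal, mapping the exit code back to the "unhealthy" / status=1 messages (the container name is an argument; the default is taken from the log):

    #!/usr/bin/env python3
    """Minimal sketch: re-run a podman healthcheck by hand.

    `podman healthcheck run` executes the container's configured test command
    (here '/openstack/healthcheck', per the config_data in the journal) and
    exits 0 when healthy, non-zero otherwise -- which systemd records above as
    "Main process exited, code=exited, status=1/FAILURE".
    """
    import subprocess
    import sys

    container = sys.argv[1] if len(sys.argv) > 1 else "ovn_metadata_agent"

    result = subprocess.run(["podman", "healthcheck", "run", container],
                            capture_output=True, text=True)
    verdict = "healthy" if result.returncode == 0 else "unhealthy"
    print(f"{container}: {verdict} (exit code {result.returncode})")
    for stream in (result.stdout, result.stderr):
        if stream.strip():
            print(stream.strip())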
Nov 23 04:02:48 localhost podman[109323]: 2025-11-23 09:02:48.458922937 +0000 UTC m=+0.143374854 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4) Nov 23 04:02:48 localhost podman[109323]: 2025-11-23 09:02:48.543566923 +0000 UTC m=+0.228018840 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc.) Nov 23 04:02:48 localhost podman[109323]: unhealthy Nov 23 04:02:48 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:02:48 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 04:02:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41242 DF PROTO=TCP SPT=47268 DPT=9882 SEQ=1464327635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61620F0000000001030307) Nov 23 04:02:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3194 DF PROTO=TCP SPT=58218 DPT=9105 SEQ=1271984325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B616E100000000001030307) Nov 23 04:02:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10899 DF PROTO=TCP SPT=44358 DPT=9102 SEQ=3620269698 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B617F0F0000000001030307) Nov 23 04:03:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60600 DF PROTO=TCP SPT=54086 DPT=9100 SEQ=2321272305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B618DAD0000000001030307) Nov 23 04:03:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60601 DF PROTO=TCP SPT=54086 DPT=9100 SEQ=2321272305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6191CF0000000001030307) Nov 23 04:03:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35980 DF PROTO=TCP SPT=43196 DPT=9100 SEQ=291516519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B619E0F0000000001030307) Nov 23 04:03:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60603 DF PROTO=TCP SPT=54086 DPT=9100 SEQ=2321272305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61A98F0000000001030307) Nov 23 04:03:09 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11704 DF PROTO=TCP SPT=42818 DPT=9105 SEQ=660478516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61B34F0000000001030307) Nov 23 04:03:11 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 23 04:03:11 localhost recover_tripleo_nova_virtqemud[109364]: 62093 Nov 23 04:03:11 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 23 04:03:11 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 23 04:03:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11705 DF PROTO=TCP SPT=42818 DPT=9105 SEQ=660478516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61C30F0000000001030307) Nov 23 04:03:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38587 DF PROTO=TCP SPT=36114 DPT=9882 SEQ=2968441828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61CFB30000000001030307) Nov 23 04:03:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 04:03:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 04:03:18 localhost podman[109366]: 2025-11-23 09:03:18.930317942 +0000 UTC m=+0.081078448 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.buildah.version=1.41.4) Nov 23 04:03:18 localhost podman[109366]: 2025-11-23 09:03:18.946586333 +0000 UTC m=+0.097346899 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044) Nov 23 04:03:18 localhost podman[109366]: unhealthy Nov 23 04:03:18 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:03:18 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. Nov 23 04:03:19 localhost podman[109365]: 2025-11-23 09:03:19.023266694 +0000 UTC m=+0.175876516 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, container_name=ovn_controller, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 04:03:19 localhost podman[109365]: 2025-11-23 09:03:19.062109289 +0000 UTC m=+0.214719181 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=ovn_controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Nov 23 04:03:19 localhost podman[109365]: unhealthy Nov 23 04:03:19 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:03:19 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. 
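For ovn_controller the configured check is '/openstack/healthcheck 6642', and 6642 is conventionally the OVN southbound database TCP port. A rough stand-in for that check (not the actual healthcheck script, and the target address is an assumption) is a plain TCP reachability probe, which at least distinguishes "southbound DB unreachable" from "healthcheck wrapper itself failing":

    #!/usr/bin/env python3
    """Minimal sketch: rough stand-in for the ovn_controller healthcheck.

    The container's config_data sets 'test': '/openstack/healthcheck 6642';
    6642 is conventionally the OVN southbound DB port. This is NOT the real
    healthcheck script -- it only probes TCP reachability of that port.
    SB_HOST is an assumption; point it at the actual OVN SB DB address.
    """
    import socket

    SB_HOST = "127.0.0.1"   # assumption: replace with the real OVN SB DB address
    SB_PORT = 6642          # from the healthcheck test argument in the journal

    try:
        with socket.create_connection((SB_HOST, SB_PORT), timeout=3):
            print(f"TCP connect to {SB_HOST}:{SB_PORT} succeeded")
    except OSError as exc:
        # An unreachable SB DB is one plausible reason the check keeps
        # reporting unhealthy while services on this node are being stopped.
        print(f"TCP connect to {SB_HOST}:{SB_PORT} failed: {exc}")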
Nov 23 04:03:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45854 DF PROTO=TCP SPT=48706 DPT=9102 SEQ=3970377030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61D8940000000001030307) Nov 23 04:03:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16137 DF PROTO=TCP SPT=55260 DPT=9101 SEQ=3476708676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61E40F0000000001030307) Nov 23 04:03:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45857 DF PROTO=TCP SPT=48706 DPT=9102 SEQ=3970377030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B61F44F0000000001030307) Nov 23 04:03:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27569 DF PROTO=TCP SPT=40280 DPT=9100 SEQ=1997505800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6202DE0000000001030307) Nov 23 04:03:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27570 DF PROTO=TCP SPT=40280 DPT=9100 SEQ=1997505800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6206CF0000000001030307) Nov 23 04:03:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29304 DF PROTO=TCP SPT=35284 DPT=9100 SEQ=1232478828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6212100000000001030307) Nov 23 04:03:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27572 DF PROTO=TCP SPT=40280 DPT=9100 SEQ=1997505800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B621E900000000001030307) Nov 23 04:03:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58765 DF PROTO=TCP SPT=56814 DPT=9105 SEQ=3855484907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62288F0000000001030307) Nov 23 04:03:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58766 DF PROTO=TCP SPT=56814 DPT=9105 SEQ=3855484907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6238500000000001030307) Nov 23 04:03:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36057 DF PROTO=TCP SPT=59506 DPT=9882 SEQ=2937592407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6244E30000000001030307) Nov 23 04:03:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=38592 DF PROTO=TCP SPT=36114 DPT=9882 SEQ=2968441828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B624C100000000001030307) Nov 23 04:03:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. Nov 23 04:03:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 04:03:49 localhost podman[109483]: 2025-11-23 09:03:49.182717903 +0000 UTC m=+0.087065831 container health_status f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Nov 23 04:03:49 localhost podman[109484]: 2025-11-23 09:03:49.238614484 +0000 UTC 
m=+0.139369461 container health_status e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) 
Nov 23 04:03:49 localhost podman[109484]: 2025-11-23 09:03:49.256521776 +0000 UTC m=+0.157276803 container exec_died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 04:03:49 localhost podman[109483]: 2025-11-23 09:03:49.254281607 +0000 UTC m=+0.158629515 container exec_died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 23 04:03:49 localhost podman[109484]: unhealthy Nov 23 04:03:49 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:03:49 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed with result 'exit-code'. Nov 23 04:03:49 localhost podman[109483]: unhealthy Nov 23 04:03:49 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:03:49 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed with result 'exit-code'. 
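The `health_status=unhealthy` events and the `<container-id>.service: Main process exited, code=exited, status=1/FAILURE` messages above are two views of the same check: podman periodically runs the command declared in `config_data` (`/openstack/healthcheck 6642` for ovn_controller, `/openstack/healthcheck` for ovn_metadata_agent) inside the container, and the transient systemd unit driving the check fails when that command exits non-zero. A minimal sketch to re-check both containers by hand, assuming `podman` is on PATH and using the container names from the log (the health key is `Health` or `Healthcheck` depending on the podman version):

```python
#!/usr/bin/env python3
"""Re-check the two containers flagged unhealthy above (sketch only)."""
import json
import subprocess

CONTAINERS = ["ovn_controller", "ovn_metadata_agent"]  # names from the log

for name in CONTAINERS:
    # `podman inspect` prints a JSON array with one object per container.
    out = subprocess.run(
        ["podman", "inspect", name],
        capture_output=True, text=True, check=True,
    )
    state = json.loads(out.stdout)[0]["State"]
    # Key name varies across podman versions.
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(f"{name}: status={state.get('Status')} health={health.get('Status')}")

    # Force one healthcheck run; the exit code is non-zero when unhealthy.
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(f"{name}: healthcheck exit code {rc}")
```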
Nov 23 04:03:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58767 DF PROTO=TCP SPT=56814 DPT=9105 SEQ=3855484907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62580F0000000001030307) Nov 23 04:03:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47461 DF PROTO=TCP SPT=41538 DPT=9102 SEQ=431229134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62698F0000000001030307) Nov 23 04:04:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27404 DF PROTO=TCP SPT=37044 DPT=9100 SEQ=2357792630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62780E0000000001030307) Nov 23 04:04:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27405 DF PROTO=TCP SPT=37044 DPT=9100 SEQ=2357792630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B627C0F0000000001030307) Nov 23 04:04:01 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing. Nov 23 04:04:01 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61304 (conmon) with signal SIGKILL. Nov 23 04:04:01 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL Nov 23 04:04:01 localhost systemd[1]: libpod-conmon-376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926.scope: Deactivated successfully. 
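The kernel `DROPPING:` entries record SYN packets arriving on br-ex from 192.168.122.10 toward ports 9100, 9102 and 9105 being dropped; the prefix suggests a firewall LOG rule on the host, though the rule itself is not part of this journal. A small parser, matching only the field layout visible in these lines, to tally the dropped flows from a saved copy of the journal:

```python
import re
import sys
from collections import Counter

# Matches only the field layout visible in the kernel lines above.
DROP_RE = re.compile(
    r"DROPPING: IN=(?P<iface>\S+)"
    r".*? SRC=(?P<src>[\d.]+) DST=(?P<dst>[\d.]+)"
    r".*? PROTO=(?P<proto>\S+) SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
)

def summarize(lines):
    """Count dropped flows per (src, dst, dpt); a physical line may carry
    several DROPPING entries, so scan with finditer."""
    flows = Counter()
    for line in lines:
        for m in DROP_RE.finditer(line):
            flows[(m["src"], m["dst"], m["dpt"])] += 1
    return flows

if __name__ == "__main__":
    for (src, dst, dpt), n in summarize(sys.stdin).most_common():
        print(f"{n:4d}  {src} -> {dst} dpt={dpt}")
```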
Nov 23 04:04:01 localhost podman[109534]: error opening file `/run/crun/376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926/status`: No such file or directory Nov 23 04:04:01 localhost podman[109523]: 2025-11-23 09:04:01.884342741 +0000 UTC m=+0.069701856 container cleanup 376631d231b7a9b765ed159b4aff592505225e967a832b4c78eaa04286699926 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, name=rhosp17/openstack-nova-libvirt, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtlogd_wrapper, config_id=tripleo_step3, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 23 04:04:01 localhost podman[109523]: 
nova_virtlogd_wrapper Nov 23 04:04:01 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'. Nov 23 04:04:01 localhost systemd[1]: Stopped nova_virtlogd_wrapper container. Nov 23 04:04:02 localhost python3.9[109628]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:04:02 localhost systemd[1]: Reloading. Nov 23 04:04:02 localhost systemd-rc-local-generator[109657]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:04:02 localhost systemd-sysv-generator[109660]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:04:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:04:03 localhost systemd[1]: Stopping nova_virtnodedevd container... Nov 23 04:04:03 localhost systemd[1]: libpod-5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2.scope: Deactivated successfully. Nov 23 04:04:03 localhost systemd[1]: libpod-5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2.scope: Consumed 1.486s CPU time. Nov 23 04:04:03 localhost podman[109669]: 2025-11-23 09:04:03.135026936 +0000 UTC m=+0.075980770 container died 5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com) Nov 23 04:04:03 localhost systemd[1]: tmp-crun.r51YUk.mount: Deactivated successfully. Nov 23 04:04:03 localhost podman[109669]: 2025-11-23 09:04:03.179190686 +0000 UTC m=+0.120144470 container cleanup 5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtnodedevd, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 23 04:04:03 localhost podman[109669]: nova_virtnodedevd Nov 23 04:04:03 localhost podman[109684]: 2025-11-23 09:04:03.214560304 +0000 UTC m=+0.065672332 container cleanup 5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, vcs-type=git, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 23 04:04:03 localhost systemd[1]: libpod-conmon-5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2.scope: Deactivated successfully. Nov 23 04:04:03 localhost podman[109712]: error opening file `/run/crun/5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2/status`: No such file or directory Nov 23 04:04:03 localhost podman[109700]: 2025-11-23 09:04:03.311763977 +0000 UTC m=+0.067989214 container cleanup 5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container) Nov 23 04:04:03 localhost podman[109700]: nova_virtnodedevd Nov 23 04:04:03 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully. Nov 23 04:04:03 localhost systemd[1]: Stopped nova_virtnodedevd container. Nov 23 04:04:04 localhost systemd[1]: var-lib-containers-storage-overlay-3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb-merged.mount: Deactivated successfully. Nov 23 04:04:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d7b7aabccc0d98ac1bbcd5c0ef60e8fb1d6ddf28e1fc5dff6aaa70f591e4ef2-userdata-shm.mount: Deactivated successfully. Nov 23 04:04:04 localhost python3.9[109805]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:04:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60606 DF PROTO=TCP SPT=54086 DPT=9100 SEQ=2321272305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62880F0000000001030307) Nov 23 04:04:04 localhost systemd[1]: Reloading. Nov 23 04:04:04 localhost systemd-sysv-generator[109836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:04:04 localhost systemd-rc-local-generator[109831]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:04:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:04:04 localhost systemd[1]: Stopping nova_virtproxyd container... Nov 23 04:04:04 localhost systemd[1]: libpod-488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70.scope: Deactivated successfully. 
Nov 23 04:04:04 localhost podman[109845]: 2025-11-23 09:04:04.695674023 +0000 UTC m=+0.086245957 container died 488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtproxyd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, release=1761123044, build-date=2025-11-19T00:35:22Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 04:04:04 localhost podman[109845]: 2025-11-23 09:04:04.737267103 +0000 UTC m=+0.127839047 container cleanup 488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, 
name=nova_virtproxyd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtproxyd, architecture=x86_64, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git) Nov 23 04:04:04 localhost podman[109845]: nova_virtproxyd Nov 23 04:04:04 localhost podman[109858]: 2025-11-23 09:04:04.788230452 +0000 UTC m=+0.074546166 container cleanup 488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtproxyd) Nov 23 04:04:04 localhost systemd[1]: libpod-conmon-488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70.scope: Deactivated successfully. 
Nov 23 04:04:04 localhost podman[109887]: error opening file `/run/crun/488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70/status`: No such file or directory Nov 23 04:04:04 localhost podman[109875]: 2025-11-23 09:04:04.893582586 +0000 UTC m=+0.072697589 container cleanup 488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtproxyd, release=1761123044, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 04:04:04 localhost podman[109875]: nova_virtproxyd Nov 23 04:04:04 
localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Nov 23 04:04:04 localhost systemd[1]: Stopped nova_virtproxyd container. Nov 23 04:04:05 localhost systemd[1]: var-lib-containers-storage-overlay-bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f-merged.mount: Deactivated successfully. Nov 23 04:04:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-488b8170845f59de63384fade2a67c676fda84da43aa64baadb668ccfd838e70-userdata-shm.mount: Deactivated successfully. Nov 23 04:04:05 localhost python3.9[109982]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:04:05 localhost systemd[1]: Reloading. Nov 23 04:04:05 localhost systemd-rc-local-generator[110005]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:04:05 localhost systemd-sysv-generator[110012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:04:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:04:06 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Nov 23 04:04:06 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Nov 23 04:04:06 localhost systemd[1]: Stopping nova_virtqemud container... Nov 23 04:04:06 localhost systemd[1]: libpod-dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0.scope: Deactivated successfully. Nov 23 04:04:06 localhost systemd[1]: libpod-dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0.scope: Consumed 2.258s CPU time. 
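Each container shutdown above follows the same sequence: the libpod scope is deactivated, podman emits a `container died` event, one or more `container cleanup` events follow, and systemd finally reports the tripleo_* service stopped. A short sketch that reconstructs that sequence per container name from the podman event lines, using only the event format seen in this journal:

```python
import re
import sys
from collections import defaultdict

# Matches podman event lines such as:
#   "... container died <64-hex-id> (image=..., name=nova_virtqemud, ...)"
EVENT_RE = re.compile(
    r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
)

def lifecycle(lines):
    """Group podman container events (health_status, exec_died, died,
    cleanup) by container name, in the order they appear."""
    events = defaultdict(list)
    for line in lines:
        for m in EVENT_RE.finditer(line):
            events[m["name"]].append(m["event"])
    return events

if __name__ == "__main__":
    for name, seq in lifecycle(sys.stdin).items():
        print(f"{name}: {' -> '.join(seq)}")
```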
Nov 23 04:04:06 localhost podman[110023]: 2025-11-23 09:04:06.149374217 +0000 UTC m=+0.078130126 container died dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, distribution-scope=public, config_id=tripleo_step3, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Nov 23 04:04:06 localhost systemd[1]: tmp-crun.TTGS76.mount: Deactivated successfully. 
Nov 23 04:04:06 localhost podman[110023]: 2025-11-23 09:04:06.189672447 +0000 UTC m=+0.118428356 container cleanup dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, config_id=tripleo_step3, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 23 04:04:06 localhost podman[110023]: nova_virtqemud Nov 23 04:04:06 localhost podman[110039]: 2025-11-23 09:04:06.242410011 +0000 UTC m=+0.076085893 container cleanup 
dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtqemud, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container) Nov 23 04:04:06 localhost systemd[1]: libpod-conmon-dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0.scope: Deactivated successfully. 
Nov 23 04:04:06 localhost podman[110066]: error opening file `/run/crun/dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0/status`: No such file or directory Nov 23 04:04:06 localhost podman[110053]: 2025-11-23 09:04:06.351272973 +0000 UTC m=+0.074404312 container cleanup dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, version=17.1.12, config_id=tripleo_step3, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtqemud, url=https://www.redhat.com) Nov 23 04:04:06 localhost 
podman[110053]: nova_virtqemud Nov 23 04:04:06 localhost systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully. Nov 23 04:04:06 localhost systemd[1]: Stopped nova_virtqemud container. Nov 23 04:04:07 localhost systemd[1]: var-lib-containers-storage-overlay-152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e-merged.mount: Deactivated successfully. Nov 23 04:04:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc52681b099784685ed09327fd7b32bf609f0729468596a8cdc7c3c5303316f0-userdata-shm.mount: Deactivated successfully. Nov 23 04:04:07 localhost python3.9[110159]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:04:07 localhost systemd[1]: Reloading. Nov 23 04:04:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27407 DF PROTO=TCP SPT=37044 DPT=9100 SEQ=2357792630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6293CF0000000001030307) Nov 23 04:04:07 localhost systemd-sysv-generator[110192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:04:07 localhost systemd-rc-local-generator[110185]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:04:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:04:08 localhost python3.9[110289]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:04:08 localhost systemd[1]: Reloading. Nov 23 04:04:08 localhost systemd-rc-local-generator[110315]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:04:08 localhost systemd-sysv-generator[110318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:04:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:04:08 localhost systemd[1]: Stopping nova_virtsecretd container... Nov 23 04:04:08 localhost systemd[1]: libpod-e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84.scope: Deactivated successfully. 
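The `ansible-ansible.builtin.systemd_service Invoked with enabled=False ... state=stopped` entries show tripleo_ansible walking through the nova virt* units one by one (virtnodedevd, virtproxyd, virtqemud and its recover timer/service, virtsecretd); the `Reloading.` lines and generator warnings that follow each call appear to be systemd re-reading its unit files as each unit is disabled. Outside of the playbook, the same per-unit effect is a stop plus disable; a sketch assuming `systemctl` on PATH and using only the unit names named in these entries:

```python
"""Manual equivalent of the ansible.builtin.systemd_service calls above
(enabled=False, state=stopped). A sketch only; on the real node this is
driven by tripleo_ansible, not by this script."""
import subprocess

UNITS = [  # unit names taken from the invocations in this journal
    "tripleo_nova_virtnodedevd.service",
    "tripleo_nova_virtproxyd.service",
    "tripleo_nova_virtqemud.service",
    "tripleo_nova_virtqemud_recover.service",
    "tripleo_nova_virtsecretd.service",
]

for unit in UNITS:
    # `systemctl disable --now` disables and stops in one call, matching
    # enabled=False / state=stopped in the module invocation.
    subprocess.run(["systemctl", "disable", "--now", unit], check=False)
    state = subprocess.run(
        ["systemctl", "is-active", unit], capture_output=True, text=True
    ).stdout.strip()
    print(f"{unit}: {state}")
```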
Nov 23 04:04:08 localhost podman[110330]: 2025-11-23 09:04:08.80358156 +0000 UTC m=+0.087197376 container died e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, container_name=nova_virtsecretd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true) Nov 23 04:04:08 localhost systemd[1]: tmp-crun.oh18Am.mount: Deactivated successfully. 
Nov 23 04:04:08 localhost podman[110330]: 2025-11-23 09:04:08.849732071 +0000 UTC m=+0.133347877 container cleanup e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, container_name=nova_virtsecretd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, version=17.1.12, url=https://www.redhat.com) Nov 23 04:04:08 localhost podman[110330]: nova_virtsecretd Nov 23 04:04:08 localhost podman[110343]: 2025-11-23 09:04:08.897496662 +0000 UTC m=+0.079019824 container cleanup e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, container_name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Nov 23 04:04:08 localhost systemd[1]: libpod-conmon-e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84.scope: Deactivated successfully. 
Nov 23 04:04:09 localhost podman[110374]: error opening file `/run/crun/e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84/status`: No such file or directory Nov 23 04:04:09 localhost podman[110360]: 2025-11-23 09:04:09.009810559 +0000 UTC m=+0.079164658 container cleanup e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, tcib_managed=true, container_name=nova_virtsecretd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 23 04:04:09 localhost podman[110360]: nova_virtsecretd Nov 23 04:04:09 
localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Nov 23 04:04:09 localhost systemd[1]: Stopped nova_virtsecretd container. Nov 23 04:04:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43396 DF PROTO=TCP SPT=36372 DPT=9105 SEQ=3258578085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B629D900000000001030307) Nov 23 04:04:09 localhost systemd[1]: var-lib-containers-storage-overlay-1e6bca165dc068117408960944f8051008cf6e2ead3af3bab76d84f596641295-merged.mount: Deactivated successfully. Nov 23 04:04:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e475c8f973c2b3dcaf28c5c432a75d191ea8288cbf71705e00702a1b348fda84-userdata-shm.mount: Deactivated successfully. Nov 23 04:04:09 localhost python3.9[110468]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:04:09 localhost systemd-rc-local-generator[110492]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:04:09 localhost systemd-sysv-generator[110496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:04:09 localhost systemd[1]: Reloading. Nov 23 04:04:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:04:10 localhost systemd[1]: Stopping nova_virtstoraged container... Nov 23 04:04:10 localhost systemd[1]: libpod-f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a.scope: Deactivated successfully. 
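(Annotation: the repeated "ansible-ansible.builtin.systemd_service Invoked with enabled=False ... state=stopped" entries above record the Ansible module disabling and stopping each tripleo_* unit, which is what triggers the "Reloading." and generator messages that follow each call. Below is a minimal illustrative sketch of the equivalent operations using plain systemctl; the script is not part of the log and is not the Ansible module itself, and only the unit names are taken from the journal entries above.)

    import subprocess

    def disable_and_stop(unit: str) -> None:
        # "enabled=False" corresponds to disabling the unit,
        # "state=stopped" to stopping it.
        subprocess.run(["systemctl", "disable", unit], check=True)
        subprocess.run(["systemctl", "stop", unit], check=True)

    # Unit names as they appear in the journal entries above.
    for unit in ("tripleo_nova_virtqemud_recover.service",
                 "tripleo_nova_virtsecretd.service",
                 "tripleo_nova_virtstoraged.service"):
        disable_and_stop(unit)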
Nov 23 04:04:10 localhost podman[110508]: 2025-11-23 09:04:10.283635662 +0000 UTC m=+0.082253230 container died f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=nova_virtstoraged, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4) Nov 23 04:04:10 localhost podman[110508]: 2025-11-23 09:04:10.321558716 +0000 UTC m=+0.120176284 container cleanup f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, 
name=nova_virtstoraged, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, container_name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, architecture=x86_64) Nov 23 04:04:10 localhost podman[110508]: nova_virtstoraged Nov 23 04:04:10 localhost podman[110522]: 2025-11-23 09:04:10.367688246 +0000 UTC m=+0.073840859 container cleanup f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, build-date=2025-11-19T00:35:22Z, container_name=nova_virtstoraged, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3) Nov 23 04:04:10 localhost systemd[1]: libpod-conmon-f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a.scope: Deactivated successfully. 
Nov 23 04:04:10 localhost podman[110549]: error opening file `/run/crun/f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a/status`: No such file or directory Nov 23 04:04:10 localhost podman[110538]: 2025-11-23 09:04:10.472470521 +0000 UTC m=+0.073055604 container cleanup f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, release=1761123044, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtstoraged, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b43218eec4380850a20e0a337fdcf6cf'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}) Nov 23 04:04:10 localhost podman[110538]: nova_virtstoraged Nov 23 04:04:10 
localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully. Nov 23 04:04:10 localhost systemd[1]: Stopped nova_virtstoraged container. Nov 23 04:04:10 localhost systemd[1]: var-lib-containers-storage-overlay-905224559a7d60475da0e4c3f30d7498ad95ab3c6c514578f0762871cf74312f-merged.mount: Deactivated successfully. Nov 23 04:04:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f7f342695ef671a45e2a945017cb65f35bee7bea5c9dac6aa7c9fb96c91f222a-userdata-shm.mount: Deactivated successfully. Nov 23 04:04:11 localhost python3.9[110642]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:04:11 localhost systemd[1]: Reloading. Nov 23 04:04:11 localhost systemd-sysv-generator[110672]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:04:11 localhost systemd-rc-local-generator[110667]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:04:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:04:11 localhost systemd[1]: Stopping ovn_controller container... Nov 23 04:04:11 localhost systemd[1]: libpod-e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.scope: Deactivated successfully. Nov 23 04:04:11 localhost systemd[1]: libpod-e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.scope: Consumed 2.668s CPU time. 
Nov 23 04:04:11 localhost podman[110682]: 2025-11-23 09:04:11.669665106 +0000 UTC m=+0.080112172 container died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, container_name=ovn_controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc.) Nov 23 04:04:11 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.timer: Deactivated successfully. Nov 23 04:04:11 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736. Nov 23 04:04:11 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed to open /run/systemd/transient/e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: No such file or directory Nov 23 04:04:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736-userdata-shm.mount: Deactivated successfully. 
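(Annotation: the podman "container died" / "container cleanup" entries above embed the event type, the container ID, and the image= and name= labels in a fixed order. A minimal parsing sketch, assuming lines shaped like those entries with image= and name= leading the label list; the regex and function name are illustrative and not from any podman API.)

    import re

    PODMAN_EVENT = re.compile(
        r"container (?P<event>died|cleanup) "
        r"(?P<cid>[0-9a-f]+) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )

    def parse_podman_event(line: str):
        # Returns the event type, container ID, image, and container name,
        # or None if the line does not look like a podman container event.
        m = PODMAN_EVENT.search(line)
        return m.groupdict() if m else None

    sample = ("container died e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 "
              "(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, "
              "name=ovn_controller, config_id=tripleo_step4)")
    # Prints the event, container ID, image, and name extracted from the sample line.
    print(parse_podman_event(sample))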
Nov 23 04:04:11 localhost podman[110682]: 2025-11-23 09:04:11.718723886 +0000 UTC m=+0.129170942 container cleanup e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 23 04:04:11 localhost podman[110682]: ovn_controller Nov 23 04:04:11 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.timer: Failed to open /run/systemd/transient/e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.timer: No such file or directory Nov 23 04:04:11 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed to open /run/systemd/transient/e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: No such file or directory Nov 23 04:04:11 localhost podman[110695]: 2025-11-23 09:04:11.758212629 +0000 UTC m=+0.078755550 container cleanup e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, version=17.1.12, container_name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 04:04:11 localhost systemd[1]: libpod-conmon-e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.scope: Deactivated successfully. Nov 23 04:04:11 localhost systemd[1]: var-lib-containers-storage-overlay-79cb65f3d881d4025c031ad58fd79dbe3fe721b3499b4f6bf264330e3666efe6-merged.mount: Deactivated successfully. Nov 23 04:04:11 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.timer: Failed to open /run/systemd/transient/e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.timer: No such file or directory Nov 23 04:04:11 localhost systemd[1]: e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: Failed to open /run/systemd/transient/e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736.service: No such file or directory Nov 23 04:04:11 localhost podman[110709]: 2025-11-23 09:04:11.865079499 +0000 UTC m=+0.073568199 container cleanup e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4) Nov 23 04:04:11 localhost podman[110709]: ovn_controller Nov 23 04:04:11 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully. Nov 23 04:04:11 localhost systemd[1]: Stopped ovn_controller container. Nov 23 04:04:12 localhost python3.9[110812]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:04:12 localhost systemd[1]: Reloading. Nov 23 04:04:12 localhost systemd-rc-local-generator[110844]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:04:12 localhost systemd-sysv-generator[110847]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:04:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:04:13 localhost systemd[1]: Stopping ovn_metadata_agent container... Nov 23 04:04:13 localhost systemd[1]: libpod-f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.scope: Deactivated successfully. Nov 23 04:04:13 localhost systemd[1]: libpod-f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.scope: Consumed 9.473s CPU time. 
Nov 23 04:04:13 localhost podman[110853]: 2025-11-23 09:04:13.515761552 +0000 UTC m=+0.463791537 container died f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 04:04:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.timer: Deactivated successfully. Nov 23 04:04:13 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745. 
Nov 23 04:04:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed to open /run/systemd/transient/f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: No such file or directory Nov 23 04:04:13 localhost podman[110853]: 2025-11-23 09:04:13.642678093 +0000 UTC m=+0.590708028 container cleanup f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.openshift.expose-services=, container_name=ovn_metadata_agent) Nov 23 04:04:13 localhost podman[110853]: ovn_metadata_agent Nov 23 04:04:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.timer: Failed to open /run/systemd/transient/f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.timer: No such file or 
directory Nov 23 04:04:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed to open /run/systemd/transient/f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: No such file or directory Nov 23 04:04:13 localhost podman[110865]: 2025-11-23 09:04:13.671492726 +0000 UTC m=+0.140647578 container cleanup f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Nov 23 04:04:13 localhost systemd[1]: libpod-conmon-f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.scope: Deactivated successfully. 
Nov 23 04:04:13 localhost podman[110898]: error opening file `/run/crun/f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745/status`: No such file or directory Nov 23 04:04:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.timer: Failed to open /run/systemd/transient/f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.timer: No such file or directory Nov 23 04:04:13 localhost systemd[1]: f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: Failed to open /run/systemd/transient/f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745.service: No such file or directory Nov 23 04:04:13 localhost podman[110886]: 2025-11-23 09:04:13.797163229 +0000 UTC m=+0.090057170 container cleanup f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, 
tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, distribution-scope=public) Nov 23 04:04:13 localhost podman[110886]: ovn_metadata_agent Nov 23 04:04:13 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully. Nov 23 04:04:13 localhost systemd[1]: Stopped ovn_metadata_agent container. Nov 23 04:04:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43397 DF PROTO=TCP SPT=36372 DPT=9105 SEQ=3258578085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62AD500000000001030307) Nov 23 04:04:14 localhost systemd[1]: var-lib-containers-storage-overlay-cba8ac6fa9e418015cb2e81f4c2ff1b5d5fce1183c5cd7f7a69873fc39e13080-merged.mount: Deactivated successfully. Nov 23 04:04:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745-userdata-shm.mount: Deactivated successfully. Nov 23 04:04:14 localhost python3.9[110993]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:04:14 localhost systemd[1]: Reloading. Nov 23 04:04:14 localhost systemd-rc-local-generator[111020]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:04:14 localhost systemd-sysv-generator[111025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:04:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
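The systemd_service task above (enabled=False, state=stopped for tripleo_rsyslog.service) is what precedes the daemon reload and the generator warnings that follow it. A rough, non-Ansible sketch of the same disable-and-stop step, assuming systemctl is available and the caller has sufficient privileges:

    import subprocess

    def disable_and_stop(unit: str) -> None:
        # Mirrors enabled=False in the logged invocation.
        subprocess.run(["systemctl", "disable", unit], check=False)
        # Mirrors state=stopped in the logged invocation.
        subprocess.run(["systemctl", "stop", unit], check=False)

    disable_and_stop("tripleo_rsyslog.service")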
Nov 23 04:04:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29775 DF PROTO=TCP SPT=33534 DPT=9882 SEQ=3699566571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62BA130000000001030307) Nov 23 04:04:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25868 DF PROTO=TCP SPT=42176 DPT=9102 SEQ=2877569014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62C2F70000000001030307) Nov 23 04:04:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13714 DF PROTO=TCP SPT=34938 DPT=9101 SEQ=3817983328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62CE0F0000000001030307) Nov 23 04:04:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48227 DF PROTO=TCP SPT=49952 DPT=9101 SEQ=3331389082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62D80F0000000001030307) Nov 23 04:04:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56119 DF PROTO=TCP SPT=34308 DPT=9100 SEQ=3253395387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62ED3E0000000001030307) Nov 23 04:04:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56120 DF PROTO=TCP SPT=34308 DPT=9100 SEQ=3253395387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62F14F0000000001030307) Nov 23 04:04:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27575 DF PROTO=TCP SPT=40280 DPT=9100 SEQ=1997505800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B62FC0F0000000001030307) Nov 23 04:04:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56122 DF PROTO=TCP SPT=34308 DPT=9100 SEQ=3253395387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63090F0000000001030307) Nov 23 04:04:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23714 DF PROTO=TCP SPT=35464 DPT=9105 SEQ=957795326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6312CF0000000001030307) Nov 23 04:04:42 localhost sshd[111045]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:04:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23715 DF PROTO=TCP SPT=35464 DPT=9105 SEQ=957795326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63228F0000000001030307) Nov 23 04:04:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56123 DF PROTO=TCP SPT=34308 DPT=9100 SEQ=3253395387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B632A100000000001030307) Nov 23 04:04:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29780 DF PROTO=TCP SPT=33534 DPT=9882 SEQ=3699566571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63360F0000000001030307) Nov 23 04:04:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23716 DF PROTO=TCP SPT=35464 DPT=9105 SEQ=957795326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63420F0000000001030307) Nov 23 04:04:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10812 DF PROTO=TCP SPT=51116 DPT=9102 SEQ=3136707997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6353CF0000000001030307) Nov 23 04:05:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2071 DF PROTO=TCP SPT=50868 DPT=9100 SEQ=1869702294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63626E0000000001030307) Nov 23 04:05:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2072 DF PROTO=TCP SPT=50868 DPT=9100 SEQ=1869702294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6366900000000001030307) Nov 23 04:05:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27410 DF PROTO=TCP SPT=37044 DPT=9100 SEQ=2357792630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63720F0000000001030307) Nov 23 04:05:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2074 DF PROTO=TCP SPT=50868 DPT=9100 SEQ=1869702294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B637E500000000001030307) Nov 23 04:05:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21816 DF PROTO=TCP SPT=50044 DPT=9105 SEQ=680463417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63880F0000000001030307) Nov 23 04:05:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21817 DF PROTO=TCP SPT=50044 DPT=9105 SEQ=680463417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6397D00000000001030307) Nov 23 04:05:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17221 DF PROTO=TCP SPT=60222 DPT=9882 SEQ=379640098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A6B63A4730000000001030307) Nov 23 04:05:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20123 DF PROTO=TCP SPT=39976 DPT=9882 SEQ=3458105517 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63AC0F0000000001030307) Nov 23 04:05:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47357 DF PROTO=TCP SPT=35582 DPT=9101 SEQ=2822442430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63B8100000000001030307) Nov 23 04:05:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57097 DF PROTO=TCP SPT=49248 DPT=9102 SEQ=4238317900 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63C90F0000000001030307) Nov 23 04:05:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40325 DF PROTO=TCP SPT=34390 DPT=9100 SEQ=2410233528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63D79E0000000001030307) Nov 23 04:05:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40326 DF PROTO=TCP SPT=34390 DPT=9100 SEQ=2410233528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63DB8F0000000001030307) Nov 23 04:05:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56125 DF PROTO=TCP SPT=34308 DPT=9100 SEQ=3253395387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63E80F0000000001030307) Nov 23 04:05:35 localhost systemd[1]: session-36.scope: Deactivated successfully. Nov 23 04:05:35 localhost systemd[1]: session-36.scope: Consumed 18.966s CPU time. Nov 23 04:05:35 localhost systemd-logind[760]: Session 36 logged out. Waiting for processes to exit. Nov 23 04:05:35 localhost systemd-logind[760]: Removed session 36. 
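The recurring kernel "DROPPING:" entries look like netfilter LOG output (the prefix is whatever the logging rule sets; that is an inference from the message format, not something stated in the log). They all record SYNs from 192.168.122.10 to br-ex destination ports such as 9100, 9101, 9102, 9105 and 9882 being dropped and retried. A self-contained sketch that tallies the dropped protocol/port pairs from a chunk of journal text:

    import re
    from collections import Counter

    # \b keeps "MACPROTO=..." from being mistaken for the "PROTO=" field.
    DROP_RE = re.compile(r"DROPPING: .*?\bPROTO=(\S+) SPT=\d+ DPT=(\d+)")

    def dropped_ports(journal_text: str) -> Counter:
        # Count (protocol, destination port) pairs seen in DROPPING entries.
        return Counter(m.groups() for m in DROP_RE.finditer(journal_text))

    sample = ("Nov 23 04:04:30 localhost kernel: DROPPING: IN=br-ex OUT= "
              "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 "
              "TTL=62 ID=56119 DF PROTO=TCP SPT=34308 DPT=9100 SEQ=3253395387")
    print(dropped_ports(sample))  # Counter({('TCP', '9100'): 1})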
Nov 23 04:05:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40328 DF PROTO=TCP SPT=34390 DPT=9100 SEQ=2410233528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63F34F0000000001030307) Nov 23 04:05:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55181 DF PROTO=TCP SPT=44078 DPT=9105 SEQ=2188813190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B63FD4F0000000001030307) Nov 23 04:05:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55182 DF PROTO=TCP SPT=44078 DPT=9105 SEQ=2188813190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B640D0F0000000001030307) Nov 23 04:05:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46160 DF PROTO=TCP SPT=59100 DPT=9882 SEQ=194257186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6419A30000000001030307) Nov 23 04:05:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17226 DF PROTO=TCP SPT=60222 DPT=9882 SEQ=379640098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64200F0000000001030307) Nov 23 04:05:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55183 DF PROTO=TCP SPT=44078 DPT=9105 SEQ=2188813190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B642E0F0000000001030307) Nov 23 04:05:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43950 DF PROTO=TCP SPT=54662 DPT=9102 SEQ=2442047980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B643E500000000001030307) Nov 23 04:06:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7120 DF PROTO=TCP SPT=37086 DPT=9100 SEQ=1949862508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B644CCE0000000001030307) Nov 23 04:06:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7121 DF PROTO=TCP SPT=37086 DPT=9100 SEQ=1949862508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6450D00000000001030307) Nov 23 04:06:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2077 DF PROTO=TCP SPT=50868 DPT=9100 SEQ=1869702294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B645C0F0000000001030307) Nov 23 04:06:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=7123 DF PROTO=TCP SPT=37086 DPT=9100 SEQ=1949862508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64688F0000000001030307) Nov 23 04:06:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7906 DF PROTO=TCP SPT=36630 DPT=9105 SEQ=2162342130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64724F0000000001030307) Nov 23 04:06:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7907 DF PROTO=TCP SPT=36630 DPT=9105 SEQ=2162342130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64820F0000000001030307) Nov 23 04:06:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36261 DF PROTO=TCP SPT=34152 DPT=9882 SEQ=269850431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B648ED20000000001030307) Nov 23 04:06:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46165 DF PROTO=TCP SPT=59100 DPT=9882 SEQ=194257186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64960F0000000001030307) Nov 23 04:06:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7908 DF PROTO=TCP SPT=36630 DPT=9105 SEQ=2162342130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64A2100000000001030307) Nov 23 04:06:22 localhost sshd[111250]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:06:22 localhost systemd-logind[760]: New session 37 of user zuul. Nov 23 04:06:22 localhost systemd[1]: Started Session 37 of User zuul. 
Nov 23 04:06:22 localhost python3.9[111331]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:23 localhost python3.9[111423]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:24 localhost python3.9[111515]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:24 localhost python3.9[111607]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:25 localhost python3.9[111699]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:25 localhost python3.9[111791]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46376 DF PROTO=TCP SPT=40966 DPT=9102 SEQ=3283095643 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64B38F0000000001030307) Nov 23 04:06:26 localhost python3.9[111883]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:27 localhost python3.9[111975]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:27 localhost python3.9[112067]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:28 localhost python3.9[112159]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:29 localhost python3.9[112251]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:29 localhost python3.9[112343]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5512 DF PROTO=TCP SPT=48226 DPT=9100 SEQ=2379770603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64C1FE0000000001030307) Nov 23 04:06:30 localhost python3.9[112435]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:31 localhost python3.9[112527]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5513 DF PROTO=TCP SPT=48226 DPT=9100 SEQ=2379770603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64C60F0000000001030307) Nov 23 04:06:31 localhost python3.9[112619]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:32 localhost python3.9[112711]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:33 localhost python3.9[112803]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:33 localhost python3.9[112895]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40331 DF PROTO=TCP SPT=34390 DPT=9100 SEQ=2410233528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64D20F0000000001030307) Nov 23 04:06:34 localhost python3.9[112987]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:34 localhost python3.9[113079]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:35 localhost python3.9[113171]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:36 localhost python3.9[113263]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:36 localhost python3.9[113355]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5515 DF PROTO=TCP SPT=48226 DPT=9100 SEQ=2379770603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64DDCF0000000001030307) Nov 23 04:06:37 localhost python3.9[113447]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:38 localhost python3.9[113539]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:38 localhost python3.9[113631]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:39 localhost python3.9[113723]: ansible-ansible.builtin.file Invoked with 
path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51531 DF PROTO=TCP SPT=33474 DPT=9105 SEQ=3890067455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64E78F0000000001030307) Nov 23 04:06:40 localhost python3.9[113815]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:40 localhost python3.9[113907]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:41 localhost python3.9[113999]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:41 localhost python3.9[114091]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:42 localhost python3.9[114183]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:43 localhost python3.9[114275]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Nov 23 04:06:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51532 DF PROTO=TCP SPT=33474 DPT=9105 SEQ=3890067455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B64F74F0000000001030307) Nov 23 04:06:43 localhost python3.9[114367]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:44 localhost python3.9[114459]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:45 localhost python3.9[114551]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:45 localhost python3.9[114643]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:46 localhost python3.9[114735]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:46 localhost python3.9[114827]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51007 DF PROTO=TCP SPT=33188 DPT=9882 SEQ=1118172170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6504030000000001030307) Nov 23 04:06:47 
localhost python3.9[114919]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:48 localhost python3.9[115011]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:49 localhost python3.9[115103]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:06:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41146 DF PROTO=TCP SPT=35062 DPT=9102 SEQ=99921495 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B650CE40000000001030307) Nov 23 04:06:50 localhost python3.9[115195]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:06:51 localhost python3.9[115287]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 23 04:06:52 localhost python3.9[115379]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:06:52 localhost systemd[1]: Reloading. Nov 23 04:06:52 localhost systemd-rc-local-generator[115434]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:06:52 localhost systemd-sysv-generator[115439]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:06:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
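The long run of ansible.builtin.file tasks above sets state=absent on each tripleo_*.service unit file, first under /usr/lib/systemd/system and then under /etc/systemd/system, before certmonger is conditionally disabled and systemd is reloaded. A minimal non-Ansible sketch of the same idempotent removal (unit names taken from the entries above; paths and privileges assumed as on the logged host):

    from pathlib import Path

    UNITS = [
        "tripleo_ovn_controller.service",
        "tripleo_ovn_metadata_agent.service",
        "tripleo_rsyslog.service",
        # ... the remaining tripleo_* units removed in the log follow the same pattern
    ]

    def remove_unit_files(units) -> None:
        for unit in units:
            for base in ("/usr/lib/systemd/system", "/etc/systemd/system"):
                # Like state=absent: deleting a file that is already gone is not an error.
                Path(base, unit).unlink(missing_ok=True)

    remove_unit_files(UNITS)
    # A systemd daemon reload follows in the log so the removed units are forgotten.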
Nov 23 04:06:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25795 DF PROTO=TCP SPT=57452 DPT=9101 SEQ=2713822403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65180F0000000001030307) Nov 23 04:06:52 localhost podman[115550]: 2025-11-23 09:06:52.81896048 +0000 UTC m=+0.098117230 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.buildah.version=1.33.12, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, distribution-scope=public, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, architecture=x86_64) Nov 23 04:06:52 localhost podman[115550]: 2025-11-23 09:06:52.906430383 +0000 UTC m=+0.185587133 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_CLEAN=True, vcs-type=git, version=7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph) Nov 23 04:06:53 localhost python3.9[115652]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:06:53 localhost python3.9[115809]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True 
strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:06:54 localhost python3.9[115923]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:06:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44203 DF PROTO=TCP SPT=45212 DPT=9101 SEQ=1801703644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6522100000000001030307) Nov 23 04:06:56 localhost python3.9[116031]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:06:56 localhost python3.9[116124]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:06:57 localhost python3.9[116217]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:06:57 localhost python3.9[116310]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:06:58 localhost python3.9[116403]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:06:59 localhost python3.9[116496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:06:59 localhost python3.9[116589]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14484 DF PROTO=TCP SPT=59546 DPT=9100 SEQ=937230638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65372E0000000001030307) Nov 23 04:07:00 localhost python3.9[116682]: ansible-ansible.legacy.command Invoked with 
cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14485 DF PROTO=TCP SPT=59546 DPT=9100 SEQ=937230638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B653B500000000001030307) Nov 23 04:07:02 localhost python3.9[116775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:02 localhost python3.9[116868]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:03 localhost python3.9[116961]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7126 DF PROTO=TCP SPT=37086 DPT=9100 SEQ=1949862508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65460F0000000001030307) Nov 23 04:07:04 localhost python3.9[117054]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:04 localhost python3.9[117147]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:05 localhost python3.9[117240]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:05 localhost python3.9[117333]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14487 DF PROTO=TCP SPT=59546 DPT=9100 SEQ=937230638 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65530F0000000001030307) Nov 23 04:07:07 localhost python3.9[117426]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:09 localhost python3.9[117519]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16094 DF PROTO=TCP SPT=44748 DPT=9105 SEQ=2406667911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B655CCF0000000001030307) Nov 23 04:07:10 localhost python3.9[117612]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:11 localhost systemd[1]: session-37.scope: Deactivated successfully. Nov 23 04:07:11 localhost systemd[1]: session-37.scope: Consumed 30.701s CPU time. Nov 23 04:07:11 localhost systemd-logind[760]: Session 37 logged out. Waiting for processes to exit. Nov 23 04:07:11 localhost systemd-logind[760]: Removed session 37. Nov 23 04:07:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16095 DF PROTO=TCP SPT=44748 DPT=9105 SEQ=2406667911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B656C8F0000000001030307) Nov 23 04:07:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14488 DF PROTO=TCP SPT=59546 DPT=9100 SEQ=937230638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6574100000000001030307) Nov 23 04:07:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51012 DF PROTO=TCP SPT=33188 DPT=9882 SEQ=1118172170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65800F0000000001030307) Nov 23 04:07:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16096 DF PROTO=TCP SPT=44748 DPT=9105 SEQ=2406667911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B658C0F0000000001030307) Nov 23 04:07:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63028 DF PROTO=TCP SPT=54756 DPT=9102 SEQ=942176839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B659DD00000000001030307) Nov 23 04:07:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41621 DF PROTO=TCP SPT=53924 DPT=9100 SEQ=899660987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65AC5E0000000001030307) Nov 23 04:07:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41622 DF PROTO=TCP SPT=53924 DPT=9100 SEQ=899660987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65B04F0000000001030307) Nov 23 04:07:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5518 DF PROTO=TCP SPT=48226 DPT=9100 SEQ=2379770603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65BC0F0000000001030307) Nov 23 04:07:36 localhost sshd[117628]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:07:36 localhost systemd-logind[760]: New session 38 of user zuul. Nov 23 04:07:36 localhost systemd[1]: Started Session 38 of User zuul. Nov 23 04:07:37 localhost python3.9[117721]: ansible-ansible.legacy.ping Invoked with data=pong Nov 23 04:07:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41624 DF PROTO=TCP SPT=53924 DPT=9100 SEQ=899660987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65C8100000000001030307) Nov 23 04:07:38 localhost python3.9[117825]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:07:39 localhost python3.9[117917]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47821 DF PROTO=TCP SPT=57780 DPT=9105 SEQ=3914732160 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65D20F0000000001030307) Nov 23 04:07:40 localhost python3.9[118010]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:07:40 localhost python3.9[118102]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:07:41 localhost python3.9[118194]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:07:42 localhost python3.9[118267]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 
src=/home/zuul/.ansible/tmp/ansible-tmp-1763888861.123149-177-30539507180141/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:07:43 localhost python3.9[118359]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:07:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47822 DF PROTO=TCP SPT=57780 DPT=9105 SEQ=3914732160 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65E1CF0000000001030307) Nov 23 04:07:44 localhost python3.9[118455]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:07:44 localhost python3.9[118547]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:07:45 localhost python3.9[118637]: ansible-ansible.builtin.service_facts Invoked Nov 23 04:07:45 localhost network[118654]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:07:45 localhost network[118655]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:07:45 localhost network[118656]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:07:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:07:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46811 DF PROTO=TCP SPT=58710 DPT=9882 SEQ=1107352018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65EE640000000001030307) Nov 23 04:07:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38564 DF PROTO=TCP SPT=48792 DPT=9882 SEQ=391120336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B65F60F0000000001030307) Nov 23 04:07:49 localhost python3.9[118854]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:07:50 localhost python3.9[118944]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:07:51 localhost python3.9[119040]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:07:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47823 DF PROTO=TCP SPT=57780 DPT=9105 SEQ=3914732160 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66020F0000000001030307) Nov 23 04:07:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54938 DF PROTO=TCP SPT=40052 DPT=9102 SEQ=2755531546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66130F0000000001030307) Nov 23 04:08:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1183 DF PROTO=TCP SPT=56246 DPT=9100 SEQ=3930521557 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A6B66218E0000000001030307) Nov 23 04:08:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1184 DF PROTO=TCP SPT=56246 DPT=9100 SEQ=3930521557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6625900000000001030307) Nov 23 04:08:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14490 DF PROTO=TCP SPT=59546 DPT=9100 SEQ=937230638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66320F0000000001030307) Nov 23 04:08:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1186 DF PROTO=TCP SPT=56246 DPT=9100 SEQ=3930521557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B663D500000000001030307) Nov 23 04:08:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7975 DF PROTO=TCP SPT=40008 DPT=9105 SEQ=376335604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66470F0000000001030307) Nov 23 04:08:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7976 DF PROTO=TCP SPT=40008 DPT=9105 SEQ=376335604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6656CF0000000001030307) Nov 23 04:08:16 localhost systemd[1]: Stopping OpenSSH server daemon... Nov 23 04:08:16 localhost systemd[1]: sshd.service: Deactivated successfully. Nov 23 04:08:16 localhost systemd[1]: Stopped OpenSSH server daemon. Nov 23 04:08:16 localhost systemd[1]: sshd.service: Consumed 1.060s CPU time. Nov 23 04:08:16 localhost systemd[1]: Stopped target sshd-keygen.target. Nov 23 04:08:16 localhost systemd[1]: Stopping sshd-keygen.target... Nov 23 04:08:16 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 04:08:16 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 04:08:16 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 04:08:16 localhost systemd[1]: Reached target sshd-keygen.target. Nov 23 04:08:16 localhost systemd[1]: Starting OpenSSH server daemon... Nov 23 04:08:16 localhost sshd[119160]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:08:16 localhost systemd[1]: Started OpenSSH server daemon. Nov 23 04:08:16 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 04:08:16 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 04:08:16 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 04:08:16 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. 
Nov 23 04:08:16 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 04:08:16 localhost systemd[1]: run-r4230a4889fd047f59cd914d64b3e62df.service: Deactivated successfully. Nov 23 04:08:16 localhost systemd[1]: run-rd390f921c3b64e5d8737770a74860013.service: Deactivated successfully. Nov 23 04:08:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32109 DF PROTO=TCP SPT=54332 DPT=9882 SEQ=2543086702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6663920000000001030307) Nov 23 04:08:17 localhost systemd[1]: Stopping OpenSSH server daemon... Nov 23 04:08:17 localhost systemd[1]: sshd.service: Deactivated successfully. Nov 23 04:08:17 localhost systemd[1]: Stopped OpenSSH server daemon. Nov 23 04:08:17 localhost systemd[1]: Stopped target sshd-keygen.target. Nov 23 04:08:17 localhost systemd[1]: Stopping sshd-keygen.target... Nov 23 04:08:17 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 04:08:17 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 04:08:17 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 04:08:17 localhost systemd[1]: Reached target sshd-keygen.target. Nov 23 04:08:17 localhost systemd[1]: Starting OpenSSH server daemon... Nov 23 04:08:17 localhost sshd[119331]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:08:17 localhost systemd[1]: Started OpenSSH server daemon. 
Nov 23 04:08:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46816 DF PROTO=TCP SPT=58710 DPT=9882 SEQ=1107352018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B666A0F0000000001030307) Nov 23 04:08:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7977 DF PROTO=TCP SPT=40008 DPT=9105 SEQ=376335604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66780F0000000001030307) Nov 23 04:08:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3400 DF PROTO=TCP SPT=42052 DPT=9102 SEQ=427322581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6688500000000001030307) Nov 23 04:08:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2160 DF PROTO=TCP SPT=53564 DPT=9100 SEQ=984513340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6696BE0000000001030307) Nov 23 04:08:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2161 DF PROTO=TCP SPT=53564 DPT=9100 SEQ=984513340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B669ACF0000000001030307) Nov 23 04:08:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41627 DF PROTO=TCP SPT=53924 DPT=9100 SEQ=899660987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66A60F0000000001030307) Nov 23 04:08:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2163 DF PROTO=TCP SPT=53564 DPT=9100 SEQ=984513340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66B28F0000000001030307) Nov 23 04:08:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61485 DF PROTO=TCP SPT=53268 DPT=9105 SEQ=1440408481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66BC500000000001030307) Nov 23 04:08:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61486 DF PROTO=TCP SPT=53268 DPT=9105 SEQ=1440408481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66CC0F0000000001030307) Nov 23 04:08:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8598 DF PROTO=TCP SPT=49708 DPT=9882 SEQ=943511215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66D8C30000000001030307) Nov 23 04:08:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32114 DF 
PROTO=TCP SPT=54332 DPT=9882 SEQ=2543086702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66E00F0000000001030307) Nov 23 04:08:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61487 DF PROTO=TCP SPT=53268 DPT=9105 SEQ=1440408481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66EC0F0000000001030307) Nov 23 04:08:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64407 DF PROTO=TCP SPT=54448 DPT=9102 SEQ=4262472419 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B66FD500000000001030307) Nov 23 04:09:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59746 DF PROTO=TCP SPT=59730 DPT=9100 SEQ=2229786657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B670BEE0000000001030307) Nov 23 04:09:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59747 DF PROTO=TCP SPT=59730 DPT=9100 SEQ=2229786657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67100F0000000001030307) Nov 23 04:09:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1189 DF PROTO=TCP SPT=56246 DPT=9100 SEQ=3930521557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B671C100000000001030307) Nov 23 04:09:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59749 DF PROTO=TCP SPT=59730 DPT=9100 SEQ=2229786657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6727CF0000000001030307) Nov 23 04:09:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50261 DF PROTO=TCP SPT=47492 DPT=9105 SEQ=3822752319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6731900000000001030307) Nov 23 04:09:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50262 DF PROTO=TCP SPT=47492 DPT=9105 SEQ=3822752319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67414F0000000001030307) Nov 23 04:09:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49854 DF PROTO=TCP SPT=59378 DPT=9882 SEQ=99676404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B674DF30000000001030307) Nov 23 04:09:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23669 DF PROTO=TCP SPT=56944 DPT=9102 SEQ=2435751458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6756D50000000001030307) Nov 23 04:09:22 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45399 DF PROTO=TCP SPT=37454 DPT=9101 SEQ=1100365899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67620F0000000001030307) Nov 23 04:09:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=603 DF PROTO=TCP SPT=50506 DPT=9101 SEQ=3356340148 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B676C0F0000000001030307) Nov 23 04:09:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13578 DF PROTO=TCP SPT=49676 DPT=9100 SEQ=1172121438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67811E0000000001030307) Nov 23 04:09:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13579 DF PROTO=TCP SPT=49676 DPT=9100 SEQ=1172121438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67850F0000000001030307) Nov 23 04:09:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2166 DF PROTO=TCP SPT=53564 DPT=9100 SEQ=984513340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6790100000000001030307) Nov 23 04:09:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13581 DF PROTO=TCP SPT=49676 DPT=9100 SEQ=1172121438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B679CCF0000000001030307) Nov 23 04:09:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13209 DF PROTO=TCP SPT=54750 DPT=9105 SEQ=690224277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67A6D00000000001030307) Nov 23 04:09:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13210 DF PROTO=TCP SPT=54750 DPT=9105 SEQ=690224277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67B68F0000000001030307) Nov 23 04:09:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13582 DF PROTO=TCP SPT=49676 DPT=9100 SEQ=1172121438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67BE100000000001030307) Nov 23 04:09:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49859 DF PROTO=TCP SPT=59378 DPT=9882 SEQ=99676404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67CA0F0000000001030307) Nov 23 04:09:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13211 DF PROTO=TCP SPT=54750 DPT=9105 SEQ=690224277 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67D60F0000000001030307) Nov 23 04:09:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31898 DF PROTO=TCP SPT=43004 DPT=9102 SEQ=3752222699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67E7D00000000001030307) Nov 23 04:10:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16986 DF PROTO=TCP SPT=43802 DPT=9100 SEQ=4281792712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67F64F0000000001030307) Nov 23 04:10:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16987 DF PROTO=TCP SPT=43802 DPT=9100 SEQ=4281792712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B67FA500000000001030307) Nov 23 04:10:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59752 DF PROTO=TCP SPT=59730 DPT=9100 SEQ=2229786657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68060F0000000001030307) Nov 23 04:10:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16989 DF PROTO=TCP SPT=43802 DPT=9100 SEQ=4281792712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68120F0000000001030307) Nov 23 04:10:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51341 DF PROTO=TCP SPT=42256 DPT=9105 SEQ=53537755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B681BCF0000000001030307) Nov 23 04:10:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51342 DF PROTO=TCP SPT=42256 DPT=9105 SEQ=53537755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B682B900000000001030307) Nov 23 04:10:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20964 DF PROTO=TCP SPT=59106 DPT=9882 SEQ=3243868483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6838530000000001030307) Nov 23 04:10:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=612 DF PROTO=TCP SPT=53664 DPT=9882 SEQ=3078842990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68400F0000000001030307) Nov 23 04:10:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51343 DF PROTO=TCP SPT=42256 DPT=9105 SEQ=53537755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B684C0F0000000001030307) Nov 23 04:10:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26131 DF PROTO=TCP SPT=53838 DPT=9102 SEQ=2911074026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B685D0F0000000001030307) Nov 23 04:10:28 localhost kernel: SELinux: Converting 2741 SID table entries... Nov 23 04:10:28 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 04:10:28 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 04:10:28 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 04:10:28 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 04:10:28 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 04:10:28 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 04:10:28 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 04:10:30 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=17 res=1 Nov 23 04:10:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17512 DF PROTO=TCP SPT=48188 DPT=9100 SEQ=1637596242 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B686B7F0000000001030307) Nov 23 04:10:30 localhost python3.9[120061]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:10:31 localhost python3.9[120153]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:10:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17513 DF PROTO=TCP SPT=48188 DPT=9100 SEQ=1637596242 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B686F8F0000000001030307) Nov 23 04:10:31 localhost python3.9[120226]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889030.5689702-426-100773214361563/.source.fact _original_basename=.p_o1fxts follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:10:32 localhost python3.9[120316]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:10:34 localhost python3.9[120414]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:10:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13584 DF PROTO=TCP SPT=49676 DPT=9100 SEQ=1172121438 ACK=0 WINDOW=32640 RES=0x00 
SYN URGP=0 OPT (020405500402080A6B687C0F0000000001030307) Nov 23 04:10:35 localhost python3.9[120468]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:10:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17515 DF PROTO=TCP SPT=48188 DPT=9100 SEQ=1637596242 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68874F0000000001030307) Nov 23 04:10:38 localhost systemd[1]: Reloading. Nov 23 04:10:38 localhost systemd-rc-local-generator[120501]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:10:38 localhost systemd-sysv-generator[120506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:10:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:10:38 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 04:10:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53259 DF PROTO=TCP SPT=53230 DPT=9105 SEQ=3446106992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68910F0000000001030307) Nov 23 04:10:40 localhost python3.9[120608]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:10:42 localhost python3.9[120847]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False Nov 23 04:10:43 localhost python3.9[120939]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None Nov 23 04:10:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53260 DF PROTO=TCP SPT=53230 DPT=9105 SEQ=3446106992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68A0CF0000000001030307) Nov 23 04:10:45 localhost python3.9[121032]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:10:46 localhost python3.9[121124]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None Nov 23 04:10:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44176 DF PROTO=TCP SPT=55962 DPT=9882 SEQ=1186034865 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68AD830000000001030307) Nov 23 04:10:48 localhost python3.9[121216]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:10:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20969 DF PROTO=TCP SPT=59106 DPT=9882 SEQ=3243868483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68B4100000000001030307) Nov 23 04:10:48 localhost python3.9[121308]: 
ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:10:49 localhost python3.9[121381]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889048.418845-750-66187672134894/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:10:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24806 DF PROTO=TCP SPT=55248 DPT=9101 SEQ=1326538951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68C20F0000000001030307) Nov 23 04:10:54 localhost python3.9[121473]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:10:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4319 DF PROTO=TCP SPT=37420 DPT=9102 SEQ=1085738900 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68D20F0000000001030307) Nov 23 04:10:56 localhost python3.9[121567]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None Nov 23 04:10:57 localhost python3.9[121660]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None Nov 23 04:10:58 localhost python3.9[121753]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Nov 23 04:10:59 localhost python3.9[121851]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None Nov 23 04:11:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32397 DF PROTO=TCP SPT=56194 DPT=9100 SEQ=3142364982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68E0AE0000000001030307) Nov 23 04:11:00 localhost python3.9[122002]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None 
disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:11:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32398 DF PROTO=TCP SPT=56194 DPT=9100 SEQ=3142364982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68E4D00000000001030307) Nov 23 04:11:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16992 DF PROTO=TCP SPT=43802 DPT=9100 SEQ=4281792712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68F00F0000000001030307) Nov 23 04:11:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32400 DF PROTO=TCP SPT=56194 DPT=9100 SEQ=3142364982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B68FC8F0000000001030307) Nov 23 04:11:07 localhost python3.9[122149]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:11:08 localhost python3.9[122256]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:11:09 localhost python3.9[122329]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889068.0773659-1023-51610509977390/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:11:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60487 DF PROTO=TCP SPT=34918 DPT=9105 SEQ=2105924459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B69064F0000000001030307) Nov 23 04:11:10 localhost python3.9[122422]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:11:10 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 04:11:10 localhost systemd[1]: Stopped Load Kernel Modules. Nov 23 04:11:10 localhost systemd[1]: Stopping Load Kernel Modules... Nov 23 04:11:10 localhost systemd[1]: Starting Load Kernel Modules... Nov 23 04:11:10 localhost systemd-modules-load[122426]: Module 'msr' is built in Nov 23 04:11:10 localhost systemd[1]: Finished Load Kernel Modules. 
Nov 23 04:11:11 localhost python3.9[122519]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:11:12 localhost python3.9[122592]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889071.1555865-1092-264158602671415/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:11:13 localhost python3.9[122684]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:11:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60488 DF PROTO=TCP SPT=34918 DPT=9105 SEQ=2105924459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6916100000000001030307) Nov 23 04:11:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56663 DF PROTO=TCP SPT=43120 DPT=9882 SEQ=1711280604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6922B30000000001030307) Nov 23 04:11:17 localhost python3.9[122776]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:11:18 localhost python3.9[122868]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Nov 23 04:11:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44181 DF PROTO=TCP SPT=55962 DPT=9882 SEQ=1186034865 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B692A0F0000000001030307) Nov 23 04:11:19 localhost python3.9[122958]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:11:20 localhost python3.9[123050]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:11:20 localhost sshd[123051]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:11:20 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Nov 23 04:11:20 localhost systemd[1]: tuned.service: Deactivated successfully. 
Nov 23 04:11:20 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Nov 23 04:11:20 localhost systemd[1]: tuned.service: Consumed 1.897s CPU time, no IO. Nov 23 04:11:20 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Nov 23 04:11:21 localhost systemd[1]: Started Dynamic System Tuning Daemon. Nov 23 04:11:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60489 DF PROTO=TCP SPT=34918 DPT=9105 SEQ=2105924459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B69360F0000000001030307) Nov 23 04:11:23 localhost python3.9[123155]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Nov 23 04:11:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4469 DF PROTO=TCP SPT=39344 DPT=9102 SEQ=1567926434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B69474F0000000001030307) Nov 23 04:11:27 localhost python3.9[123247]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:11:27 localhost systemd[1]: Reloading. Nov 23 04:11:27 localhost systemd-sysv-generator[123280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:11:27 localhost systemd-rc-local-generator[123276]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:11:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:11:28 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=308 TOS=0x08 PREC=0x20 TTL=54 ID=50024 DF PROTO=TCP SPT=443 DPT=54896 SEQ=352521275 ACK=3555618399 WINDOW=131 RES=0x00 ACK URGP=0 OPT (0101080AF347ADF585CEFB5A) Nov 23 04:11:28 localhost python3.9[123377]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:11:29 localhost systemd[1]: Reloading. Nov 23 04:11:29 localhost systemd-sysv-generator[123406]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:11:29 localhost systemd-rc-local-generator[123403]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:11:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:11:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22914 DF PROTO=TCP SPT=36094 DPT=9100 SEQ=1153092165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6959CF0000000001030307) Nov 23 04:11:32 localhost python3.9[123507]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:11:32 localhost python3.9[123600]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:11:32 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS Nov 23 04:11:33 localhost python3.9[123693]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:11:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17518 DF PROTO=TCP SPT=48188 DPT=9100 SEQ=1637596242 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6966100000000001030307) Nov 23 04:11:35 localhost python3.9[123792]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:11:36 localhost python3.9[123885]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:11:36 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 04:11:36 localhost systemd[1]: Stopped Apply Kernel Variables. Nov 23 04:11:36 localhost systemd[1]: Stopping Apply Kernel Variables... Nov 23 04:11:36 localhost systemd[1]: Starting Apply Kernel Variables... Nov 23 04:11:36 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 04:11:36 localhost systemd[1]: Finished Apply Kernel Variables. Nov 23 04:11:36 localhost systemd[1]: session-38.scope: Deactivated successfully. Nov 23 04:11:36 localhost systemd[1]: session-38.scope: Consumed 1min 55.448s CPU time. Nov 23 04:11:36 localhost systemd-logind[760]: Session 38 logged out. Waiting for processes to exit. Nov 23 04:11:36 localhost systemd-logind[760]: Removed session 38. 
Nov 23 04:11:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22916 DF PROTO=TCP SPT=36094 DPT=9100 SEQ=1153092165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B69718F0000000001030307) Nov 23 04:11:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9640 DF PROTO=TCP SPT=40000 DPT=9105 SEQ=1651629259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B697B900000000001030307) Nov 23 04:11:42 localhost sshd[123905]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:11:42 localhost systemd-logind[760]: New session 39 of user zuul. Nov 23 04:11:42 localhost systemd[1]: Started Session 39 of User zuul. Nov 23 04:11:43 localhost python3.9[123998]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:11:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9641 DF PROTO=TCP SPT=40000 DPT=9105 SEQ=1651629259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B698B4F0000000001030307) Nov 23 04:11:44 localhost python3.9[124092]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:11:45 localhost sshd[124111]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:11:46 localhost python3.9[124190]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:11:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48014 DF PROTO=TCP SPT=53932 DPT=9882 SEQ=3878124060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6997E20000000001030307) Nov 23 04:11:47 localhost python3.9[124281]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:11:48 localhost python3.9[124377]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:11:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51028 DF PROTO=TCP SPT=56470 DPT=9102 SEQ=460848418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B69A0C40000000001030307) Nov 23 04:11:49 localhost python3.9[124431]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True 
lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:11:51 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=116 TOS=0x08 PREC=0x20 TTL=54 ID=50026 DF PROTO=TCP SPT=443 DPT=54896 SEQ=352521275 ACK=3555618399 WINDOW=131 RES=0x00 ACK URGP=0 OPT (0101080AF34809F585CEFB5A) Nov 23 04:11:53 localhost python3.9[124525]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:11:54 localhost sshd[124640]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:11:54 localhost python3.9[124673]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:11:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55320 DF PROTO=TCP SPT=54742 DPT=9101 SEQ=1637105509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B69B60F0000000001030307) Nov 23 04:11:55 localhost python3.9[124765]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:11:56 localhost python3.9[124870]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:11:56 localhost python3.9[124919]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:11:57 localhost python3.9[125011]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:11:57 localhost python3.9[125084]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889116.7797678-323-66856890746942/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:11:58 localhost python3.9[125176]: 
ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 23 04:11:59 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=564 TOS=0x08 PREC=0x20 TTL=54 ID=65256 DF PROTO=TCP SPT=443 DPT=42030 SEQ=729339214 ACK=4112260683 WINDOW=131 RES=0x00 ACK URGP=0 OPT (0101080AF34826F585CF71E2) Nov 23 04:11:59 localhost python3.9[125268]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 23 04:12:00 localhost python3.9[125360]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 23 04:12:00 localhost python3.9[125452]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 23 04:12:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61885 DF PROTO=TCP SPT=47894 DPT=9100 SEQ=425491319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B69CF0F0000000001030307) Nov 23 04:12:01 localhost python3.9[125542]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:12:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:12:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5370 writes, 735 syncs, 7.31 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 
04:12:02 localhost python3.9[125636]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 23 04:12:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32403 DF PROTO=TCP SPT=56194 DPT=9100 SEQ=3142364982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B69DA100000000001030307) Nov 23 04:12:06 localhost python3.9[125761]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 23 04:12:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:12:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5205 writes, 665 syncs, 7.83 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 04:12:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61887 DF PROTO=TCP SPT=47894 DPT=9100 SEQ=425491319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B69E6D00000000001030307) Nov 23 04:12:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18000 DF PROTO=TCP SPT=51994 DPT=9105 SEQ=3574368270 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B69F0900000000001030307) Nov 23 04:12:10 localhost python3.9[125901]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 23 04:12:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18001 DF PROTO=TCP SPT=51994 DPT=9105 SEQ=3574368270 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A004F0000000001030307) Nov 23 04:12:14 localhost python3.9[126001]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 23 04:12:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61888 DF PROTO=TCP SPT=47894 DPT=9100 SEQ=425491319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A08100000000001030307) Nov 23 04:12:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48019 DF PROTO=TCP SPT=53932 DPT=9882 SEQ=3878124060 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A140F0000000001030307) Nov 23 04:12:19 localhost python3.9[126095]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 23 04:12:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18002 DF PROTO=TCP SPT=51994 DPT=9105 SEQ=3574368270 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A200F0000000001030307) Nov 23 04:12:23 localhost python3.9[126189]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 23 04:12:26 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 
MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=116 TOS=0x08 PREC=0x20 TTL=54 ID=65259 DF PROTO=TCP SPT=443 DPT=42030 SEQ=729339214 ACK=4112260683 WINDOW=131 RES=0x00 ACK URGP=0 OPT (0101080AF3488FF585CF71E2) Nov 23 04:12:27 localhost python3.9[126283]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 23 04:12:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61534 DF PROTO=TCP SPT=43362 DPT=9100 SEQ=914255514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A403E0000000001030307) Nov 23 04:12:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61535 DF PROTO=TCP SPT=43362 DPT=9100 SEQ=914255514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A444F0000000001030307) Nov 23 04:12:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22919 DF PROTO=TCP SPT=36094 DPT=9100 SEQ=1153092165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A500F0000000001030307) Nov 23 04:12:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61537 DF PROTO=TCP SPT=43362 DPT=9100 SEQ=914255514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A5C100000000001030307) Nov 23 04:12:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63313 DF PROTO=TCP SPT=55108 DPT=9105 SEQ=132265777 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A65CF0000000001030307) Nov 23 04:12:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63314 DF PROTO=TCP SPT=55108 DPT=9105 SEQ=132265777 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A758F0000000001030307) Nov 23 04:12:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45099 DF PROTO=TCP SPT=37564 DPT=9882 SEQ=3527275807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A82430000000001030307) Nov 23 04:12:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=24941 DF PROTO=TCP SPT=54528 DPT=9882 SEQ=3595997439 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A8A0F0000000001030307) Nov 23 04:12:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63315 DF PROTO=TCP SPT=55108 DPT=9105 SEQ=132265777 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6A960F0000000001030307) Nov 23 04:12:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11314 DF PROTO=TCP SPT=33830 DPT=9102 SEQ=4262649646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6AA6D00000000001030307) Nov 23 04:13:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12823 DF PROTO=TCP SPT=53996 DPT=9100 SEQ=1028182171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6AB56D0000000001030307) Nov 23 04:13:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12824 DF PROTO=TCP SPT=53996 DPT=9100 SEQ=1028182171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6AB98F0000000001030307) Nov 23 04:13:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61890 DF PROTO=TCP SPT=47894 DPT=9100 SEQ=425491319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6AC6100000000001030307) Nov 23 04:13:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12826 DF PROTO=TCP SPT=53996 DPT=9100 SEQ=1028182171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6AD14F0000000001030307) Nov 23 04:13:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56753 DF PROTO=TCP SPT=37340 DPT=9105 SEQ=1191020289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6ADB0F0000000001030307) Nov 23 04:13:09 localhost python3.9[126467]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:13:10 localhost python3.9[126620]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:13:11 localhost python3.9[126723]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763889190.1904871-722-14825999072953/.source.json _original_basename=.k83va6pc follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f 
backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:13:11 localhost podman[126779]: Nov 23 04:13:11 localhost podman[126779]: 2025-11-23 09:13:11.392433081 +0000 UTC m=+0.074539308 container create 9e61409f071ed94acbcffdd2240eb90b35d185022e270467c6baa9d458a36b24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_turing, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, release=553, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , architecture=x86_64, RELEASE=main, GIT_BRANCH=main, ceph=True, vcs-type=git) Nov 23 04:13:11 localhost systemd[1]: Started libpod-conmon-9e61409f071ed94acbcffdd2240eb90b35d185022e270467c6baa9d458a36b24.scope. Nov 23 04:13:11 localhost podman[126779]: 2025-11-23 09:13:11.362217874 +0000 UTC m=+0.044324131 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:13:11 localhost systemd[1]: Started libcrun container. Nov 23 04:13:11 localhost podman[126779]: 2025-11-23 09:13:11.480599994 +0000 UTC m=+0.162706231 container init 9e61409f071ed94acbcffdd2240eb90b35d185022e270467c6baa9d458a36b24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_turing, com.redhat.component=rhceph-container, architecture=x86_64, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, maintainer=Guillaume Abrioux , RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Nov 23 04:13:11 localhost systemd[1]: tmp-crun.ET2zkv.mount: Deactivated successfully. 
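The ansible-community.general.ini_file tasks logged a little earlier write four settings into /etc/containers/containers.conf: pids_limit under [containers], events_logger and runtime under [engine], and network_backend under [network]. A minimal Python sketch of the same edits, assuming plain configparser semantics are good enough for this file and mirroring the quoted values ("journald", "crun", "netavark") seen in the module arguments, is:

# Sketch only: replicate the containers.conf edits performed by the
# ansible-community.general.ini_file tasks above; this is not the module itself.
import configparser

SETTINGS = {
    "containers": {"pids_limit": "4096"},
    "engine": {"events_logger": '"journald"', "runtime": '"crun"'},
    "network": {"network_backend": '"netavark"'},
}

def write_containers_conf(path="/etc/containers/containers.conf"):
    cfg = configparser.ConfigParser(interpolation=None)
    cfg.read(path)                      # keep any existing sections/options
    for section, options in SETTINGS.items():
        if not cfg.has_section(section):
            cfg.add_section(section)
        for key, value in options.items():
            cfg.set(section, key, value)
    with open(path, "w") as handle:
        cfg.write(handle)

if __name__ == "__main__":
    write_containers_conf()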
Nov 23 04:13:11 localhost podman[126779]: 2025-11-23 09:13:11.49447584 +0000 UTC m=+0.176582057 container start 9e61409f071ed94acbcffdd2240eb90b35d185022e270467c6baa9d458a36b24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_turing, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, name=rhceph, version=7, ceph=True, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:13:11 localhost podman[126779]: 2025-11-23 09:13:11.494858211 +0000 UTC m=+0.176964468 container attach 9e61409f071ed94acbcffdd2240eb90b35d185022e270467c6baa9d458a36b24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_turing, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, vcs-type=git, ceph=True, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, name=rhceph) Nov 23 04:13:11 localhost serene_turing[126795]: 167 167 Nov 23 04:13:11 localhost systemd[1]: libpod-9e61409f071ed94acbcffdd2240eb90b35d185022e270467c6baa9d458a36b24.scope: Deactivated successfully. 
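The recurring kernel DROPPING: entries have the field layout of an nftables/iptables log rule with prefix "DROPPING: "; the rule itself is not captured in this journal. A small sketch, assuming one record per line, for counting which source/port pairs (destination ports 9100, 9101, 9102, 9105, 9882 above) are being rejected:

# Sketch: tally blocked TCP destinations from kernel "DROPPING:" lines.
# The field names (SRC=, DST=, PROTO=, DPT=, ...) are taken from the log above.
import re
import sys
from collections import Counter

FIELD = re.compile(r"(\w+)=(\S*)")

def parse_drop(line):
    """Return a dict of KEY=VALUE fields from one DROPPING line, or None."""
    if "DROPPING:" not in line:
        return None
    return dict(FIELD.findall(line.split("DROPPING:", 1)[1]))

def main(path):
    hits = Counter()
    with open(path) as log:
        for line in log:
            fields = parse_drop(line)
            if fields and fields.get("PROTO") == "TCP":
                hits[(fields.get("SRC"), fields.get("DPT"))] += 1
    for (src, dpt), count in hits.most_common():
        print(f"{count:6d}  {src} -> dpt {dpt}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "/var/log/messages")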
Nov 23 04:13:11 localhost podman[126779]: 2025-11-23 09:13:11.49936588 +0000 UTC m=+0.181472137 container died 9e61409f071ed94acbcffdd2240eb90b35d185022e270467c6baa9d458a36b24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_turing, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, RELEASE=main, distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7) Nov 23 04:13:11 localhost podman[126800]: 2025-11-23 09:13:11.598132789 +0000 UTC m=+0.089293870 container remove 9e61409f071ed94acbcffdd2240eb90b35d185022e270467c6baa9d458a36b24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_turing, io.buildah.version=1.33.12, name=rhceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:13:11 localhost systemd[1]: libpod-conmon-9e61409f071ed94acbcffdd2240eb90b35d185022e270467c6baa9d458a36b24.scope: Deactivated successfully. 
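The serene_turing container above is created, started, attached and removed within a fraction of a second and prints only "167 167", consistent with a probe for the uid/gid that the rhceph image reserves for the ceph user. The journal does not record the command run inside the container, so the stat invocation in this sketch is purely illustrative:

# Sketch: run a short-lived container and capture one line of output,
# mirroring the create/start/attach/died/remove sequence in the log.
# The in-container command is an assumption; the journal does not show it.
import subprocess

IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"

def probe(image=IMAGE, command=("stat", "-c", "%u %g", "/var/lib/ceph")):
    """Run `podman run --rm` and return the container's stdout, stripped."""
    result = subprocess.run(
        ["podman", "run", "--rm", image, *command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(probe())   # e.g. "167 167", as seen in the log above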
Nov 23 04:13:11 localhost podman[126825]: Nov 23 04:13:11 localhost podman[126825]: 2025-11-23 09:13:11.827586327 +0000 UTC m=+0.079801619 container create d36b8f9c2055f65dfc1ae019720eb2122898a5d30f9a19fb34748deeab7916b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_spence, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, name=rhceph, io.openshift.expose-services=, distribution-scope=public, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph) Nov 23 04:13:11 localhost systemd[1]: Started libpod-conmon-d36b8f9c2055f65dfc1ae019720eb2122898a5d30f9a19fb34748deeab7916b6.scope. Nov 23 04:13:11 localhost systemd[1]: Started libcrun container. Nov 23 04:13:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acfdf3f23889270abc2fafba81f2857e89c64705de0d34796883446cd4075be6/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 04:13:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acfdf3f23889270abc2fafba81f2857e89c64705de0d34796883446cd4075be6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 04:13:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acfdf3f23889270abc2fafba81f2857e89c64705de0d34796883446cd4075be6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 04:13:11 localhost podman[126825]: 2025-11-23 09:13:11.797796483 +0000 UTC m=+0.050011795 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:13:11 localhost podman[126825]: 2025-11-23 09:13:11.898665226 +0000 UTC m=+0.150880508 container init d36b8f9c2055f65dfc1ae019720eb2122898a5d30f9a19fb34748deeab7916b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_spence, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux ) Nov 23 04:13:11 localhost podman[126825]: 2025-11-23 09:13:11.908036874 +0000 UTC m=+0.160252156 container start d36b8f9c2055f65dfc1ae019720eb2122898a5d30f9a19fb34748deeab7916b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_spence, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, build-date=2025-09-24T08:57:55, release=553, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, ceph=True, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:13:11 localhost podman[126825]: 2025-11-23 09:13:11.908255801 +0000 UTC m=+0.160471133 container attach d36b8f9c2055f65dfc1ae019720eb2122898a5d30f9a19fb34748deeab7916b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_spence, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:13:12 localhost systemd[1]: var-lib-containers-storage-overlay-0222ef7946da0879b2a14bd8967b5059c7f18f46aa076114ce0a77d7f6be7885-merged.mount: Deactivated successfully. 
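Multi-line payloads in this journal, such as the ceph-osd rocksdb "** DB Stats **" dumps earlier, carry their embedded newlines as the octal escape #012. A short decoding sketch, with a sample string copied from those entries:

# Sketch: unescape "#NNN" octal sequences (#012 -> '\n') produced by syslog
# forwarding, then pick a counter out of a ceph-osd rocksdb stats dump.
import re

def unescape(message):
    """Replace #NNN octal escapes with the characters they encode."""
    return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), message)

def cumulative_writes(message):
    """Return the 'Cumulative writes: N writes' count, if present."""
    match = re.search(r"Cumulative writes: (\d+) writes", unescape(message))
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    sample = ("#012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval"
              "#012Cumulative writes: 5370 writes, 23K keys")
    print(cumulative_writes(sample))   # -> 5370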
Nov 23 04:13:12 localhost charming_spence[126851]: [ Nov 23 04:13:12 localhost charming_spence[126851]: { Nov 23 04:13:12 localhost charming_spence[126851]: "available": false, Nov 23 04:13:12 localhost charming_spence[126851]: "ceph_device": false, Nov 23 04:13:12 localhost charming_spence[126851]: "device_id": "QEMU_DVD-ROM_QM00001", Nov 23 04:13:12 localhost charming_spence[126851]: "lsm_data": {}, Nov 23 04:13:12 localhost charming_spence[126851]: "lvs": [], Nov 23 04:13:12 localhost charming_spence[126851]: "path": "/dev/sr0", Nov 23 04:13:12 localhost charming_spence[126851]: "rejected_reasons": [ Nov 23 04:13:12 localhost charming_spence[126851]: "Has a FileSystem", Nov 23 04:13:12 localhost charming_spence[126851]: "Insufficient space (<5GB)" Nov 23 04:13:12 localhost charming_spence[126851]: ], Nov 23 04:13:12 localhost charming_spence[126851]: "sys_api": { Nov 23 04:13:12 localhost charming_spence[126851]: "actuators": null, Nov 23 04:13:12 localhost charming_spence[126851]: "device_nodes": "sr0", Nov 23 04:13:12 localhost charming_spence[126851]: "human_readable_size": "482.00 KB", Nov 23 04:13:12 localhost charming_spence[126851]: "id_bus": "ata", Nov 23 04:13:12 localhost charming_spence[126851]: "model": "QEMU DVD-ROM", Nov 23 04:13:12 localhost charming_spence[126851]: "nr_requests": "2", Nov 23 04:13:12 localhost charming_spence[126851]: "partitions": {}, Nov 23 04:13:12 localhost charming_spence[126851]: "path": "/dev/sr0", Nov 23 04:13:12 localhost charming_spence[126851]: "removable": "1", Nov 23 04:13:12 localhost charming_spence[126851]: "rev": "2.5+", Nov 23 04:13:12 localhost charming_spence[126851]: "ro": "0", Nov 23 04:13:12 localhost charming_spence[126851]: "rotational": "1", Nov 23 04:13:12 localhost charming_spence[126851]: "sas_address": "", Nov 23 04:13:12 localhost charming_spence[126851]: "sas_device_handle": "", Nov 23 04:13:12 localhost charming_spence[126851]: "scheduler_mode": "mq-deadline", Nov 23 04:13:12 localhost charming_spence[126851]: "sectors": 0, Nov 23 04:13:12 localhost charming_spence[126851]: "sectorsize": "2048", Nov 23 04:13:12 localhost charming_spence[126851]: "size": 493568.0, Nov 23 04:13:12 localhost charming_spence[126851]: "support_discard": "0", Nov 23 04:13:12 localhost charming_spence[126851]: "type": "disk", Nov 23 04:13:12 localhost charming_spence[126851]: "vendor": "QEMU" Nov 23 04:13:12 localhost charming_spence[126851]: } Nov 23 04:13:12 localhost charming_spence[126851]: } Nov 23 04:13:12 localhost charming_spence[126851]: ] Nov 23 04:13:12 localhost systemd[1]: libpod-d36b8f9c2055f65dfc1ae019720eb2122898a5d30f9a19fb34748deeab7916b6.scope: Deactivated successfully. 
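The charming_spence output above is ceph-volume inventory-style JSON: one object per device, with an "available" flag and "rejected_reasons" (here /dev/sr0 is rejected for carrying a filesystem and for being under 5 GB). A sketch for filtering such a report, with the inventory shape taken from the log rather than from any command shown in it:

# Sketch: split a ceph-volume style inventory (as printed above) into
# devices usable as OSDs and devices that were rejected.
import json

def usable_devices(inventory_json):
    """Return (path, human-readable size) for every available device."""
    devices = json.loads(inventory_json)
    return [(d["path"], d.get("sys_api", {}).get("human_readable_size"))
            for d in devices if d.get("available")]

def rejects(inventory_json):
    """Return {path: [reasons]} for every rejected device."""
    return {d["path"]: d.get("rejected_reasons", [])
            for d in json.loads(inventory_json) if not d.get("available")}

if __name__ == "__main__":
    sample = ('[{"available": false, "path": "/dev/sr0", "sys_api": {}, '
              '"rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]}]')
    print(usable_devices(sample))   # []
    print(rejects(sample))          # {'/dev/sr0': ['Has a FileSystem', 'Insufficient space (<5GB)']}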
Nov 23 04:13:12 localhost podman[126825]: 2025-11-23 09:13:12.724106464 +0000 UTC m=+0.976321746 container died d36b8f9c2055f65dfc1ae019720eb2122898a5d30f9a19fb34748deeab7916b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_spence, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, distribution-scope=public, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, release=553, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7) Nov 23 04:13:12 localhost systemd[1]: var-lib-containers-storage-overlay-acfdf3f23889270abc2fafba81f2857e89c64705de0d34796883446cd4075be6-merged.mount: Deactivated successfully. Nov 23 04:13:12 localhost podman[128278]: 2025-11-23 09:13:12.805723057 +0000 UTC m=+0.074522946 container remove d36b8f9c2055f65dfc1ae019720eb2122898a5d30f9a19fb34748deeab7916b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_spence, maintainer=Guillaume Abrioux , GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=553, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main) Nov 23 04:13:12 localhost systemd[1]: libpod-conmon-d36b8f9c2055f65dfc1ae019720eb2122898a5d30f9a19fb34748deeab7916b6.scope: Deactivated successfully. 
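The containers.podman.podman_image tasks that follow pull a series of quay.io images using the auth file staged at /root/.config/containers/auth.json above. Outside Ansible the same pulls reduce to podman pull --authfile; a sketch with the image list copied from the log and everything else illustrative:

# Sketch: pull the images referenced by the podman_image tasks below,
# using the auth file written earlier in this log.
import subprocess

AUTH_FILE = "/root/.config/containers/auth.json"
IMAGES = [
    "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
    "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified",
    "quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified",
    "quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified",
    "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified",
]

def pull_all(images=IMAGES, auth_file=AUTH_FILE):
    for image in images:
        subprocess.run(["podman", "pull", "--authfile", auth_file, image], check=True)

if __name__ == "__main__":
    pull_all()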
Nov 23 04:13:13 localhost python3.9[128356]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 23 04:13:13 localhost systemd-journald[47422]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation. Nov 23 04:13:13 localhost systemd-journald[47422]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 23 04:13:13 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:13:13 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:13:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56754 DF PROTO=TCP SPT=37340 DPT=9105 SEQ=1191020289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6AEACF0000000001030307) Nov 23 04:13:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=175 DF PROTO=TCP SPT=50328 DPT=9882 SEQ=4264926271 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6AF7720000000001030307) Nov 23 04:13:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45104 DF PROTO=TCP SPT=37564 DPT=9882 SEQ=3527275807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6AFE100000000001030307) Nov 23 04:13:19 localhost podman[128385]: 2025-11-23 09:13:13.606623071 +0000 UTC m=+0.045683271 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Nov 23 04:13:20 localhost python3.9[128585]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 23 
04:13:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12037 DF PROTO=TCP SPT=43298 DPT=9101 SEQ=3201413331 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B0C0F0000000001030307) Nov 23 04:13:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12699 DF PROTO=TCP SPT=33936 DPT=9102 SEQ=3689370183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B1C0F0000000001030307) Nov 23 04:13:28 localhost podman[128598]: 2025-11-23 09:13:21.069857236 +0000 UTC m=+0.042779143 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 23 04:13:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36931 DF PROTO=TCP SPT=56424 DPT=9100 SEQ=2587288456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B2A9E0000000001030307) Nov 23 04:13:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36932 DF PROTO=TCP SPT=56424 DPT=9100 SEQ=2587288456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B2E8F0000000001030307) Nov 23 04:13:33 localhost python3.9[128798]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 23 04:13:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61540 DF PROTO=TCP SPT=43362 DPT=9100 SEQ=914255514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B3A0F0000000001030307) Nov 23 04:13:34 localhost podman[128811]: 2025-11-23 09:13:33.146325301 +0000 UTC m=+0.045163836 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Nov 23 04:13:35 localhost python3.9[128976]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None 
pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 23 04:13:36 localhost podman[128988]: 2025-11-23 09:13:35.972634517 +0000 UTC m=+0.046517899 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 04:13:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36934 DF PROTO=TCP SPT=56424 DPT=9100 SEQ=2587288456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B464F0000000001030307) Nov 23 04:13:38 localhost python3.9[129150]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 23 04:13:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51947 DF PROTO=TCP SPT=47810 DPT=9105 SEQ=338844366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B50AC0000000001030307) Nov 23 04:13:41 localhost podman[129162]: 2025-11-23 09:13:38.383936634 +0000 UTC m=+0.048114908 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Nov 23 04:13:42 localhost python3.9[129338]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 23 04:13:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51948 DF PROTO=TCP SPT=47810 DPT=9105 SEQ=338844366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B604F0000000001030307) Nov 23 04:13:44 localhost podman[129350]: 2025-11-23 09:13:42.927623184 +0000 UTC m=+0.057960309 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Nov 23 04:13:45 localhost systemd[1]: 
session-39.scope: Deactivated successfully. Nov 23 04:13:45 localhost systemd[1]: session-39.scope: Consumed 1min 32.056s CPU time. Nov 23 04:13:45 localhost systemd-logind[760]: Session 39 logged out. Waiting for processes to exit. Nov 23 04:13:45 localhost systemd-logind[760]: Removed session 39. Nov 23 04:13:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53149 DF PROTO=TCP SPT=46416 DPT=9882 SEQ=3932319884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B6CA20000000001030307) Nov 23 04:13:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=180 DF PROTO=TCP SPT=50328 DPT=9882 SEQ=4264926271 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B740F0000000001030307) Nov 23 04:13:51 localhost sshd[129543]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:13:51 localhost systemd-logind[760]: New session 40 of user zuul. Nov 23 04:13:51 localhost systemd[1]: Started Session 40 of User zuul. Nov 23 04:13:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51949 DF PROTO=TCP SPT=47810 DPT=9105 SEQ=338844366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B800F0000000001030307) Nov 23 04:13:54 localhost python3.9[129663]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:13:56 localhost python3.9[129812]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None Nov 23 04:13:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25874 DF PROTO=TCP SPT=38650 DPT=9102 SEQ=1777301203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B914F0000000001030307) Nov 23 04:13:57 localhost python3.9[129905]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:13:58 localhost python3.9[129959]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 23 04:14:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8940 DF PROTO=TCP SPT=36144 DPT=9100 SEQ=3306513378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6B9FCE0000000001030307) Nov 23 04:14:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8941 DF PROTO=TCP 
SPT=36144 DPT=9100 SEQ=3306513378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6BA3CF0000000001030307) Nov 23 04:14:03 localhost python3.9[130053]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:14:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12829 DF PROTO=TCP SPT=53996 DPT=9100 SEQ=1028182171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6BB00F0000000001030307) Nov 23 04:14:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8943 DF PROTO=TCP SPT=36144 DPT=9100 SEQ=3306513378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6BBB8F0000000001030307) Nov 23 04:14:07 localhost python3.9[130147]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 23 04:14:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2851 DF PROTO=TCP SPT=42316 DPT=9105 SEQ=2407664597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6BC54F0000000001030307) Nov 23 04:14:10 localhost python3.9[130240]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:14:11 localhost python3.9[130332]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None Nov 23 04:14:12 localhost kernel: SELinux: Converting 2743 SID table entries... 
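The community.general.sefcontext task just above registers /var/lib/edpm-config(/.*)? as container_file_t, and the SELinux "Converting ... SID table entries" message here together with the "policy capability" lines that follow are the resulting policy reload. A rough CLI-equivalent sketch; the restorecon step is an assumption, since in the log the context is actually applied by a later ansible.builtin.file task:

# Sketch: CLI equivalent of the sefcontext task logged above.
# Registers the file-context rule, then relabels the directory.
import subprocess

TARGET = "/var/lib/edpm-config(/.*)?"
PATH = "/var/lib/edpm-config"

def apply_fcontext():
    # selevel=s0 in the task is the default level, so it is not passed here.
    subprocess.run(
        ["semanage", "fcontext", "-a", "-t", "container_file_t", TARGET],
        check=True,
    )
    subprocess.run(["restorecon", "-Rv", PATH], check=True)

if __name__ == "__main__":
    apply_fcontext()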
Nov 23 04:14:12 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 04:14:12 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 04:14:12 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 04:14:12 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 04:14:12 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 04:14:12 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 04:14:12 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 04:14:13 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=18 res=1 Nov 23 04:14:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2852 DF PROTO=TCP SPT=42316 DPT=9105 SEQ=2407664597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6BD50F0000000001030307) Nov 23 04:14:14 localhost python3.9[130489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:14:15 localhost python3.9[130587]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:14:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15804 DF PROTO=TCP SPT=60740 DPT=9882 SEQ=349694956 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6BE1D30000000001030307) Nov 23 04:14:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12421 DF PROTO=TCP SPT=51088 DPT=9102 SEQ=2378379818 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6BEAB40000000001030307) Nov 23 04:14:20 localhost python3.9[130696]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:14:22 localhost python3.9[130941]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 23 04:14:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=762 DF PROTO=TCP SPT=39156 DPT=9101 SEQ=1031545388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6BF60F0000000001030307) Nov 23 04:14:22 localhost python3.9[131031]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:14:23 localhost python3.9[131125]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:14:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5530 DF PROTO=TCP SPT=50602 DPT=9101 SEQ=104511506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6C000F0000000001030307) Nov 23 04:14:27 localhost python3.9[131219]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:14:27 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=1500 TOS=0x08 PREC=0x20 TTL=54 ID=26446 DF PROTO=TCP SPT=443 DPT=42016 SEQ=4082359749 ACK=1760133472 WINDOW=131 RES=0x00 ACK URGP=0 OPT (0101080AF34A6B5585CF71E4) Nov 23 04:14:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24713 DF PROTO=TCP SPT=42794 DPT=9100 SEQ=3244090945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6C190F0000000001030307) Nov 23 04:14:31 localhost python3.9[131313]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Nov 23 04:14:32 localhost systemd[1]: Reloading. Nov 23 04:14:32 localhost systemd-rc-local-generator[131340]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:14:32 localhost systemd-sysv-generator[131346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:14:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:14:33 localhost python3.9[131445]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:14:33 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=52 TOS=0x08 PREC=0x20 TTL=54 ID=60305 DF PROTO=TCP SPT=443 DPT=60992 SEQ=1059007036 ACK=2829486126 WINDOW=131 RES=0x00 ACK FIN URGP=0 OPT (0101080AF34A81F585D27D47) Nov 23 04:14:34 localhost python3.9[131537]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:14:35 localhost python3.9[131631]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:14:35 localhost python3.9[131723]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:14:36 localhost python3.9[131815]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:14:37 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=564 TOS=0x08 PREC=0x20 TTL=54 ID=63102 DF PROTO=TCP SPT=443 DPT=42024 SEQ=3802836379 ACK=1472355420 WINDOW=131 RES=0x00 ACK URGP=0 OPT (0101080AF34A8F3585CF71E5) Nov 23 04:14:37 localhost python3.9[131888]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889276.4846613-563-243383620341436/.source _original_basename=.5687oseq follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:14:38 localhost python3.9[131980]: ansible-ansible.builtin.file Invoked with 
mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:14:39 localhost python3.9[132072]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={} Nov 23 04:14:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47417 DF PROTO=TCP SPT=35094 DPT=9105 SEQ=3917984152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6C3A8F0000000001030307) Nov 23 04:14:40 localhost python3.9[132164]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:14:41 localhost python3.9[132256]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:14:41 localhost python3.9[132329]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889280.5998292-689-88421328230333/.source.yaml _original_basename=._9koe2qj follow=False checksum=0cadac3cfc033a4e07cfac59b43f6459e787700a force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:14:42 localhost python3.9[132421]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml Nov 23 04:14:43 localhost ansible-async_wrapper.py[132526]: Invoked with j925158641865 300 /home/zuul/.ansible/tmp/ansible-tmp-1763889282.8528388-761-22764766538396/AnsiballZ_edpm_os_net_config.py _ Nov 23 04:14:43 localhost ansible-async_wrapper.py[132529]: Starting module and watcher Nov 23 04:14:43 localhost ansible-async_wrapper.py[132529]: Start watching 132530 (300) Nov 23 04:14:43 localhost ansible-async_wrapper.py[132530]: Start module (132530) Nov 23 04:14:43 localhost ansible-async_wrapper.py[132526]: Return async_wrapper task started. 
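The async_wrapper entries above show the network configuration being applied through the edpm_os_net_config module as a background job with a 300-second limit, polled via async_status; the module invocation that follows shows its parameters. A minimal reconstruction of that task, assuming the collection-qualified module name and the polling interval (neither appears in the log):

    - name: Apply os-net-config network configuration
      edpm_os_net_config:            # collection prefix not shown in the log
        config_file: /etc/os-net-config/config.yaml
        debug: true
        detailed_exit_codes: true
        safe_defaults: false
        use_nmstate: false
        cleanup: false
      async: 300
      poll: 2                        # polling interval assumed

On the host this is roughly equivalent to running os-net-config against /etc/os-net-config/config.yaml with debug output and detailed exit codes enabled.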
Nov 23 04:14:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47418 DF PROTO=TCP SPT=35094 DPT=9105 SEQ=3917984152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6C4A4F0000000001030307) Nov 23 04:14:43 localhost python3.9[132531]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False Nov 23 04:14:44 localhost ansible-async_wrapper.py[132530]: Module complete (132530) Nov 23 04:14:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24716 DF PROTO=TCP SPT=42794 DPT=9100 SEQ=3244090945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6C520F0000000001030307) Nov 23 04:14:47 localhost python3.9[132623]: ansible-ansible.legacy.async_status Invoked with jid=j925158641865.132526 mode=status _async_dir=/root/.ansible_async Nov 23 04:14:47 localhost python3.9[132682]: ansible-ansible.legacy.async_status Invoked with jid=j925158641865.132526 mode=cleanup _async_dir=/root/.ansible_async Nov 23 04:14:48 localhost python3.9[132774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:14:48 localhost ansible-async_wrapper.py[132529]: Done in kid B. Nov 23 04:14:48 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=180 TOS=0x08 PREC=0x20 TTL=54 ID=52893 DF PROTO=TCP SPT=443 DPT=60990 SEQ=1088543458 ACK=1299054031 WINDOW=131 RES=0x00 ACK URGP=0 OPT (0101080AF34ABCF585D26EB3) Nov 23 04:14:49 localhost python3.9[132847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889288.1728585-827-38438671265489/.source.returncode _original_basename=.k07qpzc6 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:14:49 localhost python3.9[132939]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:14:50 localhost python3.9[133012]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889289.4220116-875-190021511150398/.source.cfg _original_basename=.4168rlq0 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:14:51 localhost python3.9[133104]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False 
scope=system no_block=False enabled=None force=None masked=None Nov 23 04:14:51 localhost systemd[1]: Reloading Network Manager... Nov 23 04:14:51 localhost NetworkManager[5966]: [1763889291.4374] audit: op="reload" arg="0" pid=133108 uid=0 result="success" Nov 23 04:14:51 localhost NetworkManager[5966]: [1763889291.4389] config: signal: SIGHUP (no changes from disk) Nov 23 04:14:51 localhost systemd[1]: Reloaded Network Manager. Nov 23 04:14:51 localhost systemd-logind[760]: Session 40 logged out. Waiting for processes to exit. Nov 23 04:14:51 localhost systemd[1]: session-40.scope: Deactivated successfully. Nov 23 04:14:51 localhost systemd[1]: session-40.scope: Consumed 36.839s CPU time. Nov 23 04:14:51 localhost systemd-logind[760]: Removed session 40. Nov 23 04:14:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47419 DF PROTO=TCP SPT=35094 DPT=9105 SEQ=3917984152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6C6A0F0000000001030307) Nov 23 04:14:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47793 DF PROTO=TCP SPT=60622 DPT=9102 SEQ=3089428215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6C7B8F0000000001030307) Nov 23 04:14:57 localhost sshd[133123]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:14:57 localhost systemd-logind[760]: New session 41 of user zuul. Nov 23 04:14:57 localhost systemd[1]: Started Session 41 of User zuul. Nov 23 04:14:58 localhost python3.9[133216]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:15:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47828 DF PROTO=TCP SPT=59568 DPT=9100 SEQ=2437336711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6C8A2E0000000001030307) Nov 23 04:15:00 localhost python3.9[133310]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:15:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47829 DF PROTO=TCP SPT=59568 DPT=9100 SEQ=2437336711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6C8E500000000001030307) Nov 23 04:15:03 localhost python3.9[133455]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:15:04 localhost systemd[1]: session-41.scope: Deactivated successfully. Nov 23 04:15:04 localhost systemd[1]: session-41.scope: Consumed 2.266s CPU time. Nov 23 04:15:04 localhost systemd-logind[760]: Session 41 logged out. Waiting for processes to exit. Nov 23 04:15:04 localhost systemd-logind[760]: Removed session 41. 
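The NetworkManager reload logged above picks up the three ini_file edits made to /etc/NetworkManager/NetworkManager.conf a few entries earlier, which leave the [main] section with no-auto-default=*, dns=none and rc-manager=unmanaged, so NetworkManager neither auto-creates default profiles nor rewrites resolv.conf. A sketch of those edits, reconstructed from the logged parameters and folded into a loop for brevity (the play actually runs three separate tasks):

    - name: Pin NetworkManager.conf [main] settings
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: "{{ item.option }}"
        value: "{{ item.value }}"
        no_extra_spaces: true
        mode: "0644"
        backup: true
      loop:
        - { option: no-auto-default, value: "*" }
        - { option: dns, value: "none" }
        - { option: rc-manager, value: "unmanaged" }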
Nov 23 04:15:04 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=116 TOS=0x08 PREC=0x20 TTL=54 ID=52894 DF PROTO=TCP SPT=443 DPT=60990 SEQ=1088543458 ACK=1299054031 WINDOW=131 RES=0x00 ACK URGP=0 OPT (0101080AF34AF8F985D26EB3) Nov 23 04:15:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47831 DF PROTO=TCP SPT=59568 DPT=9100 SEQ=2437336711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6CA60F0000000001030307) Nov 23 04:15:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13662 DF PROTO=TCP SPT=45880 DPT=9105 SEQ=2072075151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6CAFCF0000000001030307) Nov 23 04:15:10 localhost sshd[133472]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:15:10 localhost systemd-logind[760]: New session 42 of user zuul. Nov 23 04:15:10 localhost systemd[1]: Started Session 42 of User zuul. Nov 23 04:15:11 localhost python3.9[133565]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:15:12 localhost python3.9[133660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:15:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13663 DF PROTO=TCP SPT=45880 DPT=9105 SEQ=2072075151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6CBF8F0000000001030307) Nov 23 04:15:14 localhost python3.9[133756]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:15:15 localhost python3.9[133810]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:15:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39793 DF PROTO=TCP SPT=56538 DPT=9882 SEQ=2024941223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6CCC330000000001030307) Nov 23 04:15:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19907 DF PROTO=TCP SPT=39598 DPT=9882 SEQ=810005762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6CD4100000000001030307) Nov 23 04:15:19 localhost python3.9[134031]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] 
gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:15:21 localhost python3.9[134178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:15:22 localhost python3.9[134270]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:15:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13664 DF PROTO=TCP SPT=45880 DPT=9105 SEQ=2072075151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6CE00F0000000001030307) Nov 23 04:15:23 localhost python3.9[134375]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:15:23 localhost python3.9[134423]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:15:24 localhost python3.9[134515]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:15:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50242 DF PROTO=TCP SPT=35682 DPT=9101 SEQ=1889838269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6CEA0F0000000001030307) Nov 23 04:15:24 localhost python3.9[134563]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:15:25 localhost python3.9[134655]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None 
serole=None selevel=None attributes=None Nov 23 04:15:26 localhost python3.9[134747]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 23 04:15:27 localhost python3.9[134839]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 23 04:15:28 localhost python3.9[134931]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 23 04:15:28 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=100 TOS=0x08 PREC=0x20 TTL=54 ID=26451 DF PROTO=TCP SPT=443 DPT=42016 SEQ=4082359749 ACK=1760133472 WINDOW=131 RES=0x00 ACK URGP=0 OPT (0101080AF34B58F585CF71E4) Nov 23 04:15:29 localhost python3.9[135023]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:15:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14914 DF PROTO=TCP SPT=35676 DPT=9100 SEQ=3701432180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D03500000000001030307) Nov 23 04:15:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24718 DF PROTO=TCP SPT=42794 DPT=9100 SEQ=3244090945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D10100000000001030307) Nov 23 04:15:34 localhost python3.9[135117]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:15:35 localhost python3.9[135211]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False 
checksum_algorithm=sha1 Nov 23 04:15:36 localhost python3.9[135303]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:15:36 localhost python3.9[135395]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:15:36 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=208.81.1.244 DST=38.102.83.248 LEN=100 TOS=0x08 PREC=0x20 TTL=54 ID=40233 DF PROTO=TCP SPT=443 DPT=60996 SEQ=1799758573 ACK=2397997271 WINDOW=131 RES=0x00 ACK URGP=0 OPT (0101080AF34B78F585D27EDB) Nov 23 04:15:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28719 DF PROTO=TCP SPT=39984 DPT=9105 SEQ=3175184306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D25120000000001030307) Nov 23 04:15:39 localhost python3.9[135488]: ansible-service_facts Invoked Nov 23 04:15:39 localhost network[135505]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:15:39 localhost network[135506]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:15:39 localhost network[135507]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:15:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
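The community.general.ini_file calls a few entries above pin the podman runtime defaults in /etc/containers/containers.conf: pids_limit=4096 under [containers], events_logger="journald" and runtime="crun" under [engine], and network_backend="netavark" under [network]. Reconstructed as a single looped task (the log shows four separate invocations; the loop form is an editorial simplification):

    - name: Pin container engine defaults in containers.conf
      community.general.ini_file:
        path: /etc/containers/containers.conf
        create: true
        owner: root
        group: root
        mode: "0644"
        setype: etc_t
        section: "{{ item.section }}"
        option: "{{ item.option }}"
        value: "{{ item.value }}"
      loop:
        - { section: containers, option: pids_limit, value: "4096" }
        - { section: engine, option: events_logger, value: '"journald"' }
        - { section: engine, option: runtime, value: '"crun"' }
        - { section: network, option: network_backend, value: '"netavark"' }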
Nov 23 04:15:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28720 DF PROTO=TCP SPT=39984 DPT=9105 SEQ=3175184306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D34CF0000000001030307) Nov 23 04:15:46 localhost python3.9[135829]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:15:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63260 DF PROTO=TCP SPT=58644 DPT=9882 SEQ=1764775946 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D41630000000001030307) Nov 23 04:15:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39798 DF PROTO=TCP SPT=56538 DPT=9882 SEQ=2024941223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D480F0000000001030307) Nov 23 04:15:51 localhost python3.9[135923]: ansible-package_facts Invoked with manager=['auto'] strategy=first Nov 23 04:15:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28721 DF PROTO=TCP SPT=39984 DPT=9105 SEQ=3175184306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D560F0000000001030307) Nov 23 04:15:54 localhost python3.9[136015]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:15:55 localhost python3.9[136090]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889353.263596-656-135291969991551/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:15:56 localhost python3.9[136184]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:15:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60338 DF PROTO=TCP SPT=44448 DPT=9102 SEQ=2218710972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D66100000000001030307) Nov 23 04:15:56 localhost python3.9[136259]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 
src=/home/zuul/.ansible/tmp/ansible-tmp-1763889355.846524-701-130508755425975/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:15:58 localhost python3.9[136353]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21977 DF PROTO=TCP SPT=34730 DPT=9100 SEQ=18653393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D748E0000000001030307) Nov 23 04:16:00 localhost python3.9[136447]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:16:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21978 DF PROTO=TCP SPT=34730 DPT=9100 SEQ=18653393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D788F0000000001030307) Nov 23 04:16:02 localhost python3.9[136501]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:16:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47834 DF PROTO=TCP SPT=59568 DPT=9100 SEQ=2437336711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D84100000000001030307) Nov 23 04:16:04 localhost python3.9[136595]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:16:06 localhost python3.9[136649]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:16:07 localhost chronyd[25679]: chronyd exiting Nov 23 04:16:07 localhost systemd[1]: Stopping NTP client/server... Nov 23 04:16:07 localhost systemd[1]: chronyd.service: Deactivated successfully. Nov 23 04:16:07 localhost systemd[1]: Stopped NTP client/server. Nov 23 04:16:07 localhost systemd[1]: Starting NTP client/server... Nov 23 04:16:07 localhost chronyd[136657]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Nov 23 04:16:07 localhost chronyd[136657]: Frequency -30.546 +/- 0.318 ppm read from /var/lib/chrony/drift Nov 23 04:16:07 localhost chronyd[136657]: Loaded seccomp filter (level 2) Nov 23 04:16:07 localhost systemd[1]: Started NTP client/server. 
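Time synchronisation is configured by templating /etc/chrony.conf and /etc/sysconfig/chronyd, forcing PEERNTP=no in /etc/sysconfig/network so DHCP-supplied NTP servers are ignored, and then enabling and restarting chronyd; the restart is what produces the "chronyd exiting" / "chronyd version 4.3 starting" pair above. The lineinfile and service steps, reconstructed from the logged parameters:

    - name: Ignore NTP servers provided by DHCP
      ansible.builtin.lineinfile:
        path: /etc/sysconfig/network
        regexp: '^PEERNTP='
        line: PEERNTP=no
        create: true
        mode: "0644"
        backup: true

    - name: Enable and start chronyd
      ansible.builtin.systemd:
        name: chronyd
        enabled: true
        state: started

    - name: Restart chronyd to pick up the new configuration
      ansible.builtin.systemd:
        name: chronyd
        state: restarted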
Nov 23 04:16:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21980 DF PROTO=TCP SPT=34730 DPT=9100 SEQ=18653393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D90500000000001030307) Nov 23 04:16:08 localhost systemd[1]: session-42.scope: Deactivated successfully. Nov 23 04:16:08 localhost systemd[1]: session-42.scope: Consumed 28.879s CPU time. Nov 23 04:16:08 localhost systemd-logind[760]: Session 42 logged out. Waiting for processes to exit. Nov 23 04:16:08 localhost systemd-logind[760]: Removed session 42. Nov 23 04:16:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43650 DF PROTO=TCP SPT=42836 DPT=9105 SEQ=2134785753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6D9A0F0000000001030307) Nov 23 04:16:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43651 DF PROTO=TCP SPT=42836 DPT=9105 SEQ=2134785753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6DA9CF0000000001030307) Nov 23 04:16:13 localhost sshd[136673]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:16:14 localhost systemd-logind[760]: New session 43 of user zuul. Nov 23 04:16:14 localhost systemd[1]: Started Session 43 of User zuul. Nov 23 04:16:15 localhost python3.9[136766]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:16:15 localhost sshd[136817]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:16:16 localhost python3.9[136864]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60583 DF PROTO=TCP SPT=55508 DPT=9882 SEQ=2238282825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6DB6930000000001030307) Nov 23 04:16:17 localhost python3.9[136969]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:18 localhost python3.9[137017]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.bj7ph2s9 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=63265 DF PROTO=TCP SPT=58644 DPT=9882 SEQ=1764775946 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6DBE0F0000000001030307) Nov 23 04:16:20 localhost python3.9[137185]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:20 localhost python3.9[137260]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889379.4931228-143-37553541270368/.source _original_basename=.ovqv2vv3 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43652 DF PROTO=TCP SPT=42836 DPT=9105 SEQ=2134785753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6DCA0F0000000001030307) Nov 23 04:16:22 localhost python3.9[137353]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:16:22 localhost auditd[726]: Audit daemon rotating log files Nov 23 04:16:22 localhost python3.9[137445]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:23 localhost python3.9[137518]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889382.4097533-215-141940592679453/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:16:24 localhost python3.9[137610]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:24 localhost python3.9[137683]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889383.7850974-215-277883918641861/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:16:25 localhost python3.9[137775]: 
ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:26 localhost python3.9[137867]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27102 DF PROTO=TCP SPT=55978 DPT=9102 SEQ=2549246086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6DDB4F0000000001030307) Nov 23 04:16:26 localhost python3.9[137940]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889385.5593686-326-267136643031114/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:27 localhost python3.9[138032]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:27 localhost python3.9[138105]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889386.829633-371-210528972949222/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:29 localhost python3.9[138197]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:16:29 localhost systemd[1]: Reloading. Nov 23 04:16:29 localhost systemd-rc-local-generator[138220]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:16:29 localhost systemd-sysv-generator[138226]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:16:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
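The edpm-container-shutdown unit and its 91-edpm-container-shutdown.preset are installed and then enabled and started with a daemon reload, which triggers the systemd "Reloading." cycle that follows (the preset directory's logged mode=420 is just the decimal rendering of 0644). A reconstruction of the logged systemd step; the unit and preset contents are not shown in the log, and a preset of this kind normally holds little more than an "enable edpm-container-shutdown.service" line, which is an assumption here:

    - name: Enable and start the EDPM container shutdown unit
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        enabled: true
        state: started
        daemon_reload: true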
Nov 23 04:16:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6899 DF PROTO=TCP SPT=42492 DPT=9100 SEQ=2138928383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6DE9BE0000000001030307) Nov 23 04:16:30 localhost systemd[1]: Reloading. Nov 23 04:16:30 localhost systemd-rc-local-generator[138260]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:16:30 localhost systemd-sysv-generator[138265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:16:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:16:30 localhost systemd[1]: Starting EDPM Container Shutdown... Nov 23 04:16:30 localhost systemd[1]: Finished EDPM Container Shutdown. Nov 23 04:16:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6900 DF PROTO=TCP SPT=42492 DPT=9100 SEQ=2138928383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6DEDCF0000000001030307) Nov 23 04:16:31 localhost python3.9[138366]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:32 localhost python3.9[138439]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889390.9364996-440-6458362317805/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:33 localhost python3.9[138531]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:33 localhost python3.9[138604]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889392.986061-485-254124177812796/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14919 DF PROTO=TCP SPT=35676 DPT=9100 SEQ=3701432180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6DFA0F0000000001030307) Nov 23 04:16:34 localhost 
python3.9[138696]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:16:34 localhost systemd[1]: Reloading. Nov 23 04:16:34 localhost systemd-rc-local-generator[138719]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:16:34 localhost systemd-sysv-generator[138722]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:16:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:16:35 localhost systemd[1]: Starting Create netns directory... Nov 23 04:16:35 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 23 04:16:35 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 23 04:16:35 localhost systemd[1]: Finished Create netns directory. Nov 23 04:16:36 localhost python3.9[138829]: ansible-ansible.builtin.service_facts Invoked Nov 23 04:16:36 localhost network[138846]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:16:36 localhost network[138847]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:16:36 localhost network[138848]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:16:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6902 DF PROTO=TCP SPT=42492 DPT=9100 SEQ=2138928383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E058F0000000001030307) Nov 23 04:16:37 localhost sshd[138866]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:16:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
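netns-placeholder.service is enabled and started the same way; the "Starting Create netns directory" and "run-netns-placeholder.mount: Deactivated successfully" lines suggest a oneshot unit that prepares a mount under /run/netns for container network namespaces, though the unit file itself is not in the log. A hypothetical follow-up check (not present in the log) could confirm the directory it is assumed to create:

    - name: Verify the netns directory prepared by netns-placeholder   # hypothetical check
      ansible.builtin.stat:
        path: /run/netns            # assumed target, inferred from the unit and mount names
      register: netns_dir

    - name: Fail early if the placeholder did not create it
      ansible.builtin.assert:
        that:
          - netns_dir.stat.exists
          - netns_dir.stat.isdir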
Nov 23 04:16:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10188 DF PROTO=TCP SPT=56340 DPT=9105 SEQ=1221546761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E0F4F0000000001030307) Nov 23 04:16:41 localhost python3.9[139052]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:42 localhost python3.9[139127]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889401.3527358-608-112396259851126/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10189 DF PROTO=TCP SPT=56340 DPT=9105 SEQ=1221546761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E1F0F0000000001030307) Nov 23 04:16:44 localhost python3.9[139220]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:16:44 localhost systemd[1]: Reloading OpenSSH server daemon... Nov 23 04:16:44 localhost systemd[1]: Reloaded OpenSSH server daemon. 
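sshd_config is replaced from a rendered template and validated with sshd -T before being installed, so a syntactically broken file can never land in /etc/ssh/sshd_config; the daemon is then reloaded rather than restarted, keeping existing sessions alive. A reconstruction of those two steps (the template name comes from the logged _original_basename; its location is assumed):

    - name: Deploy sshd_config, validating the candidate file first
      ansible.builtin.template:
        src: sshd_config_block.j2
        dest: /etc/ssh/sshd_config
        mode: "0600"
        validate: /usr/sbin/sshd -T -f %s

    - name: Reload sshd to apply the new configuration
      ansible.builtin.systemd:
        name: sshd
        state: reloaded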
Nov 23 04:16:44 localhost sshd[119331]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:16:44 localhost python3.9[139316]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:45 localhost python3.9[139408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:46 localhost python3.9[139481]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889405.1597302-701-84268030909343/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26008 DF PROTO=TCP SPT=37740 DPT=9882 SEQ=4122547606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E2BC30000000001030307) Nov 23 04:16:47 localhost python3.9[139573]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Nov 23 04:16:47 localhost systemd[1]: Starting Time & Date Service... Nov 23 04:16:47 localhost systemd[1]: Started Time & Date Service. 
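From here the play assembles the host firewall: per-service snippets (sshd-networks.yaml just above, plus the edpm-nftables base and user-rules files that follow) are dropped into /var/lib/edpm-config/firewall, rendered into /etc/nftables/edpm-*.nft files, and finally checked with a concatenated nft -c run before anything is loaded. The validation step further below, reconstructed from the logged shell command:

    - name: Check the assembled nftables files without applying them
      ansible.builtin.shell: >
        set -o pipefail;
        cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft
        /etc/nftables/edpm-jumps.nft | nft -c -f -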
Nov 23 04:16:48 localhost python3.9[139669]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=774 DF PROTO=TCP SPT=36092 DPT=9102 SEQ=3140622810 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E34A40000000001030307) Nov 23 04:16:50 localhost python3.9[139761]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:50 localhost python3.9[139834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889409.6935623-806-59821960742590/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:51 localhost python3.9[139926]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:51 localhost python3.9[139999]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889410.9075494-851-59101097432786/.source.yaml _original_basename=.qm55znfd follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41902 DF PROTO=TCP SPT=47316 DPT=9101 SEQ=692800690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E40100000000001030307) Nov 23 04:16:52 localhost python3.9[140091]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:53 localhost python3.9[140166]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889412.07435-896-178464094476697/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:53 localhost python3.9[140258]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:16:54 localhost python3.9[140351]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:16:55 localhost python3[140444]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Nov 23 04:16:56 localhost python3.9[140536]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=777 DF PROTO=TCP SPT=36092 DPT=9102 SEQ=3140622810 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E50500000000001030307) Nov 23 04:16:56 localhost python3.9[140609]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889415.476911-1013-104131221644934/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:57 localhost python3.9[140701]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:57 localhost python3.9[140774]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889416.734856-1058-209642602791537/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:58 localhost python3.9[140866]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:16:58 localhost python3.9[140939]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889417.9169075-1103-138059713588983/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:16:59 
localhost python3.9[141031]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:17:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61207 DF PROTO=TCP SPT=58372 DPT=9100 SEQ=351736223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E5EEE0000000001030307) Nov 23 04:17:01 localhost python3.9[141104]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889419.136201-1148-246251275520065/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:17:01 localhost python3.9[141196]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:17:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6904 DF PROTO=TCP SPT=42492 DPT=9100 SEQ=2138928383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E66100000000001030307) Nov 23 04:17:02 localhost python3.9[141269]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889421.3948503-1193-9183326175101/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:17:03 localhost python3.9[141361]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:17:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21983 DF PROTO=TCP SPT=34730 DPT=9100 SEQ=18653393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E6E0F0000000001030307) Nov 23 04:17:04 localhost python3.9[141453]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:17:05 localhost python3.9[141548]: 
ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:17:06 localhost python3.9[141641]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:17:06 localhost python3.9[141733]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:17:07 localhost python3.9[141825]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Nov 23 04:17:08 localhost python3.9[141918]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Nov 23 04:17:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10191 DF PROTO=TCP SPT=56340 DPT=9105 SEQ=1221546761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E800F0000000001030307) Nov 23 04:17:09 localhost systemd[1]: session-43.scope: Deactivated successfully. Nov 23 04:17:09 localhost systemd[1]: session-43.scope: Consumed 28.710s CPU time. Nov 23 04:17:09 localhost systemd-logind[760]: Session 43 logged out. Waiting for processes to exit. Nov 23 04:17:09 localhost systemd-logind[760]: Removed session 43. Nov 23 04:17:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43654 DF PROTO=TCP SPT=42836 DPT=9105 SEQ=2134785753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6E880F0000000001030307) Nov 23 04:17:14 localhost sshd[141934]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:17:15 localhost systemd-logind[760]: New session 44 of user zuul. Nov 23 04:17:15 localhost systemd[1]: Started Session 44 of User zuul. Nov 23 04:17:16 localhost python3.9[142029]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. 
suffix= path=None Nov 23 04:17:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28776 DF PROTO=TCP SPT=58000 DPT=9882 SEQ=1843972675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6EA0F20000000001030307) Nov 23 04:17:17 localhost python3.9[142121]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:17:17 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Nov 23 04:17:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20183 DF PROTO=TCP SPT=51488 DPT=9102 SEQ=1473239717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6EA9D40000000001030307) Nov 23 04:17:19 localhost python3.9[142218]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts Nov 23 04:17:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23809 DF PROTO=TCP SPT=43990 DPT=9101 SEQ=4191565414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6EAE690000000001030307) Nov 23 04:17:20 localhost podman[142336]: 2025-11-23 09:17:20.555790969 +0000 UTC m=+0.095831352 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, RELEASE=main) Nov 23 04:17:20 localhost podman[142336]: 2025-11-23 09:17:20.682890498 +0000 UTC m=+0.222930851 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., 
architecture=x86_64, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7) Nov 23 04:17:21 localhost python3.9[142480]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.46wkys2u follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:17:21 localhost python3.9[142608]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.46wkys2u mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889440.6536453-189-34539282101008/.source.46wkys2u _original_basename=.cdpr0gz8 follow=False checksum=86d7095ff15f9038e30789829322247c323137f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:17:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27105 DF PROTO=TCP SPT=55978 DPT=9102 SEQ=2549246086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6EBA0F0000000001030307) Nov 23 04:17:24 localhost python3.9[142723]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:17:26 localhost python3.9[142815]: ansible-ansible.builtin.blockinfile Invoked with block=np0005532581.localdomain,192.168.122.103,np0005532581* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRibSMIP5+E9lJWuaKDEuCaJoGhGPTqff+o8SP2Twk+NhPOa5FC7WQhHPLXVhKAtlCX60ckYE53Q/H/RVRZ55JdWQLSdY/1tQCD6c0Ry6N+UD+mxo9iN9cHk6vd6J5kJu+v/gBEmFY1A9pjzsD1CTR8gZJHZFqbUTzXrKkoUjK3Kqa8UtvzyhgYQtYIaUwaf1z7CMNQ3A4EaGVKyRsVwb11jlaT9fjB43E3tp9p5EG6PPJEGux/Xea6iHnhSwZHpkD/ylneDOkBbGvYKhL33bpXMcbuHy32jAFr+2Q07sKvgy/b5/f/nTgNCyxEIpoXUbEhX+Vlh+gycU7KJw6FRyR3dQFjooV97NQ/oov2VP9DnTObziZA8lhaJ20ChTfDVUyvFCFi3dKgBUPCeNWCGI69eNHu3dQcwCNJ3kANqhHdkYpBd00PVBritJfxfzH1DCLo0I9CSi1buWYhein9VHZWtzePv/+ucWERRIo+J04QPkV+6P6vgOTRl5U75RctJU=#012np0005532581.localdomain,192.168.122.103,np0005532581* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG7auGqCubvIeT+Z8+DFgAyuqWDpDfRlZtndf8hFQOt7#012np0005532581.localdomain,192.168.122.103,np0005532581* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKGIbd1xFE29cgvdOZ+Uh6ipkdk4QfLnBLiJP+rzeHVtOUTgjR98CvJhrHQdGAxaTty6xRV53oj5EhBdMCJFc5I=#012np0005532584.localdomain,192.168.122.106,np0005532584* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3OrbPXlomvlluk5pGQwXwJu+cR1IMLHg5EnGcI5epB1SB6q/EzlEo5+bOYmmvILsoesUzBIBq21mRhn1Wi2yjlys0pArFDqiLkUBvTW9ro6MKci9Smc12m7AkLus6UO6h3pzqcOdRZQ3KOQDL/83yYJVBCJyqlISXWzzHJpGRVnZHeT4CgKZ1nG5UEvOrtPXRAVWkz3v5TghJrYXvWaPQPmWcEy1rfhCjkCfQY++JB/Dlgammmd1+ZldadeXQi1b2X02a6GFyW0pUMFLjAP7Wr+KcRa5FIPmGwsPuc1NhveAH6zyLrabrh7jPR5O0tBjz9KcNYXbQmJetGt9ZWzFsl0qzXrvI38q5RlGptbqg0iSez61VBAUtnfs33hnYc3dvzJKXReR76PoU3yu/tLrhdK6szqIVsMdw2LGEro7l3KKMKXHSpi8n77fH8ICiU3F5Oif+nvS/e7xr4LccSEnFEHA9PdNxOWxJYLcxTQCt3BkNFrWw4oB1LiDsn98HlS8=#012np0005532584.localdomain,192.168.122.106,np0005532584* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIACxkoVt3BLqmT5JuJibOj2srWJ99rHYxhxT/gCbLdIM#012np0005532584.localdomain,192.168.122.106,np0005532584* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJi5N6oeJPjl3EunvvHi6baJIH9ibE30q8MR/UiZkuoStWh4NAj+cNFWO47723JbHkDzCF1p+3RJ1FLROkiZ4W0=#012np0005532583.localdomain,192.168.122.105,np0005532583* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkB1Cq8AQaEBYTlv5Hzs024jg//D6wieNnvsI5WcYj7wckm9vKTJQfUD6yZBMmyPw6+vVzsM16bj2hagkDR5wkO7uSIaMqWrcoQ1h9HkJQLK8QB0iuzUvQzdr22kUgkLII8thNHK4VxF4VhAKNmzqCofZ4ZSaLUMwauFCFUjx1VJISEZdgYRZ4+++wAN5bdK+WrwSOAHJYJWQX2pRRsPiunSdY1BOUKB3sp7IBcQ3MDJgnKlkR7tiGSYB2W8JsLvIsIb0I2EaqmPUTIzKUuxSJnWEls/WyDT9MNkjhobVeAyFZ5TEik4OvobUhVGJ8CsU7O101KQNQ3IywPM+V0UpjA1yK49z5Qs0LjApmqORsTcjOojYaKGr9n64dVjXdFOMwajB9UmMEFtlIngm6kx7mJQGXqYxVAscW34JY832iKOEzQWrUSdo6mVJ7TXhYYcbdFp+G/128SfhNrbHwKinHeE9Nqu48BR7bmRZXO7ef+UMY1dG3AIvFt4JwFvLihZc=#012np0005532583.localdomain,192.168.122.105,np0005532583* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH4H0HJaVZZzbQbH92x/ePbqiic7VLTV0Kle7XvCiMNK#012np0005532583.localdomain,192.168.122.105,np0005532583* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEb0/1S3v0DC07ZQnLEp9URjtv9BKwGlPRsb47Ua8w+WgbOM0JmtKaPebzMcBow+04/+k7+HcCDBj6p5Yd4q3M4=#012np0005532585.localdomain,192.168.122.107,np0005532585* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCU6ocW8HWtJJyWPSFUqcN5z70XYnNrE5KeWh/VJ4bDkpVePpxxcdD8r8cKL121q0MKPRgia3jLqnKz+o4MH3AqTAWCZamBc1+ePq9OvZDenK69byea8TM176uYzfePjNlud4LSZ6lfkgneO5jeNE6/RcHgBc8Me+2mlzpavioA814r6Ci6hFaEIOS1Zd2b/yKzI4QRl6xg/aJKvlIe9w3G3BvKOG5pixPx2ng4wYc0OMtJb9ItJgZLY92GGuvVRwn9e0D4lab84+x/Nn3XatQdqU69ev7da/bQCUeBivyEZo03olh56YxCKvNfG3ZYwwhMTn9Hg/EdnwrGHYHj0ZgfSR1+Dzvnk0WW/MRs0276Ojj5O0hhnlaAh5n97W6fgHldGKvdEafYeD602C1Zkd+ISqF13W56MWhtUhiUsdUHShnpM/EBOITg6mTDFP1i/qMS0PjRaCzBpdqpJIoKzQpsi4Z3QTHTZ7uK/lqOEaE/wqXHuYlMKcTuOuX33gIp28k=#012np0005532585.localdomain,192.168.122.107,np0005532585* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILpJc3/w4q1RFXE8+NzyjCJ0R7ySeHFy75KPVpy/YiB/#012np0005532585.localdomain,192.168.122.107,np0005532585* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLtz4IM2aQZoQ7CuTS4jfYDH5LZPyutyvm+ZyFuW7jdHvK3umSrNYFwsqiHwWHvM9peuWot0GAUC8rCc1UO+ZWk=#012np0005532586.localdomain,192.168.122.108,np0005532586* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD6U4JggC29IKqxQ7GjhK23AehQb1S2zLryOxLwLEs9rP0qOZpJ9wR1VsBNLXDCmoRVTsH2+3V00hmkvlanKUuzgmLO61hdur+5NQD0xHnY7lOLpOoyR7hJiMuHj/nRgBLWY2OB8Gim121dgfuc2zRF92igDYe65Uf0et83vWlgRmc7KlziaJ91iVcBUmhGYf3Ij7QxfhQH5TTnGoQizdiBpuP+yVuU2AepbvQ8ZFvzioCwzWAVu/xfdRFp9QyLT4JP1jM6dadTjD5RUAjRL6qR1tLXVq/rvqtXSL8ruBSYm3NCOys9RtdrNolZ7frd+zmvF+VzMNLtlRxiuy1ReR+ZO3felB+4TwfEfLZ+DqE1s3+ksCQH/sVCrxzFsRz5lamWG3p78ZBWTiQ/7WdJS1dQOHz+pKNSSW/NYMIqitxsCsEWPJLq/EWoHVxvjREucCb5YvWHPKOv5RLlbm5lSHFLuFVV8O3AAzD/3JsjTbKGOjJhmtxPCgEy7RPqtIUX90s=#012np0005532586.localdomain,192.168.122.108,np0005532586* ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIKzaUMbW2RXGluOr1nHypPwK+dIm5zaIFHsNA8PEtRqK#012np0005532586.localdomain,192.168.122.108,np0005532586* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLLaE/jo8XH2dLl/mTc9NRhBP3x+ig/gy7tepiJNCqlj0Dgb5vfu6IYaFNrkyisiqhenCsUZQo/guhdX9Nisv9I=#012np0005532582.localdomain,192.168.122.104,np0005532582* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0v47OVdr7YS/5xSUmMc7u26O7OwPomkdDR6s8rrcencbx7seRSeU00QGeRQcJJ023bD3xk26W8iiJTRUDkYSy//cSfHODdDy+CNEfDUTkGzIjiApoLi2b+S4J6wcAldMsj02MZmx67vUHyM5Qwok+22XqopryL8BiGPJbnoUcZy773f5OKPPMNuj3Fyb7jd5mrC7awK4NniZHyHPYBQeBa234HL42fRjcOqCcxuauy5cbz9PeBv5/kg+nYc8cY5qCyLqNhzMVRUa/PcepMBcfThk17LtPGzCYS7IR2cGdUDP6Pe0QD34Hu6+mpwKwYx73v5uHcmy9CeZ8fK83/F84Lr6jxsiwoU2e+hUfzVRq8gnkjk6kuL86eSM2POSGgBYYgCb+Ma6lOkF1MA+rLAh0gAsUhBgVlz6HtaMoDvLOi/NrQeoQyNE1Pv4vPAndmGGc8A7JCtmCMk9VvMy0Ht4IOvtDJFfx1lg7NuMIKqePYTEk56p8wTUNM+BmdJEhFPU=#012np0005532582.localdomain,192.168.122.104,np0005532582* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEaLDeiqlvIGmYCK/pVle4dWQoWUl9JopG1HgV4OQwpm#012np0005532582.localdomain,192.168.122.104,np0005532582* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPG4t0LXPuGTxEFWkant9P4DDIM9mUsBdh3iJHN1QOZUHW9RJuWVAPGkYlb6jz2BktGBRNU2FJD+HyIE3L+OanQ=#012 create=True mode=0644 path=/tmp/ansible.46wkys2u state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:17:27 localhost python3.9[142907]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.46wkys2u' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:17:28 localhost python3.9[143001]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.46wkys2u state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:17:29 localhost systemd[1]: session-44.scope: Deactivated successfully. Nov 23 04:17:29 localhost systemd[1]: session-44.scope: Consumed 4.158s CPU time. Nov 23 04:17:29 localhost systemd-logind[760]: Session 44 logged out. Waiting for processes to exit. Nov 23 04:17:29 localhost systemd-logind[760]: Removed session 44. Nov 23 04:17:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63787 DF PROTO=TCP SPT=32828 DPT=9100 SEQ=3174737483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6ED41D0000000001030307) Nov 23 04:17:36 localhost sshd[143016]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:17:36 localhost systemd-logind[760]: New session 45 of user zuul. Nov 23 04:17:36 localhost systemd[1]: Started Session 45 of User zuul. 
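[annotation] The Session 44 entries above trace how the shared SSH known_hosts file is rebuilt: a temporary file is created, the gathered host keys are written into it with blockinfile, the result is copied over /etc/ssh/ssh_known_hosts, and the scratch file is removed. A minimal sketch of tasks that could produce this trace, assuming an illustrative known_hosts_block variable holding the key lines (the real role and variable names are not visible in the log):

  - name: Create a scratch file for the assembled known_hosts content
    ansible.builtin.tempfile:
      state: file
      prefix: ansible.
    register: known_hosts_tmp

  - name: Write the collected host keys into the scratch file
    ansible.builtin.blockinfile:
      path: "{{ known_hosts_tmp.path }}"
      create: true
      mode: "0644"
      block: "{{ known_hosts_block }}"   # illustrative variable: one key line per host

  - name: Overwrite the system-wide known_hosts with the assembled file
    ansible.builtin.shell: cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts

  - name: Remove the scratch file
    ansible.builtin.file:
      path: "{{ known_hosts_tmp.path }}"
      state: absent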
Nov 23 04:17:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43404 DF PROTO=TCP SPT=41516 DPT=9105 SEQ=3508708549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6EEDA70000000001030307) Nov 23 04:17:37 localhost python3.9[143109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:17:38 localhost python3.9[143205]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Nov 23 04:17:40 localhost python3.9[143299]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:17:41 localhost python3.9[143392]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:17:42 localhost python3.9[143485]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:17:42 localhost python3.9[143579]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:17:43 localhost python3.9[143674]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:17:44 localhost systemd-logind[760]: Session 45 logged out. Waiting for processes to exit. Nov 23 04:17:44 localhost systemd[1]: session-45.scope: Deactivated successfully. Nov 23 04:17:44 localhost systemd[1]: session-45.scope: Consumed 3.792s CPU time. Nov 23 04:17:44 localhost systemd-logind[760]: Removed session 45. 
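[annotation] Session 45 above applies the nftables configuration staged earlier: the chains file is loaded, the edpm-rules.nft.changed marker (touched when the rule file was rewritten) is checked, the flush/rules/update-jumps files are piped into nft, and the marker is removed. A minimal sketch of that apply step, assuming the marker file is what drives a conditional reload (the registered variable name is illustrative):

  - name: Load the EDPM chain definitions
    ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

  - name: Check whether the rule files were rewritten earlier in the run
    ansible.builtin.stat:
      path: /etc/nftables/edpm-rules.nft.changed
    register: edpm_rules_changed

  - name: Flush and reload the EDPM rules only when the marker is present
    ansible.builtin.shell: |
      set -o pipefail
      cat /etc/nftables/edpm-flushes.nft \
          /etc/nftables/edpm-rules.nft \
          /etc/nftables/edpm-update-jumps.nft | nft -f -
    when: edpm_rules_changed.stat.exists

  - name: Drop the marker so an unchanged rule set is not reloaded next time
    ansible.builtin.file:
      path: /etc/nftables/edpm-rules.nft.changed
      state: absent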
Nov 23 04:17:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19412 DF PROTO=TCP SPT=55978 DPT=9882 SEQ=3160368588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F16230000000001030307) Nov 23 04:17:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19413 DF PROTO=TCP SPT=55978 DPT=9882 SEQ=3160368588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F1A0F0000000001030307) Nov 23 04:17:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29614 DF PROTO=TCP SPT=32902 DPT=9102 SEQ=2700126204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F1F040000000001030307) Nov 23 04:17:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19414 DF PROTO=TCP SPT=55978 DPT=9882 SEQ=3160368588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F22100000000001030307) Nov 23 04:17:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29615 DF PROTO=TCP SPT=32902 DPT=9102 SEQ=2700126204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F230F0000000001030307) Nov 23 04:17:50 localhost sshd[143689]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:17:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55674 DF PROTO=TCP SPT=33696 DPT=9101 SEQ=1362554447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F23990000000001030307) Nov 23 04:17:50 localhost systemd-logind[760]: New session 46 of user zuul. Nov 23 04:17:50 localhost systemd[1]: Started Session 46 of User zuul. 
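[annotation] The recurring kernel DROPPING entries show TCP SYNs from 192.168.122.10 to ports 9100, 9101, 9102, 9105 and 9882 being logged and refused by the rule set installed above; these are typical metrics-exporter ports, so the scrapes are being dropped while the firewall is rebuilt. In the workflow visible in this log, permitting them would mean adding another snippet under /var/lib/edpm-config/firewall/ in the same format as the ovn.yaml snippet written further down. A hypothetical example for one port (the file name, rule number, and whether a port list is accepted are assumptions, so one entry per port is the conservative form):

  # hypothetical /var/lib/edpm-config/firewall/node-exporter.yaml
  - rule_name: 130 allow node exporter scrape
    rule:
      proto: tcp
      dport: 9100

Equivalent entries would be needed for 9101, 9102, 9105 and 9882.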
Nov 23 04:17:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55675 DF PROTO=TCP SPT=33696 DPT=9101 SEQ=1362554447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F27900000000001030307) Nov 23 04:17:51 localhost python3.9[143782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:17:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29616 DF PROTO=TCP SPT=32902 DPT=9102 SEQ=2700126204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F2B0F0000000001030307) Nov 23 04:17:52 localhost python3.9[143878]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:17:53 localhost python3.9[143932]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 23 04:17:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29617 DF PROTO=TCP SPT=32902 DPT=9102 SEQ=2700126204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F3ACF0000000001030307) Nov 23 04:17:58 localhost python3.9[144024]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:18:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46552 DF PROTO=TCP SPT=37344 DPT=9100 SEQ=4073576727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F494E0000000001030307) Nov 23 04:18:00 localhost python3.9[144117]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46553 DF PROTO=TCP SPT=37344 DPT=9100 SEQ=4073576727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F4D4F0000000001030307) Nov 23 04:18:01 localhost python3.9[144209]: ansible-ansible.builtin.file Invoked with mode=0600 
path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:02 localhost python3.9[144301]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:02 localhost python3.9[144391]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 23 04:18:03 localhost python3.9[144481]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:18:04 localhost python3.9[144573]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:18:04 localhost systemd[1]: session-46.scope: Deactivated successfully. Nov 23 04:18:04 localhost systemd[1]: session-46.scope: Consumed 8.535s CPU time. Nov 23 04:18:04 localhost systemd-logind[760]: Session 46 logged out. Waiting for processes to exit. Nov 23 04:18:04 localhost systemd-logind[760]: Removed session 46. Nov 23 04:18:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29618 DF PROTO=TCP SPT=32902 DPT=9102 SEQ=2700126204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F5C0F0000000001030307) Nov 23 04:18:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46555 DF PROTO=TCP SPT=37344 DPT=9100 SEQ=4073576727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F650F0000000001030307) Nov 23 04:18:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32044 DF PROTO=TCP SPT=47508 DPT=9105 SEQ=3919263591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F6ED00000000001030307) Nov 23 04:18:10 localhost sshd[144589]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:18:10 localhost systemd-logind[760]: New session 47 of user zuul. 
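[annotation] The Session 46 entries above install yum-utils, run needs-restarting -r, and record its verdict under /var/lib/openstack/reboot_required/ (the lineinfile content is the tool's own output, noting that systemd was updated since boot-up). A minimal sketch of that check, assuming the non-zero exit code of needs-restarting -r is what triggers the flag file (variable names are illustrative):

  - name: Install yum-utils so needs-restarting is available
    ansible.builtin.dnf:
      name: yum-utils
      state: present

  - name: Ask whether core libraries or services changed since boot-up
    ansible.builtin.command: needs-restarting -r
    register: needs_restarting
    failed_when: false   # exit code 1 just means "reboot required"

  - name: Keep the flag files where later plays look for them
    ansible.builtin.file:
      path: /var/lib/openstack/reboot_required/
      state: directory
      mode: "0755"

  - name: Record the tool's explanation when a reboot is needed
    ansible.builtin.lineinfile:
      path: /var/lib/openstack/reboot_required/needs_restarting
      create: true
      mode: "0600"
      line: "{{ needs_restarting.stdout }}"
    when: needs_restarting.rc != 0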
Nov 23 04:18:10 localhost systemd[1]: Started Session 47 of User zuul. Nov 23 04:18:12 localhost python3.9[144682]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:18:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32045 DF PROTO=TCP SPT=47508 DPT=9105 SEQ=3919263591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F7E900000000001030307) Nov 23 04:18:14 localhost python3.9[144778]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:18:15 localhost python3.9[144870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:18:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46556 DF PROTO=TCP SPT=37344 DPT=9100 SEQ=4073576727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F860F0000000001030307) Nov 23 04:18:16 localhost chronyd[136657]: Selected source 216.128.178.20 (pool.ntp.org) Nov 23 04:18:16 localhost python3.9[144943]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889495.0343232-181-207140642006907/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:16 localhost python3.9[145035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:18:17 localhost python3.9[145127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:18:18 localhost python3.9[145200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889497.0958455-252-39332356340114/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:18 localhost python3.9[145292]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:18:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19417 DF PROTO=TCP SPT=55978 DPT=9882 SEQ=3160368588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F920F0000000001030307) Nov 23 04:18:19 localhost python3.9[145384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:18:19 localhost python3.9[145457]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889498.9020867-324-83138574022510/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:20 localhost python3.9[145549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:18:21 localhost python3.9[145641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:18:21 localhost python3.9[145714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889500.6957831-397-161029034439220/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32046 DF PROTO=TCP SPT=47508 DPT=9105 SEQ=3919263591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6F9E100000000001030307) Nov 23 04:18:22 localhost python3.9[145806]: ansible-ansible.builtin.file Invoked with 
group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:18:22 localhost python3.9[145928]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:18:23 localhost python3.9[146033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889502.5161235-467-124569551932519/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:24 localhost python3.9[146140]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:18:24 localhost python3.9[146232]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:18:25 localhost python3.9[146305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889504.4197733-538-240022043537199/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:26 localhost python3.9[146397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:18:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30244 DF PROTO=TCP SPT=43152 DPT=9102 SEQ=144294481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6FB00F0000000001030307) Nov 23 04:18:26 localhost python3.9[146489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:18:27 localhost python3.9[146562]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889506.28538-607-146035301277616/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:28 localhost python3.9[146654]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:18:28 localhost python3.9[146746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:18:29 localhost python3.9[146819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889508.1657617-679-189247269903204/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51995 DF PROTO=TCP SPT=53710 DPT=9100 SEQ=3884686519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6FBE7E0000000001030307) Nov 23 04:18:30 localhost systemd-logind[760]: Session 47 logged out. Waiting for processes to exit. Nov 23 04:18:30 localhost systemd[1]: session-47.scope: Deactivated successfully. Nov 23 04:18:30 localhost systemd[1]: session-47.scope: Consumed 11.442s CPU time. Nov 23 04:18:30 localhost systemd-logind[760]: Removed session 47. Nov 23 04:18:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51996 DF PROTO=TCP SPT=53710 DPT=9100 SEQ=3884686519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6FC28F0000000001030307) Nov 23 04:18:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30245 DF PROTO=TCP SPT=43152 DPT=9102 SEQ=144294481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6FD00F0000000001030307) Nov 23 04:18:36 localhost sshd[146834]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:18:36 localhost systemd-logind[760]: New session 48 of user zuul. 
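[annotation] Session 47 above repeats the same pair of tasks for telemetry, neutron-sriov, neutron-dhcp, nova, libvirt, ovn, bootstrap and neutron-metadata: create /var/lib/openstack/cacerts/<service> with the container_file_t SELinux type, then copy an identical tls-ca-bundle.pem (same checksum every time) into it. A minimal sketch using a loop, with an illustrative ca_services variable standing in for whatever list the real role iterates over:

  - name: Create a per-service CA directory readable from containers
    ansible.builtin.file:
      path: "/var/lib/openstack/cacerts/{{ item }}"
      state: directory
      owner: root
      group: root
      mode: "0755"
      setype: container_file_t
    loop: "{{ ca_services }}"   # e.g. [telemetry, neutron-sriov, neutron-dhcp, nova, libvirt, ovn, bootstrap, neutron-metadata]

  - name: Install the same CA bundle for every service
    ansible.builtin.copy:
      src: tls-ca-bundle.pem
      dest: "/var/lib/openstack/cacerts/{{ item }}/tls-ca-bundle.pem"
      owner: root
      group: root
      mode: "0644"
    loop: "{{ ca_services }}"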
Nov 23 04:18:36 localhost systemd[1]: Started Session 48 of User zuul. Nov 23 04:18:36 localhost python3.9[146929]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51998 DF PROTO=TCP SPT=53710 DPT=9100 SEQ=3884686519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6FDA4F0000000001030307) Nov 23 04:18:38 localhost python3.9[147021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:18:39 localhost python3.9[147094]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889518.2038229-62-200217994100753/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=5f137984986c8cf5df5aec7749430e0dc129d0db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20600 DF PROTO=TCP SPT=57164 DPT=9105 SEQ=2425619593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6FE50F0000000001030307) Nov 23 04:18:40 localhost python3.9[147186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:18:41 localhost python3.9[147259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889519.609238-62-175875158746833/.source.conf _original_basename=ceph.conf follow=False checksum=d6d906a745260c838693e085b1f329bd1daad564 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:18:41 localhost systemd[1]: session-48.scope: Deactivated successfully. Nov 23 04:18:41 localhost systemd[1]: session-48.scope: Consumed 2.225s CPU time. Nov 23 04:18:41 localhost systemd-logind[760]: Session 48 logged out. Waiting for processes to exit. Nov 23 04:18:41 localhost systemd-logind[760]: Removed session 48. 
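[annotation] Session 48 above stages the Ceph client configuration for the compute services: a config directory, the ceph.client.openstack.keyring installed with mode 0600, and ceph.conf with mode 0644. A minimal sketch of those three tasks (the source paths are illustrative; the log only shows the Ansible temporary staging paths):

  - name: Make room for the Ceph client configuration
    ansible.builtin.file:
      path: /var/lib/openstack/config/ceph
      state: directory
      mode: "0755"

  - name: Install the cephx keyring, readable by root only
    ansible.builtin.copy:
      src: ceph.client.openstack.keyring
      dest: /var/lib/openstack/config/ceph/ceph.client.openstack.keyring
      mode: "0600"

  - name: Install ceph.conf, which carries no secrets
    ansible.builtin.copy:
      src: ceph.conf
      dest: /var/lib/openstack/config/ceph/ceph.conf
      mode: "0644"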
Nov 23 04:18:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20601 DF PROTO=TCP SPT=57164 DPT=9105 SEQ=2425619593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B6FF4CF0000000001030307) Nov 23 04:18:47 localhost sshd[147274]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:18:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38872 DF PROTO=TCP SPT=56554 DPT=9882 SEQ=2619486655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7000830000000001030307) Nov 23 04:18:47 localhost systemd-logind[760]: New session 49 of user zuul. Nov 23 04:18:47 localhost systemd[1]: Started Session 49 of User zuul. Nov 23 04:18:48 localhost python3.9[147367]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:18:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65296 DF PROTO=TCP SPT=47678 DPT=9882 SEQ=3140594406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70080F0000000001030307) Nov 23 04:18:49 localhost python3.9[147463]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:18:49 localhost python3.9[147555]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:18:51 localhost python3.9[147645]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:18:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45834 DF PROTO=TCP SPT=49692 DPT=9101 SEQ=12117885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7014100000000001030307) Nov 23 04:18:52 localhost python3.9[147737]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Nov 23 04:18:53 localhost python3.9[147829]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:18:54 localhost python3.9[147883]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] 
enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:18:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5868 DF PROTO=TCP SPT=32934 DPT=9102 SEQ=3106431952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70250F0000000001030307) Nov 23 04:18:59 localhost python3.9[147977]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 23 04:19:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10760 DF PROTO=TCP SPT=53618 DPT=9100 SEQ=320434165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7033AD0000000001030307) Nov 23 04:19:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10761 DF PROTO=TCP SPT=53618 DPT=9100 SEQ=320434165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7037CF0000000001030307) Nov 23 04:19:01 localhost python3[148072]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present Nov 23 04:19:02 localhost python3.9[148164]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:02 localhost python3.9[148256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:03 localhost python3.9[148304]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 
04:19:04 localhost python3.9[148396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46558 DF PROTO=TCP SPT=37344 DPT=9100 SEQ=4073576727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7044100000000001030307) Nov 23 04:19:04 localhost python3.9[148444]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._8g2h2vc recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:06 localhost python3.9[148536]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:06 localhost python3.9[148584]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10763 DF PROTO=TCP SPT=53618 DPT=9100 SEQ=320434165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B704F8F0000000001030307) Nov 23 04:19:07 localhost python3.9[148676]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:19:08 localhost python3[148769]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Nov 23 04:19:09 localhost python3.9[148861]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22215 DF PROTO=TCP SPT=46686 DPT=9105 SEQ=3103607670 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70594F0000000001030307) Nov 23 04:19:10 localhost python3.9[148936]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889549.287477-431-189504716045843/.source.nft follow=False _original_basename=jump-chain.j2 
checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:11 localhost python3.9[149028]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:12 localhost python3.9[149103]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889550.6701849-476-224370927780234/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:12 localhost python3.9[149195]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:13 localhost python3.9[149270]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889552.36419-521-1740825937709/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22216 DF PROTO=TCP SPT=46686 DPT=9105 SEQ=3103607670 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70690F0000000001030307) Nov 23 04:19:14 localhost python3.9[149362]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:14 localhost python3.9[149437]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889553.5679166-566-50213762216276/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:15 localhost python3.9[149529]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:15 localhost python3.9[149604]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889554.8730717-611-142964976862098/.source.nft follow=False _original_basename=ruleset.j2 
checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59943 DF PROTO=TCP SPT=48412 DPT=9882 SEQ=2295952625 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7075B30000000001030307) Nov 23 04:19:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53769 DF PROTO=TCP SPT=54622 DPT=9102 SEQ=3480958286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B707E930000000001030307) Nov 23 04:19:19 localhost python3.9[149696]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:20 localhost python3.9[149788]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:19:21 localhost python3.9[149883]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22217 DF PROTO=TCP SPT=46686 DPT=9105 SEQ=3103607670 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B708A0F0000000001030307) Nov 23 04:19:22 localhost python3.9[149975]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:19:23 localhost python3.9[150068]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:19:23 localhost python3.9[150162]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft 
/etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:19:24 localhost python3.9[150287]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:25 localhost python3.9[150423]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:19:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53772 DF PROTO=TCP SPT=54622 DPT=9102 SEQ=3480958286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B709A4F0000000001030307) Nov 23 04:19:26 localhost python3.9[150516]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005532584.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:1d:b8:fa:41" external_ids:ovn-encap-ip=172.19.0.106 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:19:27 localhost ovs-vsctl[150517]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . 
external_ids:hostname=np0005532584.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:1d:b8:fa:41 external_ids:ovn-encap-ip=172.19.0.106 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch Nov 23 04:19:27 localhost python3.9[150609]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:19:28 localhost python3.9[150702]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:19:29 localhost python3.9[150796]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:19:30 localhost python3.9[150888]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29595 DF PROTO=TCP SPT=37014 DPT=9100 SEQ=1288570538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70A8DE0000000001030307) Nov 23 04:19:30 localhost python3.9[150936]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:19:31 localhost python3.9[151028]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29596 DF PROTO=TCP SPT=37014 DPT=9100 SEQ=1288570538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70ACCF0000000001030307) Nov 23 04:19:31 localhost python3.9[151076]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container 
_original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:19:32 localhost python3.9[151168]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:33 localhost python3.9[151260]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:33 localhost python3.9[151308]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52001 DF PROTO=TCP SPT=53710 DPT=9100 SEQ=3884686519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70B80F0000000001030307) Nov 23 04:19:34 localhost python3.9[151400]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:35 localhost python3.9[151448]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:36 localhost python3.9[151540]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:19:36 localhost systemd[1]: Reloading. Nov 23 04:19:36 localhost systemd-sysv-generator[151570]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:19:36 localhost systemd-rc-local-generator[151565]: /etc/rc.d/rc.local is not marked executable, skipping. 
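The records between 04:19:20 and 04:19:24 above show the edpm nftables fragments being validated and then applied in two passes. Condensed into a standalone shell sketch, using exactly the paths and ordering captured in the journal:

# validate the concatenated ruleset without applying it (04:19:20)
cat /etc/nftables/edpm-chains.nft \
    /etc/nftables/edpm-flushes.nft \
    /etc/nftables/edpm-rules.nft \
    /etc/nftables/edpm-update-jumps.nft \
    /etc/nftables/edpm-jumps.nft | nft -c -f -

# create the chains first (04:19:22), then flush and reload the rule set (04:19:23)
nft -f /etc/nftables/edpm-chains.nft
cat /etc/nftables/edpm-flushes.nft \
    /etc/nftables/edpm-rules.nft \
    /etc/nftables/edpm-update-jumps.nft | nft -f -

The blockinfile task at 04:19:21 additionally appends include lines for iptables.nft, edpm-chains.nft, edpm-rules.nft and edpm-jumps.nft to /etc/sysconfig/nftables.conf so the same ruleset is loaded again at boot.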
Nov 23 04:19:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:19:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29598 DF PROTO=TCP SPT=37014 DPT=9100 SEQ=1288570538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70C48F0000000001030307) Nov 23 04:19:37 localhost python3.9[151670]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:37 localhost python3.9[151718]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:38 localhost python3.9[151810]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:38 localhost python3.9[151858]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:39 localhost python3.9[151950]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:19:39 localhost systemd[1]: Reloading. Nov 23 04:19:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65000 DF PROTO=TCP SPT=40738 DPT=9105 SEQ=968831697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70CE8F0000000001030307) Nov 23 04:19:39 localhost systemd-rc-local-generator[151973]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:19:39 localhost systemd-sysv-generator[151980]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:19:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:19:40 localhost systemd[1]: Starting Create netns directory... Nov 23 04:19:40 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
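The ovs-vsctl call logged at 04:19:26-04:19:27 registers this node as an OVN chassis by writing external_ids on the local Open_vSwitch record. Re-wrapped and abridged for readability (values exactly as logged; ovn-chassis-mac-mappings, ovn-encap-tos, ovn-match-northd-version, ovn-ofctrl-wait-before-clear and rundir are set in the same call and omitted here):

ovs-vsctl set open . \
    external_ids:hostname=np0005532584.localdomain \
    external_ids:ovn-bridge=br-int \
    external_ids:ovn-bridge-mappings=datacentre:br-ex \
    external_ids:ovn-encap-ip=172.19.0.106 \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-monitor-all=True \
    external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 \
    external_ids:ovn-remote-probe-interval=60000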
Nov 23 04:19:40 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 23 04:19:40 localhost systemd[1]: Finished Create netns directory. Nov 23 04:19:40 localhost python3.9[152085]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:19:41 localhost python3.9[152177]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:42 localhost python3.9[152250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889581.2117665-1343-25084639418854/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:19:43 localhost python3.9[152342]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:19:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65001 DF PROTO=TCP SPT=40738 DPT=9105 SEQ=968831697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70DE4F0000000001030307) Nov 23 04:19:43 localhost python3.9[152434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:19:44 localhost python3.9[152509]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889583.4993284-1418-70056332948568/.source.json _original_basename=.vomxzifd follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:45 localhost python3.9[152601]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None 
serole=None selevel=None setype=None attributes=None Nov 23 04:19:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49292 DF PROTO=TCP SPT=43240 DPT=9882 SEQ=2705547107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70EAE30000000001030307) Nov 23 04:19:48 localhost python3.9[152858]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False Nov 23 04:19:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59948 DF PROTO=TCP SPT=48412 DPT=9882 SEQ=2295952625 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70F2100000000001030307) Nov 23 04:19:49 localhost python3.9[152950]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:19:50 localhost python3.9[153042]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Nov 23 04:19:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65002 DF PROTO=TCP SPT=40738 DPT=9105 SEQ=968831697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B70FE0F0000000001030307) Nov 23 04:19:55 localhost python3[153161]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:19:55 localhost python3[153161]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c",#012 "Digest": "sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-21T06:40:43.504967825Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 345731014,#012 "VirtualSize": 345731014,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012 "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012 "sha256:2e0f9ca9a8387a3566096aacaecfe5797e3fc2585f07cb97a1706897fa1a86a3",#012 "sha256:db37b2d335b44e6a9cb2eb88713051bc469233d1e0a06670f1303bc9539b97a0"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-18T01:56:49.795434035Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:49.795512415Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251118\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:52.547242013Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-21T06:10:01.947310748Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947327778Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947358359Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947372589Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94738527Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94739397Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:02.324930938Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:36.349393468Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main 
override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:39.924297673Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-li Nov 23 04:19:55 localhost podman[153214]: 2025-11-23 09:19:55.436809176 +0000 UTC m=+0.096025405 container remove e8a40d17960b07fddd814ec3beb5de6f553f7a8b0ae9bc1ac3c692713aee8736 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ovn_controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044) Nov 23 04:19:55 localhost python3[153161]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller Nov 23 04:19:55 localhost podman[153229]: Nov 23 04:19:55 localhost podman[153229]: 2025-11-23 09:19:55.547583046 +0000 UTC m=+0.088147471 container create 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, 
tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller) Nov 23 04:19:55 localhost podman[153229]: 2025-11-23 09:19:55.506195497 +0000 UTC m=+0.046759962 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Nov 23 04:19:55 localhost python3[153161]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Nov 23 04:19:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1587 DF PROTO=TCP SPT=43306 DPT=9102 SEQ=1914099455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B710F8F0000000001030307) Nov 23 04:19:56 localhost python3.9[153357]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False 
checksum_algorithm=sha1 Nov 23 04:19:59 localhost python3.9[153451]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:19:59 localhost python3.9[153497]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:20:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14417 DF PROTO=TCP SPT=34210 DPT=9100 SEQ=3767429928 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B711E0E0000000001030307) Nov 23 04:20:00 localhost python3.9[153588]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763889599.6682081-1682-247328179642221/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:20:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14418 DF PROTO=TCP SPT=34210 DPT=9100 SEQ=3767429928 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71220F0000000001030307) Nov 23 04:20:01 localhost python3.9[153634]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:20:01 localhost systemd[1]: Reloading. Nov 23 04:20:01 localhost systemd-sysv-generator[153660]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:20:01 localhost systemd-rc-local-generator[153657]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:20:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:20:02 localhost python3.9[153716]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:20:03 localhost systemd[1]: Reloading. Nov 23 04:20:03 localhost systemd-sysv-generator[153748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:20:03 localhost systemd-rc-local-generator[153744]: /etc/rc.d/rc.local is not marked executable, skipping. 
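The PODMAN-CONTAINER-DEBUG record at 04:19:55 logs the full podman create for ovn_controller as a single line. Re-wrapped here for readability, with the three --label arguments and the CA-bundle volume dropped; everything shown is verbatim from that record:

podman create --name ovn_controller \
    --conmon-pidfile /run/ovn_controller.pid \
    --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
    --healthcheck-command /openstack/healthcheck \
    --log-driver journald --log-level info \
    --network host --privileged=True --user root \
    --volume /lib/modules:/lib/modules:ro \
    --volume /run:/run \
    --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z \
    --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro \
    --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z \
    quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified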
Nov 23 04:20:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:20:03 localhost systemd[1]: Starting ovn_controller container... Nov 23 04:20:03 localhost systemd[1]: Started libcrun container. Nov 23 04:20:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f010af5280dc7f99ff5dcd6640122c45a7b9a57cf816e40434e332ff060f4c/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Nov 23 04:20:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:20:03 localhost podman[153757]: 2025-11-23 09:20:03.736369012 +0000 UTC m=+0.156301479 container init 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Nov 23 04:20:03 localhost ovn_controller[153771]: + sudo -E kolla_set_configs Nov 23 04:20:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
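Between 04:20:03 and 04:20:04 the edpm_ovn_controller.service unit invokes edpm-start-podman-container, which starts the previously created ovn_controller container under libcrun along with a transient podman healthcheck unit. To inspect that wiring afterwards, a small sketch using standard systemctl and podman queries (these particular commands do not appear in this log):

systemctl cat edpm_ovn_controller.service      # unit file plus any generated drop-ins
systemctl status edpm_ovn_controller.service
podman ps --filter name=ovn_controller         # container state as podman sees it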
Nov 23 04:20:03 localhost podman[153757]: 2025-11-23 09:20:03.768684868 +0000 UTC m=+0.188617335 container start 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller) Nov 23 04:20:03 localhost edpm-start-podman-container[153757]: ovn_controller Nov 23 04:20:03 localhost systemd[1]: Created slice User Slice of UID 0. Nov 23 04:20:03 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Nov 23 04:20:03 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Nov 23 04:20:03 localhost systemd[1]: Starting User Manager for UID 0... 
Nov 23 04:20:03 localhost podman[153779]: 2025-11-23 09:20:03.877547195 +0000 UTC m=+0.101568353 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:20:03 localhost podman[153779]: 2025-11-23 09:20:03.890081145 +0000 UTC m=+0.114102323 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:20:03 localhost podman[153779]: unhealthy Nov 23 04:20:03 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:20:03 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Failed with result 'exit-code'. Nov 23 04:20:03 localhost edpm-start-podman-container[153756]: Creating additional drop-in dependency for "ovn_controller" (900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291) Nov 23 04:20:04 localhost systemd[1]: Reloading. 
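The transient healthcheck unit at 04:20:03 exits 1 and podman reports "unhealthy" while the container is still in health_status=starting, i.e. before kolla_start has launched ovn-controller. Once the container is up, the check can be repeated by hand; a minimal sketch (podman healthcheck run is the same command systemd invokes above; the podman ps query is an addition of mine, not taken from this log):

# re-run the configured healthcheck command (/openstack/healthcheck) inside the container
podman healthcheck run ovn_controller; echo "exit=$?"
# the health state is also reflected in the container status column
podman ps --filter name=ovn_controller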
Nov 23 04:20:04 localhost systemd[153799]: Queued start job for default target Main User Target. Nov 23 04:20:04 localhost systemd[153799]: Created slice User Application Slice. Nov 23 04:20:04 localhost systemd[153799]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Nov 23 04:20:04 localhost systemd[153799]: Started Daily Cleanup of User's Temporary Directories. Nov 23 04:20:04 localhost systemd[153799]: Reached target Paths. Nov 23 04:20:04 localhost systemd[153799]: Reached target Timers. Nov 23 04:20:04 localhost systemd[153799]: Starting D-Bus User Message Bus Socket... Nov 23 04:20:04 localhost systemd[153799]: Starting Create User's Volatile Files and Directories... Nov 23 04:20:04 localhost systemd[153799]: Listening on D-Bus User Message Bus Socket. Nov 23 04:20:04 localhost systemd[153799]: Finished Create User's Volatile Files and Directories. Nov 23 04:20:04 localhost systemd[153799]: Reached target Sockets. Nov 23 04:20:04 localhost systemd[153799]: Reached target Basic System. Nov 23 04:20:04 localhost systemd[153799]: Reached target Main User Target. Nov 23 04:20:04 localhost systemd[153799]: Startup finished in 155ms. Nov 23 04:20:04 localhost systemd-rc-local-generator[153859]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:20:04 localhost systemd-sysv-generator[153864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:20:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:20:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10766 DF PROTO=TCP SPT=53618 DPT=9100 SEQ=320434165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B712E0F0000000001030307) Nov 23 04:20:04 localhost systemd[1]: Started User Manager for UID 0. Nov 23 04:20:04 localhost systemd[1]: Started ovn_controller container. Nov 23 04:20:04 localhost systemd[1]: Started Session c11 of User root. Nov 23 04:20:04 localhost ovn_controller[153771]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:20:04 localhost ovn_controller[153771]: INFO:__main__:Validating config file Nov 23 04:20:04 localhost ovn_controller[153771]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:20:04 localhost ovn_controller[153771]: INFO:__main__:Writing out command to execute Nov 23 04:20:04 localhost systemd[1]: session-c11.scope: Deactivated successfully. Nov 23 04:20:04 localhost ovn_controller[153771]: ++ cat /run_command Nov 23 04:20:04 localhost ovn_controller[153771]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Nov 23 04:20:04 localhost ovn_controller[153771]: + ARGS= Nov 23 04:20:04 localhost ovn_controller[153771]: + sudo kolla_copy_cacerts Nov 23 04:20:04 localhost systemd[1]: Started Session c12 of User root. Nov 23 04:20:04 localhost systemd[1]: session-c12.scope: Deactivated successfully. Nov 23 04:20:04 localhost ovn_controller[153771]: + [[ ! -n '' ]] Nov 23 04:20:04 localhost ovn_controller[153771]: + . 
kolla_extend_start Nov 23 04:20:04 localhost ovn_controller[153771]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\''' Nov 23 04:20:04 localhost ovn_controller[153771]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Nov 23 04:20:04 localhost ovn_controller[153771]: + umask 0022 Nov 23 04:20:04 localhost ovn_controller[153771]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8] Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00004|main|INFO|OVS IDL reconnected, force recompute. Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting... Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00011|features|INFO|OVS Feature: ct_flush, state: supported Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00013|main|INFO|OVS feature set changed, force recompute. Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms) Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute. Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00021|main|INFO|OVS feature set changed, force recompute. 
Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4 Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Nov 23 04:20:04 localhost ovn_controller[153771]: 2025-11-23T09:20:04Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Nov 23 04:20:05 localhost python3.9[153970]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:20:05 localhost ovs-vsctl[153971]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload Nov 23 04:20:05 localhost python3.9[154063]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:20:05 localhost ovs-vsctl[154065]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids Nov 23 04:20:06 localhost python3.9[154158]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:20:06 localhost ovs-vsctl[154159]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options Nov 23 04:20:07 localhost systemd-logind[760]: Session 49 logged out. Waiting for processes to exit. Nov 23 04:20:07 localhost systemd[1]: session-49.scope: Deactivated successfully. Nov 23 04:20:07 localhost systemd[1]: session-49.scope: Consumed 40.867s CPU time. Nov 23 04:20:07 localhost systemd-logind[760]: Removed session 49. 
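The two Ansible shell tasks above clear `other_config:hw-offload` and `external_ids:ovn-cms-options` from the Open_vSwitch record; the intermediate read fails harmlessly with `no key "ovn-cms-options"` because the key was never set. A hedged Python sketch of the same idempotent pattern; the ovs-vsctl invocations are copied from the log, the wrapper function is illustrative.

```python
"""Sketch of the idempotent OVS cleanup performed by the Ansible tasks above.
The ovs-vsctl commands are the ones from the log; the wrapper is illustrative."""
import subprocess


def run(cmd: list[str]) -> subprocess.CompletedProcess:
    # capture output so a missing key does not abort the play
    return subprocess.run(cmd, capture_output=True, text=True)


# Remove hw-offload from other_config (a no-op if it is absent).
run(["ovs-vsctl", "remove", "open", ".", "other_config", "hw-offload"])

# Reading a key that was never set returns rc != 0 with
# 'no key "ovn-cms-options" in Open_vSwitch record "."' -- tolerated here.
probe = run(["ovs-vsctl", "get", "Open_vSwitch", ".", "external_ids:ovn-cms-options"])
if probe.returncode == 0:
    print("ovn-cms-options was:", probe.stdout.strip().strip('"'))

# The removal itself is safe to run unconditionally.
run(["ovs-vsctl", "remove", "Open_vSwitch", ".", "external_ids", "ovn-cms-options"])
```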
Nov 23 04:20:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14420 DF PROTO=TCP SPT=34210 DPT=9100 SEQ=3767429928 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7139CF0000000001030307) Nov 23 04:20:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7650 DF PROTO=TCP SPT=54528 DPT=9105 SEQ=3496641091 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71438F0000000001030307) Nov 23 04:20:13 localhost sshd[154174]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:20:13 localhost systemd-logind[760]: New session 51 of user zuul. Nov 23 04:20:13 localhost systemd[1]: Started Session 51 of User zuul. Nov 23 04:20:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7651 DF PROTO=TCP SPT=54528 DPT=9105 SEQ=3496641091 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71534F0000000001030307) Nov 23 04:20:14 localhost python3.9[154267]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:20:14 localhost systemd[1]: Stopping User Manager for UID 0... Nov 23 04:20:14 localhost systemd[153799]: Activating special unit Exit the Session... Nov 23 04:20:14 localhost systemd[153799]: Stopped target Main User Target. Nov 23 04:20:14 localhost systemd[153799]: Stopped target Basic System. Nov 23 04:20:14 localhost systemd[153799]: Stopped target Paths. Nov 23 04:20:14 localhost systemd[153799]: Stopped target Sockets. Nov 23 04:20:14 localhost systemd[153799]: Stopped target Timers. Nov 23 04:20:14 localhost systemd[153799]: Stopped Daily Cleanup of User's Temporary Directories. Nov 23 04:20:14 localhost systemd[153799]: Closed D-Bus User Message Bus Socket. Nov 23 04:20:14 localhost systemd[153799]: Stopped Create User's Volatile Files and Directories. Nov 23 04:20:14 localhost systemd[153799]: Removed slice User Application Slice. Nov 23 04:20:14 localhost systemd[153799]: Reached target Shutdown. Nov 23 04:20:14 localhost systemd[153799]: Finished Exit the Session. Nov 23 04:20:14 localhost systemd[153799]: Reached target Exit the Session. Nov 23 04:20:14 localhost systemd[1]: user@0.service: Deactivated successfully. Nov 23 04:20:14 localhost systemd[1]: Stopped User Manager for UID 0. Nov 23 04:20:14 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Nov 23 04:20:14 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Nov 23 04:20:14 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Nov 23 04:20:14 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Nov 23 04:20:14 localhost systemd[1]: Removed slice User Slice of UID 0. 
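The recurring `kernel: DROPPING:` entries come from a firewall LOG rule on br-ex and record dropped TCP SYNs from 192.168.122.10 toward ports such as 9100-9105 and 9882 (apparently metrics endpoints). A small Python sketch that parses one of these lines into a dict; the field names (SRC, DST, PROTO, SPT, DPT, ...) come straight from the log format, while the regex and the trimmed example line are the only assumptions.

```python
"""Illustrative parser for the 'kernel: DROPPING:' firewall log lines above."""
import re

FIELD_RE = re.compile(r"\b([A-Z]+)=(\S*)")


def parse_dropping(line: str) -> dict[str, str]:
    """Return the KEY=VALUE pairs (SRC, DST, PROTO, SPT, DPT, ...) of one entry."""
    payload = line.split("DROPPING:", 1)[1]
    return dict(FIELD_RE.findall(payload))


example = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 "
           "SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TTL=62 "
           "ID=14420 DF PROTO=TCP SPT=34210 DPT=9100 SYN URGP=0")
fields = parse_dropping("kernel: " + example)
print(fields["SRC"], "->", fields["DST"], fields["PROTO"], "dport", fields["DPT"])
# 192.168.122.10 -> 192.168.122.106 TCP dport 9100
```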
Nov 23 04:20:15 localhost python3.9[154365]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:16 localhost python3.9[154457]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:16 localhost python3.9[154549]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22710 DF PROTO=TCP SPT=42648 DPT=9882 SEQ=593305082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7160130000000001030307) Nov 23 04:20:17 localhost python3.9[154641]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:17 localhost python3.9[154733]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:18 localhost python3.9[154823]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:20:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10376 DF PROTO=TCP SPT=57122 DPT=9102 SEQ=3270248482 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7168F40000000001030307) Nov 23 04:20:19 localhost python3.9[154915]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Nov 23 04:20:20 localhost python3.9[155005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False 
get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:21 localhost python3.9[155078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889619.9732175-218-131757763613276/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:21 localhost python3.9[155168]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62182 DF PROTO=TCP SPT=54862 DPT=9101 SEQ=742592023 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71740F0000000001030307) Nov 23 04:20:22 localhost python3.9[155241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889621.4034824-263-20985523418392/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:23 localhost python3.9[155334]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:20:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22915 DF PROTO=TCP SPT=52324 DPT=9101 SEQ=871197761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B717E0F0000000001030307) Nov 23 04:20:24 localhost python3.9[155388]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:20:29 localhost python3.9[155560]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 23 04:20:29 localhost python3.9[155653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:30 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19180 DF PROTO=TCP SPT=45408 DPT=9100 SEQ=4027144144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71933F0000000001030307) Nov 23 04:20:30 localhost python3.9[155724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889629.4108644-374-231814076764708/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:30 localhost python3.9[155814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19181 DF PROTO=TCP SPT=45408 DPT=9100 SEQ=4027144144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71974F0000000001030307) Nov 23 04:20:31 localhost python3.9[155885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889630.5402138-374-274140483347387/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:33 localhost python3.9[155975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:33 localhost python3.9[156046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889632.6601484-506-114598135322346/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29601 DF PROTO=TCP SPT=37014 DPT=9100 SEQ=1288570538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71A20F0000000001030307) Nov 23 04:20:34 localhost 
python3.9[156136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:34 localhost python3.9[156207]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889633.74963-506-221457170318350/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:20:34 localhost podman[156222]: 2025-11-23 09:20:34.909314567 +0000 UTC m=+0.090437279 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 04:20:34 localhost ovn_controller[153771]: 2025-11-23T09:20:34Z|00023|memory|INFO|12924 kB peak resident set size after 30.5 seconds Nov 23 04:20:34 localhost ovn_controller[153771]: 2025-11-23T09:20:34Z|00024|memory|INFO|idl-cells-OVN_Southbound:4028 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:9 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3 Nov 23 04:20:34 localhost podman[156222]: 2025-11-23 09:20:34.959574029 +0000 UTC m=+0.140696711 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:20:34 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:20:35 localhost python3.9[156320]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:20:36 localhost python3.9[156414]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:36 localhost python3.9[156506]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:37 localhost python3.9[156554]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19183 DF PROTO=TCP SPT=45408 DPT=9100 SEQ=4027144144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71AF100000000001030307) Nov 23 04:20:37 localhost python3.9[156646]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:38 localhost python3.9[156694]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:39 localhost python3.9[156786]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:20:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8419 DF PROTO=TCP SPT=47404 DPT=9105 SEQ=1302848862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71B8CF0000000001030307) Nov 23 04:20:39 localhost python3.9[156878]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:40 localhost python3.9[156926]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:20:41 localhost python3.9[157018]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:41 localhost python3.9[157066]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:20:42 localhost python3.9[157158]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:20:42 localhost systemd[1]: Reloading. Nov 23 04:20:42 localhost systemd-sysv-generator[157187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:20:42 localhost systemd-rc-local-generator[157181]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:20:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
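The tasks above install edpm-container-shutdown.service together with a 91-edpm-container-shutdown.preset, then reload systemd and enable/start the unit. A hedged Python sketch of that final enable-and-start step via systemctl; the unit name is taken from the log, the helper is illustrative and not the edpm_ansible implementation.

```python
"""Sketch of the 'daemon-reload, then enable and start' step that the
ansible.builtin.systemd task performs for edpm-container-shutdown.
The unit name comes from the log; the helper itself is illustrative."""
import subprocess

UNIT = "edpm-container-shutdown.service"


def systemctl(*args: str) -> None:
    subprocess.run(["systemctl", *args], check=True)


# Pick up the freshly written unit and preset files ...
systemctl("daemon-reload")
# ... then make sure the unit is enabled and running now.
systemctl("enable", "--now", UNIT)
```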
Nov 23 04:20:43 localhost python3.9[157288]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8420 DF PROTO=TCP SPT=47404 DPT=9105 SEQ=1302848862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71C88F0000000001030307) Nov 23 04:20:44 localhost python3.9[157336]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:20:44 localhost python3.9[157428]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:45 localhost python3.9[157476]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:20:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19184 DF PROTO=TCP SPT=45408 DPT=9100 SEQ=4027144144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71D0100000000001030307) Nov 23 04:20:47 localhost python3.9[157568]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:20:47 localhost systemd[1]: Reloading. Nov 23 04:20:47 localhost systemd-rc-local-generator[157595]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:20:47 localhost systemd-sysv-generator[157598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:20:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:20:47 localhost systemd[1]: Starting Create netns directory... Nov 23 04:20:47 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 23 04:20:47 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 23 04:20:47 localhost systemd[1]: Finished Create netns directory. 
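netns-placeholder runs once to create the netns directory, which the container volumes later in this log bind-mount as `/run/netns:/run/netns:shared`. A hedged sanity check that the mount exists and uses shared propagation; `findmnt` and its PROPAGATION column are standard util-linux, but treat the wrapper and the output handling as assumptions.

```python
"""Hedged check that /run/netns is a shared mount, which the
'/run/netns:/run/netns:shared' container volume below relies on."""
import subprocess
from typing import Optional


def netns_mount_propagation(path: str = "/run/netns") -> Optional[str]:
    probe = subprocess.run(
        ["findmnt", "-no", "PROPAGATION", path],
        capture_output=True, text=True,
    )
    return probe.stdout.strip() if probe.returncode == 0 else None


prop = netns_mount_propagation()
if prop is None:
    print("/run/netns is not a mount point yet")
elif "shared" not in prop:
    print(f"/run/netns is mounted but propagation is '{prop}', expected shared")
else:
    print("/run/netns is mounted shared")
```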
Nov 23 04:20:48 localhost python3.9[157702]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22715 DF PROTO=TCP SPT=42648 DPT=9882 SEQ=593305082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71DC100000000001030307) Nov 23 04:20:48 localhost python3.9[157794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:49 localhost python3.9[157867]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889648.5319803-959-51680375956059/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:50 localhost python3.9[157959]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:20:51 localhost python3.9[158051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:20:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8421 DF PROTO=TCP SPT=47404 DPT=9105 SEQ=1302848862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71E8100000000001030307) Nov 23 04:20:52 localhost python3.9[158126]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889650.7285237-1034-10601075539407/.source.json _original_basename=.t405yn6_ follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:20:52 localhost python3.9[158218]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:20:55 localhost python3.9[158475]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Nov 23 04:20:56 localhost python3.9[158567]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:20:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49615 DF PROTO=TCP SPT=48358 DPT=9102 SEQ=234355168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B71F9D00000000001030307) Nov 23 04:20:57 localhost python3.9[158659]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Nov 23 04:21:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31251 DF PROTO=TCP SPT=50912 DPT=9100 SEQ=1126995622 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72086E0000000001030307) Nov 23 04:21:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31252 DF PROTO=TCP SPT=50912 DPT=9100 SEQ=1126995622 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B720C8F0000000001030307) Nov 23 04:21:01 localhost python3[158777]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:21:01 localhost python3[158777]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9",#012 "Digest": "sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-21T06:31:40.431364621Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 
"Architecture": "amd64",#012 "Os": "linux",#012 "Size": 784198911,#012 "VirtualSize": 784198911,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc/diff:/var/lib/containers/storage/overlay/cb3e7a7b413bc69102c7d8435b32176125a43d794f5f87a0ef3f45710221e344/diff:/var/lib/containers/storage/overlay/0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/94bc28862446c9d52bea5cb761ece58f4b7ce0b6f4d30585ec973efe7d5007f7/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/94bc28862446c9d52bea5cb761ece58f4b7ce0b6f4d30585ec973efe7d5007f7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012 "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012 "sha256:5a71e5d7d31f15255619cb8b9384b708744757c93993652418b0f45b0c0931d5",#012 "sha256:03228f16e908b0892695bcc077f4378f9669ff86bd51a3747df5ce9269c56477",#012 "sha256:1bc9c5b4c351caaeaa6b900805b43669e78b079f06d9048393517dd05690b8dc",#012 "sha256:83d6638c009d9ced6da21e0f659e23221a9a8d7c283582e370f21a7551100a49"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2025-11-18T01:56:49.795434035Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:49.795512415Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251118\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:52.547242013Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-21T06:10:01.947310748Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947327778Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947358359Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947372589Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94738527Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 
},#012 {#012 "created": "2025-11-21T06:10:01.94739397Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:02.324930938Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:36.349393468Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf Nov 23 04:21:01 localhost podman[158827]: 2025-11-23 09:21:01.901844612 +0000 UTC m=+0.096525602 container remove f03c7872eb9e79cd0e5c973dfb6c58d18c5a74bc0d489274787f7eba60036745 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'aab643b40a0a602c64733b2a96099834'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 23 04:21:01 localhost python3[158777]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Nov 23 04:21:02 localhost podman[158842]: Nov 23 04:21:02 localhost podman[158842]: 2025-11-23 09:21:02.01749113 +0000 UTC m=+0.093833383 container create 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible) Nov 23 04:21:02 localhost podman[158842]: 2025-11-23 09:21:01.972649358 +0000 UTC m=+0.048991641 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 23 04:21:02 localhost python3[158777]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 23 04:21:02 localhost python3.9[158970]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:21:04 localhost python3.9[159064]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14423 DF PROTO=TCP SPT=34210 DPT=9100 SEQ=3767429928 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72180F0000000001030307) Nov 23 04:21:04 localhost python3.9[159110]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:21:05 localhost python3.9[159201]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763889664.5607557-1298-222771286383903/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None 
setype=None attributes=None Nov 23 04:21:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:21:05 localhost podman[159233]: 2025-11-23 09:21:05.909914347 +0000 UTC m=+0.088492553 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:21:05 localhost systemd[1]: tmp-crun.NEfFQl.mount: Deactivated successfully. Nov 23 04:21:06 localhost podman[159233]: 2025-11-23 09:21:06.002525338 +0000 UTC m=+0.181103564 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:21:06 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
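edpm_container_manage turns the config_data dict logged above into the long `podman create` command also shown in the log (one `--env` per environment entry, `--healthcheck-command` from the test, `--network host`, `--pid host`, `--privileged`, `--user root`, and one `--volume` per entry). A simplified Python reconstruction of that mapping, covering only a subset of the flags visible in the log; it is not the real module.

```python
"""Simplified reconstruction of how the config_data dict logged above maps to
the 'podman create' argv also shown in the log. Covers only some of the
logged flags; not the edpm_container_manage implementation."""


def podman_create_args(name: str, image: str, cfg: dict) -> list[str]:
    args = ["podman", "create", "--name", name]
    if cfg.get("cgroupns"):
        args.append(f"--cgroupns={cfg['cgroupns']}")
    for key, value in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    if "healthcheck" in cfg:
        args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
    args += ["--log-driver", "journald", "--log-level", "info"]
    if cfg.get("net") == "host":
        args += ["--network", "host"]
    if cfg.get("pid") == "host":
        args += ["--pid", "host"]
    if cfg.get("privileged"):
        args.append("--privileged=True")
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    return args + [image]


# Trimmed example using values taken from the log entry above.
cfg = {
    "cgroupns": "host",
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "healthcheck": {"test": "/openstack/healthcheck"},
    "net": "host", "pid": "host", "privileged": True, "user": "root",
    "volumes": ["/run/openvswitch:/run/openvswitch:z",
                "/run/netns:/run/netns:shared"],
}
print(" ".join(podman_create_args(
    "ovn_metadata_agent",
    "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified",
    cfg,
)))
```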
Nov 23 04:21:06 localhost python3.9[159260]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:21:06 localhost systemd[1]: Reloading. Nov 23 04:21:06 localhost systemd-rc-local-generator[159299]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:21:06 localhost systemd-sysv-generator[159303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:21:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:21:07 localhost python3.9[159354]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:21:07 localhost systemd[1]: Reloading. Nov 23 04:21:07 localhost systemd-rc-local-generator[159383]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:21:07 localhost systemd-sysv-generator[159386]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:21:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31254 DF PROTO=TCP SPT=50912 DPT=9100 SEQ=1126995622 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72244F0000000001030307) Nov 23 04:21:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:21:07 localhost systemd[1]: Starting ovn_metadata_agent container... Nov 23 04:21:07 localhost systemd[1]: Started libcrun container. Nov 23 04:21:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b151a1a3e3d589b23a7d96526642621541268c0eeaa09aef90eb2cdc22f97c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Nov 23 04:21:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b151a1a3e3d589b23a7d96526642621541268c0eeaa09aef90eb2cdc22f97c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:21:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 04:21:07 localhost podman[159396]: 2025-11-23 09:21:07.693881949 +0000 UTC m=+0.153956243 container init 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: + sudo -E kolla_set_configs Nov 23 04:21:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
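
The config_data dict logged with the container init/start events above is what edpm_ansible uses to build the podman invocation for ovn_metadata_agent (host networking and PID namespace, privileged, root user, always-restart, and a list of bind mounts). The following is a rough approximation of that mapping, not the edpm_ansible implementation: only a subset of the keys and volumes from the log is handled, and the real role also wires in the healthcheck, labels, and environment.

```python
#!/usr/bin/env python3
"""Rough sketch of how the config_data logged above maps onto podman run flags."""

# Subset of the config_data shown in the journal entry above.
CONFIG_DATA = {
    "image": "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified",
    "net": "host",
    "pid": "host",
    "privileged": True,
    "restart": "always",
    "user": "root",
    "volumes": [
        "/run/openvswitch:/run/openvswitch:z",
        "/var/lib/neutron:/var/lib/neutron:shared,z",
    ],
}


def podman_run_args(name: str, cfg: dict) -> list[str]:
    """Translate a config_data-style dict into a podman run argument list."""
    args = ["podman", "run", "--detach", "--name", name]
    if cfg.get("net"):
        args += ["--net", cfg["net"]]
    if cfg.get("pid"):
        args += ["--pid", cfg["pid"]]
    if cfg.get("privileged"):
        args.append("--privileged")
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    if cfg.get("restart"):
        args += ["--restart", cfg["restart"]]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    args.append(cfg["image"])
    return args


if __name__ == "__main__":
    print(" ".join(podman_run_args("ovn_metadata_agent", CONFIG_DATA)))
```
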
Nov 23 04:21:07 localhost podman[159396]: 2025-11-23 09:21:07.737432188 +0000 UTC m=+0.197506452 container start 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118) Nov 23 04:21:07 localhost edpm-start-podman-container[159396]: ovn_metadata_agent Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Validating config file Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Copying service configuration files Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Writing out command to execute Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /var/lib/neutron Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /var/lib/neutron/external Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for 
/var/lib/neutron/ovn_metadata_haproxy_wrapper Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: ++ cat /run_command Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: + CMD=neutron-ovn-metadata-agent Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: + ARGS= Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: + sudo kolla_copy_cacerts Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: + [[ ! -n '' ]] Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: + . kolla_extend_start Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\''' Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: Running command: 'neutron-ovn-metadata-agent' Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: + umask 0022 Nov 23 04:21:07 localhost ovn_metadata_agent[159410]: + exec neutron-ovn-metadata-agent Nov 23 04:21:07 localhost podman[159418]: 2025-11-23 09:21:07.83007152 +0000 UTC m=+0.084197662 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:21:07 localhost podman[159418]: 2025-11-23 
09:21:07.835431339 +0000 UTC m=+0.089557461 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 04:21:07 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:21:07 localhost edpm-start-podman-container[159395]: Creating additional drop-in dependency for "ovn_metadata_agent" (219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0) Nov 23 04:21:07 localhost systemd[1]: Reloading. Nov 23 04:21:08 localhost systemd-sysv-generator[159491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:21:08 localhost systemd-rc-local-generator[159486]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:21:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:21:08 localhost systemd[1]: tmp-crun.D4mMWD.mount: Deactivated successfully. Nov 23 04:21:08 localhost systemd[1]: Started ovn_metadata_agent container. Nov 23 04:21:08 localhost systemd[1]: session-51.scope: Deactivated successfully. Nov 23 04:21:08 localhost systemd[1]: session-51.scope: Consumed 32.734s CPU time. Nov 23 04:21:08 localhost systemd-logind[760]: Session 51 logged out. Waiting for processes to exit. Nov 23 04:21:08 localhost systemd-logind[760]: Removed session 51. 
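
The `kolla_set_configs` output above (load and validate /var/lib/kolla/config_files/config.json, COPY_ALWAYS strategy, copy 01-rootwrap.conf into place, fix permissions, write the command, then `cat /run_command` and `exec neutron-ovn-metadata-agent`) is condensed into the sketch below. It assumes the usual kolla config.json layout with "command" and "config_files" (source/dest) entries; the real scripts also validate the file, set ownership and permissions per entry, and honour KOLLA_CONFIG_STRATEGY, so treat this only as an illustration of the flow.

```python
#!/usr/bin/env python3
"""Condensed sketch of the kolla_set_configs / kolla_start flow logged above
for ovn_metadata_agent, under the assumed config.json layout
{"command": ..., "config_files": [{"source": ..., "dest": ...}, ...]}."""

import json
import os
import shutil
import subprocess

CONFIG_JSON = "/var/lib/kolla/config_files/config.json"  # path from the log


def copy_configs(config: dict) -> None:
    # COPY_ALWAYS: overwrite destinations on every container start.
    for entry in config.get("config_files", []):
        src, dest = entry["source"], entry["dest"]
        if os.path.exists(dest):
            os.remove(dest)       # e.g. "Deleting /etc/neutron/rootwrap.conf"
        shutil.copy(src, dest)    # e.g. "Copying /etc/neutron.conf.d/01-rootwrap.conf ..."


def main() -> None:
    with open(CONFIG_JSON) as fh:          # "Loading config file at ..."
        config = json.load(fh)
    copy_configs(config)
    # kolla_start then reads the command back from /run_command and execs it
    # ("exec neutron-ovn-metadata-agent"); here the command string is simply
    # split on whitespace and run directly, which is a simplification.
    subprocess.run(config["command"].split(), check=True)


if __name__ == "__main__":
    main()
```
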
Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.629 159415 INFO neutron.common.config [-] Logging enabled!#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.630 159415 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.630 159415 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.630 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.630 159415 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.631 159415 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.631 159415 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.631 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.631 159415 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.631 159415 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.631 159415 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.631 159415 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.631 159415 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.631 159415 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.631 159415 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 
2025-11-23 09:21:09.633 159415 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.634 159415 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.634 159415 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.634 159415 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.634 159415 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.635 159415 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.635 159415 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.635 159415 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.635 159415 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.636 159415 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.636 159415 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.636 159415 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.637 159415 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.637 159415 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.637 159415 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.638 159415 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.638 159415 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.638 159415 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.638 159415 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.639 159415 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.639 159415 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.639 159415 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.639 159415 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.640 159415 DEBUG neutron.agent.ovn.metadata_agent [-] host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.640 159415 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.640 159415 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.640 159415 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.641 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.641 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.641 159415 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.642 159415 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.642 159415 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.642 159415 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.642 159415 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.642 159415 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.643 159415 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.643 159415 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.643 159415 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.643 159415 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.643 159415 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.644 159415 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 
04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.644 159415 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.644 159415 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.644 159415 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.645 159415 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.645 159415 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.645 159415 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.645 159415 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.645 159415 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.646 159415 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.646 159415 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.646 159415 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.646 159415 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.647 159415 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.647 159415 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.647 159415 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.647 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.648 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.648 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.648 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.649 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.649 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.649 159415 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.649 159415 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.649 159415 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.650 159415 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.650 159415 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.650 159415 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.650 159415 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.651 159415 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.651 159415 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.651 159415 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.651 159415 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.651 159415 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.652 159415 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.652 159415 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.652 159415 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.652 159415 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.653 159415 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.653 159415 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.653 159415 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.653 159415 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.654 159415 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.654 159415 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.654 159415 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.654 159415 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost 
ovn_metadata_agent[159410]: 2025-11-23 09:21:09.654 159415 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.654 159415 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.655 159415 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.655 159415 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.655 159415 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.655 159415 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.656 159415 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.656 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.656 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.656 159415 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.657 159415 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.657 159415 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.657 159415 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.657 159415 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.658 159415 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace 
= False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.658 159415 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.658 159415 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.658 159415 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.659 159415 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.659 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.659 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.659 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.660 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.660 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.660 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.660 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.661 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.661 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.661 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.661 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.661 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.662 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.662 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.662 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.663 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.663 159415 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.663 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.663 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.664 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.664 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.664 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.664 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.665 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.665 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.665 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.665 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.666 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.666 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.666 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.666 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.667 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.667 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.667 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.667 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.668 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.668 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.668 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost 
ovn_metadata_agent[159410]: 2025-11-23 09:21:09.668 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.669 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.669 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.669 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.669 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.669 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.670 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.670 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.670 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.670 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.671 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.671 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.671 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.671 159415 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.672 159415 DEBUG neutron.agent.ovn.metadata_agent [-] 
privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.672 159415 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.672 159415 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.672 159415 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.673 159415 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.673 159415 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.673 159415 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.673 159415 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.674 159415 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.674 159415 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.674 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.674 159415 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.675 159415 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.675 159415 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.675 159415 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.675 159415 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.676 159415 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.676 159415 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.676 159415 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.676 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.676 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.677 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.677 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.677 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.677 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.678 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.678 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.678 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.678 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.678 159415 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.679 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.679 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.679 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.679 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.680 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.680 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.680 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.680 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.681 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.681 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.681 159415 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.681 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.681 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.682 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.682 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.682 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.682 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.683 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.683 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.683 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.683 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.683 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.684 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.684 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.684 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.684 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.685 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.685 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.685 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.685 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.685 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.685 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.686 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.686 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.686 159415 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.686 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.686 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.686 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.686 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.687 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.687 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.687 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.687 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.687 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.688 159415 DEBUG 
neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.688 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.688 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.688 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.688 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.688 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.689 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.689 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.689 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.689 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.689 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.689 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.690 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.690 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.690 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 
2025-11-23 09:21:09.690 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.690 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.690 159415 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.691 159415 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.691 159415 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.691 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.691 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.691 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.691 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.692 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.692 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.692 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.692 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.692 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.692 159415 DEBUG neutron.agent.ovn.metadata_agent 
[-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.693 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.693 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.693 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.693 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.693 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.693 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.693 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.694 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.694 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.694 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.694 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.694 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.695 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost 
ovn_metadata_agent[159410]: 2025-11-23 09:21:09.695 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.695 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.695 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.695 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.695 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.696 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.696 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.696 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.696 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.696 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.696 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.696 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.697 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.697 159415 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:09 localhost 
ovn_metadata_agent[159410]: 2025-11-23 09:21:09.697 159415 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.749 159415 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.750 159415 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.750 159415 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.751 159415 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.751 159415 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.773 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name ade391ff-62a6-48e9-b6e8-1a8b190070d2 (UUID: ade391ff-62a6-48e9-b6e8-1a8b190070d2) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Nov 23 04:21:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19090 DF PROTO=TCP SPT=44958 DPT=9105 SEQ=1965286602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B722E0F0000000001030307) Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.818 159415 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.819 159415 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.820 159415 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.820 159415 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.823 159415 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.828 159415 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.842 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: 
ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'ade391ff-62a6-48e9-b6e8-1a8b190070d2'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': '25433171-8677-59e7-872a-7bd1996152c9', 'neutron:ovn-metadata-sb-cfg': '1'}, name=ade391ff-62a6-48e9-b6e8-1a8b190070d2, nb_cfg_timestamp=1763889612504, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.844 159415 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.845 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.846 159415 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.846 159415 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.847 159415 INFO oslo_service.service [-] Starting 1 workers#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.851 159415 DEBUG oslo_service.service [-] Started child 159516 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.854 159415 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpvny93y7z/privsep.sock']#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.856 159516 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-190877'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.884 159516 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.885 159516 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.885 159516 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.887 159516 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 
2025-11-23 09:21:09.889 159516 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Nov 23 04:21:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:09.903 159516 INFO eventlet.wsgi.server [-] (159516) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m Nov 23 04:21:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:10.484 159415 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Nov 23 04:21:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:10.485 159415 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvny93y7z/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Nov 23 04:21:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:10.356 159521 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 23 04:21:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:10.362 159521 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 23 04:21:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:10.366 159521 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Nov 23 04:21:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:10.366 159521 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159521#033[00m Nov 23 04:21:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:10.489 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[d4001d31-99e1-4d51-9157-6ab2fec814fb]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:21:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:10.940 159521 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:21:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:10.941 159521 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:21:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:10.941 159521 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.432 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[c2909659-ca4c-4aba-8115-2e99c69711e4]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.435 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, column=external_ids, values=({'neutron:ovn-metadata-id': '25433171-8677-59e7-872a-7bd1996152c9'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.436 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 23 04:21:11 localhost 
ovn_metadata_agent[159410]: 2025-11-23 09:21:11.437 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.453 159415 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.453 159415 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.453 159415 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.453 159415 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.453 159415 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.454 159415 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.454 159415 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.454 159415 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.454 159415 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.455 159415 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.455 159415 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.455 159415 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.455 159415 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.455 159415 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.456 159415 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.456 159415 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.456 159415 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.457 159415 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.457 159415 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.457 159415 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.457 159415 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.457 159415 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.458 159415 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.458 159415 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.458 159415 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.458 159415 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.459 159415 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.459 159415 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.459 159415 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.459 159415 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.460 159415 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.460 159415 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.460 159415 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.460 159415 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.461 159415 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.461 159415 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.461 159415 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.461 159415 DEBUG oslo_service.service [-] host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.462 159415 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.462 159415 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.462 159415 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.462 159415 DEBUG oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.463 159415 DEBUG 
oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.463 159415 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.463 159415 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.463 159415 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.463 159415 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.464 159415 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.464 159415 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.464 159415 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.464 159415 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.465 159415 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.465 159415 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.465 159415 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.465 159415 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.465 159415 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.466 159415 DEBUG oslo_service.service [-] 
max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.466 159415 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.466 159415 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.466 159415 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.466 159415 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.467 159415 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.467 159415 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.467 159415 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.467 159415 DEBUG oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.468 159415 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.468 159415 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.468 159415 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.468 159415 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.468 159415 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.469 159415 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.469 159415 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 
2025-11-23 09:21:11.469 159415 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.469 159415 DEBUG oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.469 159415 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.470 159415 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.470 159415 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.470 159415 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.470 159415 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.471 159415 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.471 159415 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.471 159415 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.471 159415 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.471 159415 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.472 159415 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.472 159415 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.472 159415 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.472 159415 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 
04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.472 159415 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.473 159415 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.473 159415 DEBUG oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.473 159415 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.473 159415 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.474 159415 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.474 159415 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.474 159415 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.474 159415 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.474 159415 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.475 159415 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.475 159415 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.475 159415 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.475 159415 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.475 159415 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.476 159415 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.476 
159415 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.476 159415 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.476 159415 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.476 159415 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.477 159415 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.477 159415 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.477 159415 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.478 159415 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.478 159415 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.478 159415 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.478 159415 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.478 159415 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.478 159415 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.479 159415 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.479 159415 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.479 159415 DEBUG 
oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.479 159415 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.479 159415 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.479 159415 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.480 159415 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.480 159415 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.480 159415 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.480 159415 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.480 159415 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.480 159415 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.480 159415 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.480 159415 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.481 159415 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.481 159415 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.481 159415 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 
04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.481 159415 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.481 159415 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.481 159415 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.481 159415 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.482 159415 DEBUG oslo_service.service [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.482 159415 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.482 159415 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.482 159415 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.482 159415 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.482 159415 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.482 159415 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.483 159415 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.483 159415 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.483 159415 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.483 159415 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost 
ovn_metadata_agent[159410]: 2025-11-23 09:21:11.483 159415 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.483 159415 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.483 159415 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.483 159415 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.484 159415 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.484 159415 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.484 159415 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.484 159415 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.484 159415 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.484 159415 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.484 159415 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.484 159415 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.485 159415 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.485 159415 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.485 159415 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 
09:21:11.485 159415 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.485 159415 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.485 159415 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.485 159415 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.485 159415 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.486 159415 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.486 159415 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.486 159415 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.486 159415 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.486 159415 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.486 159415 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.486 159415 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.487 159415 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.487 159415 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.487 159415 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 
09:21:11.487 159415 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.487 159415 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.487 159415 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.487 159415 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.488 159415 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.488 159415 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.488 159415 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.488 159415 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.488 159415 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.488 159415 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.489 159415 DEBUG oslo_service.service [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.489 159415 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.489 159415 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.489 159415 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.489 159415 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.489 159415 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.489 159415 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.490 159415 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.490 159415 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.490 159415 DEBUG oslo_service.service [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.490 159415 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.490 159415 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.490 159415 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.490 159415 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.491 159415 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.491 159415 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.491 159415 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.491 159415 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.491 159415 DEBUG oslo_service.service [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.491 159415 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.491 159415 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.491 159415 DEBUG oslo_service.service [-] placement.region_name = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.492 159415 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.492 159415 DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.492 159415 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.492 159415 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.492 159415 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.492 159415 DEBUG oslo_service.service [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.492 159415 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.493 159415 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.493 159415 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.493 159415 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.493 159415 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.493 159415 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.493 159415 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.493 159415 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.493 159415 DEBUG oslo_service.service [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.494 159415 DEBUG oslo_service.service [-] 
ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.494 159415 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.494 159415 DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.494 159415 DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.494 159415 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.494 159415 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.494 159415 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.494 159415 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.495 159415 DEBUG oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.495 159415 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.495 159415 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.495 159415 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.495 159415 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.495 159415 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.495 159415 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.496 159415 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost 
ovn_metadata_agent[159410]: 2025-11-23 09:21:11.496 159415 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.496 159415 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.496 159415 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.496 159415 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.496 159415 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.496 159415 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.497 159415 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.497 159415 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.497 159415 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.497 159415 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.497 159415 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.497 159415 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.497 159415 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.497 159415 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.498 159415 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.498 159415 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.498 159415 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.498 159415 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.498 159415 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.498 159415 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.498 159415 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.499 159415 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.499 159415 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.499 159415 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.499 159415 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.499 159415 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.499 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.499 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.500 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.500 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.500 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.500 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.500 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.500 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.500 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.501 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.501 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.501 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.501 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.501 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.501 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.501 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.502 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.502 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.502 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 
04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.502 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.502 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.502 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.502 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.502 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.503 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.503 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.503 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.503 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.503 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.503 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.503 159415 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.504 159415 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.504 159415 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.504 159415 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.504 159415 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:21:11 localhost ovn_metadata_agent[159410]: 2025-11-23 09:21:11.504 159415 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Nov 23 04:21:13 localhost sshd[159526]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:21:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19091 DF PROTO=TCP SPT=44958 DPT=9105 SEQ=1965286602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B723DCF0000000001030307) Nov 23 04:21:13 localhost systemd-logind[760]: New session 52 of user zuul. Nov 23 04:21:13 localhost systemd[1]: Started Session 52 of User zuul. Nov 23 04:21:15 localhost python3.9[159619]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:21:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22372 DF PROTO=TCP SPT=55366 DPT=9882 SEQ=2726065783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B724A720000000001030307) Nov 23 04:21:17 localhost python3.9[159715]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:21:18 localhost python3.9[159820]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:21:18 localhost systemd[1]: libpod-bb1d23d516a56617282b2c2fb64be37968a6a8c8b5f0fae5a4a643cb6d0f63a3.scope: Deactivated successfully. 
Nov 23 04:21:18 localhost podman[159821]: 2025-11-23 09:21:18.45424722 +0000 UTC m=+0.088130772 container died bb1d23d516a56617282b2c2fb64be37968a6a8c8b5f0fae5a4a643cb6d0f63a3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, version=17.1.12, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 23 04:21:18 localhost podman[159821]: 2025-11-23 09:21:18.49121598 +0000 UTC m=+0.125099532 container cleanup bb1d23d516a56617282b2c2fb64be37968a6a8c8b5f0fae5a4a643cb6d0f63a3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Nov 23 04:21:18 localhost podman[159836]: 2025-11-23 09:21:18.548132924 +0000 UTC m=+0.081046317 container remove bb1d23d516a56617282b2c2fb64be37968a6a8c8b5f0fae5a4a643cb6d0f63a3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, 
tcib_managed=true, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64) Nov 23 04:21:18 localhost systemd[1]: libpod-conmon-bb1d23d516a56617282b2c2fb64be37968a6a8c8b5f0fae5a4a643cb6d0f63a3.scope: Deactivated successfully. Nov 23 04:21:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37967 DF PROTO=TCP SPT=60424 DPT=9882 SEQ=1285668424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7252100000000001030307) Nov 23 04:21:19 localhost systemd[1]: tmp-crun.NDxd3S.mount: Deactivated successfully. Nov 23 04:21:19 localhost systemd[1]: var-lib-containers-storage-overlay-76c7afe789f8282c8bf955cc2a3d04c1e3de5492a4e87c0d5c8697c6c76b2000-merged.mount: Deactivated successfully. Nov 23 04:21:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb1d23d516a56617282b2c2fb64be37968a6a8c8b5f0fae5a4a643cb6d0f63a3-userdata-shm.mount: Deactivated successfully. Nov 23 04:21:20 localhost python3.9[159941]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:21:20 localhost systemd[1]: Reloading. Nov 23 04:21:20 localhost systemd-rc-local-generator[159966]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:21:20 localhost systemd-sysv-generator[159969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:21:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:21:21 localhost python3.9[160066]: ansible-ansible.builtin.service_facts Invoked Nov 23 04:21:21 localhost network[160083]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:21:21 localhost network[160084]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:21:21 localhost network[160085]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:21:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31287 DF PROTO=TCP SPT=33210 DPT=9101 SEQ=3927835828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B725E0F0000000001030307) Nov 23 04:21:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:21:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43432 DF PROTO=TCP SPT=36634 DPT=9102 SEQ=442263159 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B726F100000000001030307) Nov 23 04:21:29 localhost python3.9[160363]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:21:29 localhost systemd[1]: Reloading. Nov 23 04:21:30 localhost systemd-rc-local-generator[160390]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:21:30 localhost systemd-sysv-generator[160393]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:21:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45011 DF PROTO=TCP SPT=43002 DPT=9100 SEQ=3447905834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B727D9E0000000001030307) Nov 23 04:21:30 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. Nov 23 04:21:31 localhost python3.9[160495]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:21:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45012 DF PROTO=TCP SPT=43002 DPT=9100 SEQ=3447905834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72818F0000000001030307) Nov 23 04:21:32 localhost python3.9[160588]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:21:33 localhost python3.9[160681]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:21:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19186 DF PROTO=TCP SPT=45408 DPT=9100 SEQ=4027144144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B728E0F0000000001030307) Nov 23 04:21:35 localhost python3.9[160774]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:21:35 localhost python3.9[160867]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False 
daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:21:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:21:36 localhost podman[160961]: 2025-11-23 09:21:36.486388836 +0000 UTC m=+0.099337453 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:21:36 localhost podman[160961]: 2025-11-23 09:21:36.537394784 +0000 UTC m=+0.150343361 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:21:36 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
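The ansible.builtin.systemd_service, ansible.builtin.file and systemctl reset-failed invocations logged through this stretch amount to the usual unit-retirement sequence for the tripleo_nova_* units: stop and disable each unit, delete its unit files from /usr/lib/systemd/system and /etc/systemd/system, daemon-reload, then clear any lingering failed state. A rough, illustrative sketch of that sequence; the helper itself is not part of the playbook, only the unit name and paths are taken from the log:

    import pathlib
    import subprocess

    def retire_unit(unit: str) -> None:
        # Stop and disable, tolerating units that are already gone.
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)
        # Remove both the packaged and the locally overriding unit file.
        for base in ("/usr/lib/systemd/system", "/etc/systemd/system"):
            pathlib.Path(base, unit).unlink(missing_ok=True)
        # Pick up the removal and drop any remembered failed state.
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "reset-failed", unit], check=False)

    retire_unit("tripleo_nova_virtqemud.service")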
Nov 23 04:21:36 localhost python3.9[160960]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:21:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45014 DF PROTO=TCP SPT=43002 DPT=9100 SEQ=3447905834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72994F0000000001030307) Nov 23 04:21:37 localhost python3.9[161077]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:21:38 localhost systemd[1]: tmp-crun.dXZWCk.mount: Deactivated successfully. Nov 23 04:21:38 localhost podman[161170]: 2025-11-23 09:21:38.337402338 +0000 UTC m=+0.096932914 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:21:38 localhost podman[161170]: 2025-11-23 09:21:38.347363701 +0000 UTC m=+0.106894267 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:21:38 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:21:38 localhost python3.9[161169]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:39 localhost python3.9[161280]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:39 localhost python3.9[161372]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22571 DF PROTO=TCP SPT=42382 DPT=9105 SEQ=2707159102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72A34F0000000001030307) Nov 23 04:21:40 localhost python3.9[161464]: ansible-ansible.builtin.file Invoked 
with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:40 localhost python3.9[161556]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:41 localhost python3.9[161648]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:42 localhost python3.9[161740]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:42 localhost python3.9[161832]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:43 localhost python3.9[161924]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22572 DF PROTO=TCP SPT=42382 DPT=9105 SEQ=2707159102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72B30F0000000001030307) Nov 23 04:21:44 localhost python3.9[162017]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Nov 23 04:21:45 localhost python3.9[162109]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:45 localhost python3.9[162201]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:47 localhost python3.9[162293]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:21:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54582 DF PROTO=TCP SPT=33582 DPT=9882 SEQ=1246123830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72BFA30000000001030307) Nov 23 04:21:47 localhost python3.9[162385]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:21:48 localhost python3.9[162477]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 23 04:21:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22377 DF PROTO=TCP SPT=55366 DPT=9882 SEQ=2726065783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72C60F0000000001030307) Nov 23 04:21:49 localhost python3.9[162569]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:21:49 localhost systemd[1]: Reloading. Nov 23 04:21:49 localhost systemd-rc-local-generator[162597]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:21:49 localhost systemd-sysv-generator[162600]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:21:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:21:50 localhost python3.9[162697]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:21:52 localhost python3.9[162790]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:21:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40479 DF PROTO=TCP SPT=60744 DPT=9101 SEQ=2767863838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72D40F0000000001030307) Nov 23 04:21:52 localhost python3.9[162883]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:21:53 localhost python3.9[162976]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:21:53 localhost sshd[162978]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:21:54 localhost python3.9[163071]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:21:55 localhost python3.9[163164]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:21:55 localhost python3.9[163257]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:21:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33836 DF PROTO=TCP SPT=38474 DPT=9102 SEQ=954153526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72E44F0000000001030307) Nov 23 04:21:57 localhost python3.9[163350]: ansible-ansible.builtin.getent 
Invoked with database=passwd key=libvirt fail_key=True service=None split=None Nov 23 04:21:58 localhost python3.9[163443]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Nov 23 04:21:58 localhost systemd-journald[47422]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 76.6 (255 of 333 items), suggesting rotation. Nov 23 04:21:58 localhost systemd-journald[47422]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 23 04:21:58 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:21:59 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:21:59 localhost python3.9[163542]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532584.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Nov 23 04:22:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13868 DF PROTO=TCP SPT=35284 DPT=9100 SEQ=2333831607 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72F2CD0000000001030307) Nov 23 04:22:01 localhost python3.9[163642]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:22:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13869 DF PROTO=TCP SPT=35284 DPT=9100 SEQ=2333831607 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B72F6CF0000000001030307) Nov 23 04:22:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:22:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5370 writes, 735 syncs, 7.31 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) 
CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f46733610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, 
interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f46733610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Nov 23 04:22:02 localhost python3.9[163696]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto 
best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:22:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31257 DF PROTO=TCP SPT=50912 DPT=9100 SEQ=1126995622 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73020F0000000001030307) Nov 23 04:22:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:22:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5205 writes, 665 syncs, 7.83 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56035b7962d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 9.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 
KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56035b7962d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 9.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 
interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Nov 23 04:22:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:22:06 localhost podman[163747]: 2025-11-23 09:22:06.919567567 +0000 UTC m=+0.095768202 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:22:06 localhost podman[163747]: 2025-11-23 09:22:06.991427067 +0000 UTC m=+0.167627702 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, 
org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 04:22:07 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:22:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13871 DF PROTO=TCP SPT=35284 DPT=9100 SEQ=2333831607 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B730E900000000001030307) Nov 23 04:22:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:22:08 localhost podman[163791]: 2025-11-23 09:22:08.903053682 +0000 UTC m=+0.089115043 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS) Nov 23 04:22:08 localhost podman[163791]: 2025-11-23 09:22:08.9396176 +0000 UTC m=+0.125679001 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0) Nov 23 04:22:08 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:22:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:22:09.700 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:22:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:22:09.701 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:22:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:22:09.701 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:22:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3391 DF PROTO=TCP SPT=44082 DPT=9105 SEQ=3051524722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73184F0000000001030307) Nov 23 04:22:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3392 DF PROTO=TCP SPT=44082 DPT=9105 SEQ=3051524722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7328100000000001030307) Nov 23 04:22:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62108 DF PROTO=TCP SPT=38658 DPT=9882 SEQ=3479646938 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7334D30000000001030307) Nov 23 04:22:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54587 DF PROTO=TCP SPT=33582 DPT=9882 SEQ=1246123830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A6B733C100000000001030307) Nov 23 04:22:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3393 DF PROTO=TCP SPT=44082 DPT=9105 SEQ=3051524722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7348100000000001030307) Nov 23 04:22:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55156 DF PROTO=TCP SPT=52298 DPT=9102 SEQ=3348968598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73598F0000000001030307) Nov 23 04:22:27 localhost kernel: SELinux: Converting 2746 SID table entries... Nov 23 04:22:27 localhost kernel: SELinux: Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped). Nov 23 04:22:27 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 04:22:27 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 04:22:27 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 04:22:27 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 04:22:27 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 04:22:27 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 04:22:27 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 04:22:28 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=19 res=1 Nov 23 04:22:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54647 DF PROTO=TCP SPT=49378 DPT=9100 SEQ=1716083917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7367FD0000000001030307) Nov 23 04:22:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54648 DF PROTO=TCP SPT=49378 DPT=9100 SEQ=1716083917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B736C0F0000000001030307) Nov 23 04:22:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45017 DF PROTO=TCP SPT=43002 DPT=9100 SEQ=3447905834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73780F0000000001030307) Nov 23 04:22:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54650 DF PROTO=TCP SPT=49378 DPT=9100 SEQ=1716083917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7383CF0000000001030307) Nov 23 04:22:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:22:38 localhost systemd[1]: tmp-crun.iZEdSJ.mount: Deactivated successfully. 
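The long ceph-osd "DUMPING STATS" entries earlier in this stretch carry their embedded newlines as the octal escape #012, which is how rsyslog/journald render control characters when an entry is flattened onto a single exported line. A small sketch that restores the original multi-line RocksDB stats block; the sample is a shortened fragment of one of those entries:

    def unescape(line: str) -> str:
        # "#012" is the octal escape for "\n" (LF) used in these log exports.
        return line.replace("#012", "\n")

    sample = ("** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval"
              "#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent")
    print(unescape(sample))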
Nov 23 04:22:38 localhost podman[164906]: 2025-11-23 09:22:38.094672115 +0000 UTC m=+0.269821442 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true) Nov 23 04:22:38 localhost podman[164906]: 2025-11-23 09:22:38.14960122 +0000 UTC m=+0.324750587 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 23 04:22:38 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:22:38 localhost kernel: SELinux: Converting 2749 SID table entries... 
Nov 23 04:22:38 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 04:22:38 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 04:22:38 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 04:22:38 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 04:22:38 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 04:22:38 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 04:22:38 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 04:22:39 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=20 res=1 Nov 23 04:22:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5047 DF PROTO=TCP SPT=51620 DPT=9105 SEQ=1168554921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B738D8F0000000001030307) Nov 23 04:22:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:22:39 localhost systemd[1]: tmp-crun.KrAa0R.mount: Deactivated successfully. Nov 23 04:22:39 localhost podman[164940]: 2025-11-23 09:22:39.913150581 +0000 UTC m=+0.091331221 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:22:39 localhost podman[164940]: 2025-11-23 09:22:39.943628102 +0000 UTC m=+0.121808742 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 04:22:39 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:22:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5048 DF PROTO=TCP SPT=51620 DPT=9105 SEQ=1168554921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B739D4F0000000001030307) Nov 23 04:22:43 localhost sshd[164960]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:22:47 localhost kernel: SELinux: Converting 2749 SID table entries... 
Nov 23 04:22:47 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 04:22:47 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 04:22:47 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 04:22:47 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 04:22:47 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 04:22:47 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 04:22:47 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 04:22:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11385 DF PROTO=TCP SPT=51846 DPT=9882 SEQ=1395605384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73AA030000000001030307) Nov 23 04:22:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60683 DF PROTO=TCP SPT=36312 DPT=9102 SEQ=3001127679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73B2E40000000001030307) Nov 23 04:22:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5049 DF PROTO=TCP SPT=51620 DPT=9105 SEQ=1168554921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73BE0F0000000001030307) Nov 23 04:22:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21847 DF PROTO=TCP SPT=60386 DPT=9101 SEQ=135474944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73C80F0000000001030307) Nov 23 04:22:57 localhost kernel: SELinux: Converting 2749 SID table entries... 
Nov 23 04:22:57 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 04:22:57 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 04:22:57 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 04:22:57 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 04:22:57 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 04:22:57 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 04:22:57 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 04:23:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41754 DF PROTO=TCP SPT=38584 DPT=9100 SEQ=2666638986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73DD2E0000000001030307) Nov 23 04:23:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41755 DF PROTO=TCP SPT=38584 DPT=9100 SEQ=2666638986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73E14F0000000001030307) Nov 23 04:23:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13874 DF PROTO=TCP SPT=35284 DPT=9100 SEQ=2333831607 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73EC100000000001030307) Nov 23 04:23:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41757 DF PROTO=TCP SPT=38584 DPT=9100 SEQ=2666638986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B73F90F0000000001030307) Nov 23 04:23:08 localhost kernel: SELinux: Converting 2749 SID table entries... Nov 23 04:23:08 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 04:23:08 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 04:23:08 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 04:23:08 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 04:23:08 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 04:23:08 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 04:23:08 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 04:23:08 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=23 res=1 Nov 23 04:23:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 04:23:08 localhost podman[164989]: 2025-11-23 09:23:08.908457576 +0000 UTC m=+0.079595534 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 23 04:23:09 localhost podman[164989]: 2025-11-23 09:23:09.002872174 +0000 UTC m=+0.174010152 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Nov 23 04:23:09 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
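The podman records above show the health-check pattern used for ovn_controller and ovn_metadata_agent: systemd starts a transient unit that runs /usr/bin/podman healthcheck run <container id>, podman emits a health_status event (healthy here) followed by exec_died when the check process exits, and the unit deactivates until the next cycle. To pull just the health results out of a dump like this, a minimal sketch, assuming the inline name=... and health_status=... labels shown in the events above:

import re
import sys

# podman health-check events in this journal embed their labels inline, e.g.
#   container health_status <64-hex id> (image=..., name=ovn_controller, health_status=healthy, ...)
HEALTH_RE = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) \((?P<labels>[^)]*)\)"
)

def health_events(lines):
    """Yield (container name, health status) for each health_status event found."""
    for line in lines:
        for m in HEALTH_RE.finditer(line):
            labels = dict(
                tok.split("=", 1) for tok in m["labels"].split(", ") if "=" in tok
            )
            yield labels.get("name", m["cid"][:12]), labels.get("health_status", "unknown")

if __name__ == "__main__":
    for name, status in health_events(sys.stdin):
        print(f"{name}: {status}")

This is only a text-level filter over the journal; it does not query podman itself, and events whose lines were wrapped mid-record in this dump will simply be skipped.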
Nov 23 04:23:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:23:09.701 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:23:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:23:09.702 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:23:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:23:09.702 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:23:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42748 DF PROTO=TCP SPT=36196 DPT=9105 SEQ=2247497652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7402CF0000000001030307) Nov 23 04:23:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:23:10 localhost systemd[1]: tmp-crun.ilGyoB.mount: Deactivated successfully. Nov 23 04:23:10 localhost podman[165016]: 2025-11-23 09:23:10.897872089 +0000 UTC m=+0.087550244 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:23:10 localhost podman[165016]: 2025-11-23 
09:23:10.927634807 +0000 UTC m=+0.117312972 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:23:10 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:23:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42749 DF PROTO=TCP SPT=36196 DPT=9105 SEQ=2247497652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7412900000000001030307) Nov 23 04:23:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41758 DF PROTO=TCP SPT=38584 DPT=9100 SEQ=2666638986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B741A100000000001030307) Nov 23 04:23:17 localhost kernel: SELinux: Converting 2749 SID table entries... Nov 23 04:23:17 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 04:23:17 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 04:23:17 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 04:23:17 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 04:23:17 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 04:23:17 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 04:23:17 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 04:23:17 localhost systemd[1]: Reloading. 
Nov 23 04:23:17 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=24 res=1 Nov 23 04:23:18 localhost systemd-rc-local-generator[165065]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:23:18 localhost systemd-sysv-generator[165071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:23:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:23:18 localhost systemd[1]: Reloading. Nov 23 04:23:18 localhost systemd-sysv-generator[165109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:23:18 localhost systemd-rc-local-generator[165106]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:23:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:23:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11390 DF PROTO=TCP SPT=51846 DPT=9882 SEQ=1395605384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74260F0000000001030307) Nov 23 04:23:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42750 DF PROTO=TCP SPT=36196 DPT=9105 SEQ=2247497652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7432100000000001030307) Nov 23 04:23:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48296 DF PROTO=TCP SPT=43430 DPT=9102 SEQ=743460663 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7443CF0000000001030307) Nov 23 04:23:28 localhost kernel: SELinux: Converting 2750 SID table entries... 
Nov 23 04:23:28 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 23 04:23:28 localhost kernel: SELinux: policy capability open_perms=1 Nov 23 04:23:28 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 23 04:23:28 localhost kernel: SELinux: policy capability always_check_network=0 Nov 23 04:23:28 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 04:23:28 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 04:23:28 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 23 04:23:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41767 DF PROTO=TCP SPT=43746 DPT=9100 SEQ=27133464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74525E0000000001030307) Nov 23 04:23:30 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=25 res=1 Nov 23 04:23:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41768 DF PROTO=TCP SPT=43746 DPT=9100 SEQ=27133464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7456500000000001030307) Nov 23 04:23:32 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Nov 23 04:23:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54653 DF PROTO=TCP SPT=49378 DPT=9100 SEQ=1716083917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74620F0000000001030307) Nov 23 04:23:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41770 DF PROTO=TCP SPT=43746 DPT=9100 SEQ=27133464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B746E100000000001030307) Nov 23 04:23:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:23:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47537 DF PROTO=TCP SPT=35382 DPT=9105 SEQ=912805957 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74780F0000000001030307) Nov 23 04:23:39 localhost systemd[1]: tmp-crun.BMz6tq.mount: Deactivated successfully. 
Nov 23 04:23:39 localhost podman[165277]: 2025-11-23 09:23:39.921348613 +0000 UTC m=+0.098828645 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller) Nov 23 04:23:39 localhost podman[165277]: 2025-11-23 09:23:39.963779037 +0000 UTC m=+0.141259129 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118) Nov 23 04:23:39 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:23:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:23:41 localhost systemd[1]: tmp-crun.MAfXi5.mount: Deactivated successfully. 
Nov 23 04:23:41 localhost podman[165302]: 2025-11-23 09:23:41.902375162 +0000 UTC m=+0.085769252 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 04:23:41 localhost podman[165302]: 2025-11-23 09:23:41.911627104 +0000 UTC m=+0.095021214 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 04:23:41 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:23:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47538 DF PROTO=TCP SPT=35382 DPT=9105 SEQ=912805957 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7487CF0000000001030307) Nov 23 04:23:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13496 DF PROTO=TCP SPT=43042 DPT=9882 SEQ=290562359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7494630000000001030307) Nov 23 04:23:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19719 DF PROTO=TCP SPT=38416 DPT=9882 SEQ=3422035737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B749C0F0000000001030307) Nov 23 04:23:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15529 DF PROTO=TCP SPT=53566 DPT=9101 SEQ=620971729 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74A80F0000000001030307) Nov 23 04:23:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54990 DF PROTO=TCP SPT=39254 DPT=9102 SEQ=4136009399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74B90F0000000001030307) Nov 23 04:24:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19780 DF PROTO=TCP SPT=59858 DPT=9100 SEQ=674189289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74C78D0000000001030307) Nov 23 04:24:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19781 DF PROTO=TCP SPT=59858 DPT=9100 SEQ=674189289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74CB8F0000000001030307) Nov 23 04:24:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41760 DF PROTO=TCP SPT=38584 DPT=9100 SEQ=2666638986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74D80F0000000001030307) Nov 23 04:24:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19783 DF PROTO=TCP SPT=59858 DPT=9100 SEQ=674189289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74E3500000000001030307) Nov 23 04:24:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:24:09.704 159415 DEBUG 
oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:24:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:24:09.707 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:24:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:24:09.707 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:24:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1985 DF PROTO=TCP SPT=50914 DPT=9105 SEQ=1183392462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74ED0F0000000001030307) Nov 23 04:24:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:24:10 localhost podman[182308]: 2025-11-23 09:24:10.91261639 +0000 UTC m=+0.096317106 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:24:10 localhost systemd[1]: tmp-crun.3gpVTa.mount: Deactivated successfully. 
Nov 23 04:24:11 localhost podman[182308]: 2025-11-23 09:24:11.01739 +0000 UTC m=+0.201090696 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:24:11 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:24:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:24:12 localhost podman[182431]: 2025-11-23 09:24:12.045711979 +0000 UTC m=+0.090487341 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base 
Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 04:24:12 localhost podman[182431]: 2025-11-23 09:24:12.080351814 +0000 UTC m=+0.125127156 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible) Nov 23 04:24:12 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:24:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1986 DF PROTO=TCP SPT=50914 DPT=9105 SEQ=1183392462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B74FCD00000000001030307) Nov 23 04:24:15 localhost systemd[1]: Stopping OpenSSH server daemon... Nov 23 04:24:15 localhost systemd[1]: sshd.service: Deactivated successfully. Nov 23 04:24:15 localhost systemd[1]: Stopped OpenSSH server daemon. Nov 23 04:24:15 localhost systemd[1]: sshd.service: Consumed 1.267s CPU time, read 32.0K from disk, written 0B to disk. Nov 23 04:24:15 localhost systemd[1]: Stopped target sshd-keygen.target. Nov 23 04:24:15 localhost systemd[1]: Stopping sshd-keygen.target... Nov 23 04:24:15 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 04:24:15 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). 
Nov 23 04:24:15 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 23 04:24:15 localhost systemd[1]: Reached target sshd-keygen.target. Nov 23 04:24:15 localhost systemd[1]: Starting OpenSSH server daemon... Nov 23 04:24:15 localhost sshd[183089]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:24:15 localhost systemd[1]: Started OpenSSH server daemon. Nov 23 04:24:15 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:15 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:15 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:15 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:15 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:15 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:15 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: 
notify-reload Nov 23 04:24:16 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29991 DF PROTO=TCP SPT=53640 DPT=9882 SEQ=1884596257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7509930000000001030307) Nov 23 04:24:17 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 04:24:17 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 04:24:17 localhost systemd[1]: Reloading. Nov 23 04:24:17 localhost systemd-rc-local-generator[183318]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:24:17 localhost systemd-sysv-generator[183323]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:24:17 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:17 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:24:17 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:17 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:17 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:17 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:17 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:17 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:17 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:17 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 04:24:17 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. 
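Each "Reloading." pass above re-runs the systemd generators and re-parses the unit files, so the same compatibility warnings repeat: rc.local not marked executable, the SysV 'network' initscript, the deprecated MemoryLimit= in insights-client-boot.service, and "Failed to parse service type, ignoring: notify-reload" from the libvirt units, which suggests those unit files target a newer systemd than the one running on this host. A small sketch for collapsing the repeats when reviewing the journal; the two message shapes it matches are assumptions taken from the lines above:

import re
import sys
from collections import Counter

# The repeating systemd unit-file complaints in this journal have two shapes:
#   ".../virtqemud.service:25: Failed to parse service type, ignoring: notify-reload"
#   ".../insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. ..."
WARN_RE = re.compile(
    r"systemd\[1\]: (?P<unit>/\S+?):\d+: "
    r"(?P<msg>Failed to parse service type, ignoring: \S+|Unit uses MemoryLimit=)"
)

def unit_warnings(lines):
    """Count each (unit file, warning) pair so repeated reloads collapse to one row."""
    counts = Counter()
    for line in lines:
        for m in WARN_RE.finditer(line):
            counts[(m["unit"], m["msg"])] += 1
    return counts

if __name__ == "__main__":
    for (unit, msg), n in sorted(unit_warnings(sys.stdin).items()):
        print(f"{n:4d}x  {unit}  {msg}")

The per-unit counts make it easier to see that the warnings are a fixed set repeated on every reload, not a growing problem.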
Nov 23 04:24:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13501 DF PROTO=TCP SPT=43042 DPT=9882 SEQ=290562359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75100F0000000001030307) Nov 23 04:24:21 localhost python3.9[187413]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 23 04:24:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1987 DF PROTO=TCP SPT=50914 DPT=9105 SEQ=1183392462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B751E0F0000000001030307) Nov 23 04:24:22 localhost systemd[1]: Reloading. Nov 23 04:24:22 localhost systemd-sysv-generator[188555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:24:22 localhost systemd-rc-local-generator[188546]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:24 localhost python3.9[189349]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 23 04:24:24 localhost systemd[1]: Reloading. Nov 23 04:24:24 localhost systemd-sysv-generator[189545]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:24:24 localhost systemd-rc-local-generator[189541]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 23 04:24:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:24:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:25 localhost python3.9[189909]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 23 04:24:25 localhost systemd[1]: Reloading. Nov 23 04:24:25 localhost systemd-rc-local-generator[190097]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:24:25 localhost systemd-sysv-generator[190101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44139 DF PROTO=TCP SPT=38266 DPT=9102 SEQ=1199801074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B752E4F0000000001030307) Nov 23 04:24:26 localhost python3.9[190523]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 23 04:24:26 localhost systemd[1]: Reloading. Nov 23 04:24:27 localhost systemd-sysv-generator[190760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:24:27 localhost systemd-rc-local-generator[190757]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:28 localhost python3.9[191179]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:28 localhost systemd[1]: Reloading. Nov 23 04:24:28 localhost systemd-rc-local-generator[191422]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:24:28 localhost systemd-sysv-generator[191425]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:29 localhost python3.9[191808]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:29 localhost systemd[1]: Reloading. Nov 23 04:24:29 localhost systemd-rc-local-generator[192044]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:24:29 localhost systemd-sysv-generator[192050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27629 DF PROTO=TCP SPT=46828 DPT=9100 SEQ=3601523799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B753CBD0000000001030307) Nov 23 04:24:30 localhost python3.9[192473]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:30 localhost systemd[1]: Reloading. Nov 23 04:24:30 localhost systemd-rc-local-generator[192686]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:24:30 localhost systemd-sysv-generator[192691]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:31 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 23 04:24:31 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 04:24:31 localhost systemd[1]: man-db-cache-update.service: Consumed 16.418s CPU time. Nov 23 04:24:31 localhost systemd[1]: run-rf343cb4b788a4d02a1fccdaeeda925d9.service: Deactivated successfully. Nov 23 04:24:31 localhost systemd[1]: run-r0f4d09f67c244c6897cdee6ce96af4fe.service: Deactivated successfully. Nov 23 04:24:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27630 DF PROTO=TCP SPT=46828 DPT=9100 SEQ=3601523799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7540CF0000000001030307) Nov 23 04:24:31 localhost python3.9[192940]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:32 localhost python3.9[193053]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:32 localhost systemd[1]: Reloading. Nov 23 04:24:32 localhost systemd-rc-local-generator[193085]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:24:32 localhost systemd-sysv-generator[193088]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:33 localhost python3.9[193201]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 23 04:24:33 localhost systemd[1]: Reloading. Nov 23 04:24:33 localhost systemd-rc-local-generator[193243]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:24:33 localhost systemd-sysv-generator[193249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:24:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41773 DF PROTO=TCP SPT=43746 DPT=9100 SEQ=27133464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B754C0F0000000001030307) Nov 23 04:24:34 localhost python3.9[193404]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:36 localhost python3.9[193549]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:37 localhost python3.9[193662]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27632 DF PROTO=TCP SPT=46828 DPT=9100 SEQ=3601523799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75588F0000000001030307) Nov 23 04:24:38 localhost python3.9[193775]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:39 localhost python3.9[193888]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10274 DF PROTO=TCP SPT=37900 DPT=9105 SEQ=1168994282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75624F0000000001030307) Nov 23 04:24:41 localhost python3.9[194001]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 04:24:41 localhost podman[194003]: 2025-11-23 09:24:41.352188599 +0000 UTC m=+0.094102216 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:24:41 localhost podman[194003]: 2025-11-23 09:24:41.448411619 +0000 UTC m=+0.190325146 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:24:41 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:24:42 localhost python3.9[194139]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 04:24:42 localhost podman[194142]: 2025-11-23 09:24:42.209452221 +0000 UTC m=+0.076916712 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:24:42 localhost podman[194142]: 2025-11-23 09:24:42.245611273 +0000 UTC m=+0.113075724 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Nov 23 04:24:42 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:24:42 localhost python3.9[194269]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:43 localhost python3.9[194382]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10275 DF PROTO=TCP SPT=37900 DPT=9105 SEQ=1168994282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75720F0000000001030307) Nov 23 04:24:44 localhost python3.9[194495]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:45 localhost python3.9[194608]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:46 localhost python3.9[194721]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34857 DF PROTO=TCP SPT=48654 DPT=9882 SEQ=2939326056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B757EC30000000001030307) Nov 23 04:24:47 localhost python3.9[194834]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29996 DF PROTO=TCP SPT=53640 DPT=9882 SEQ=1884596257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75860F0000000001030307) Nov 23 04:24:49 localhost python3.9[194947]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 23 04:24:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10276 DF PROTO=TCP SPT=37900 DPT=9105 SEQ=1168994282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75920F0000000001030307) Nov 23 04:24:52 localhost python3.9[195060]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t 
state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:24:52 localhost python3.9[195170]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:24:53 localhost python3.9[195280]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:24:53 localhost python3.9[195390]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:24:54 localhost python3.9[195500]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:24:55 localhost python3.9[195610]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:24:56 localhost python3.9[195720]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:24:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32966 DF PROTO=TCP SPT=59548 DPT=9102 SEQ=926149942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75A34F0000000001030307) Nov 23 04:24:56 localhost python3.9[195810]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889895.4830735-1643-46127123528234/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:24:57 localhost python3.9[195920]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:24:57 localhost python3.9[196010]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889896.9556363-1643-52508888046754/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:24:58 localhost python3.9[196120]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:24:59 localhost python3.9[196210]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889898.1084309-1643-59487645247820/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:24:59 localhost python3.9[196320]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17288 DF PROTO=TCP SPT=57514 DPT=9100 SEQ=855113025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75B1EE0000000001030307) Nov 23 04:25:00 localhost python3.9[196410]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889899.2333505-1643-263320772088846/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:00 localhost python3.9[196520]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17289 DF PROTO=TCP SPT=57514 DPT=9100 SEQ=855113025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75B60F0000000001030307) Nov 23 04:25:01 localhost python3.9[196610]: 
ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889900.3673575-1643-87045996243931/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:02 localhost python3.9[196720]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:03 localhost python3.9[196810]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889901.5581627-1643-240397681910058/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:03 localhost python3.9[196920]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19786 DF PROTO=TCP SPT=59858 DPT=9100 SEQ=674189289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75C20F0000000001030307) Nov 23 04:25:04 localhost python3.9[197008]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889903.3568494-1643-263885272139126/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:05 localhost python3.9[197118]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:06 localhost python3.9[197208]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889905.060147-1643-157353975887482/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17291 DF PROTO=TCP SPT=57514 DPT=9100 SEQ=855113025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A6B75CDD00000000001030307) Nov 23 04:25:07 localhost python3.9[197318]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:08 localhost python3.9[197428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:09 localhost python3.9[197538]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:25:09.705 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:25:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:25:09.706 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:25:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:25:09.707 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:25:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32119 DF PROTO=TCP SPT=49502 DPT=9105 SEQ=1406364887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75D78F0000000001030307) Nov 23 04:25:10 localhost python3.9[197648]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:10 localhost python3.9[197758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:11 localhost python3.9[197868]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:25:11 localhost podman[197979]: 2025-11-23 09:25:11.804408229 +0000 UTC m=+0.081984183 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118) Nov 23 04:25:11 localhost podman[197979]: 2025-11-23 09:25:11.870401412 +0000 UTC m=+0.147977316 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 04:25:11 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:25:11 localhost python3.9[197978]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:25:12 localhost systemd[1]: tmp-crun.fOvC1V.mount: Deactivated successfully. Nov 23 04:25:12 localhost podman[198114]: 2025-11-23 09:25:12.437807059 +0000 UTC m=+0.092726720 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 04:25:12 localhost podman[198114]: 2025-11-23 09:25:12.469664084 +0000 UTC m=+0.124583725 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 
Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Nov 23 04:25:12 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:25:12 localhost python3.9[198113]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:13 localhost python3.9[198239]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:13 localhost python3.9[198349]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32120 DF PROTO=TCP SPT=49502 DPT=9105 SEQ=1406364887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75E7500000000001030307) Nov 23 04:25:14 localhost python3.9[198459]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None 
seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:14 localhost python3.9[198569]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:15 localhost python3.9[198679]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:16 localhost python3.9[198789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:16 localhost python3.9[198899]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5959 DF PROTO=TCP SPT=45034 DPT=9882 SEQ=2549155682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75F3F30000000001030307) Nov 23 04:25:18 localhost python3.9[199009]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:19 localhost python3.9[199097]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889918.3614597-2306-25226689010660/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19591 DF PROTO=TCP SPT=58882 DPT=9102 SEQ=2483554525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B75FCD40000000001030307) Nov 23 04:25:19 localhost python3.9[199207]: ansible-ansible.legacy.stat Invoked with 
path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:21 localhost python3.9[199295]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889919.484731-2306-144848412045794/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:21 localhost python3.9[199405]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32121 DF PROTO=TCP SPT=49502 DPT=9105 SEQ=1406364887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76080F0000000001030307) Nov 23 04:25:22 localhost python3.9[199493]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889921.2423236-2306-43781193943929/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:22 localhost python3.9[199603]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:23 localhost python3.9[199691]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889922.3731744-2306-118711162435676/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:24 localhost python3.9[199801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:24 localhost python3.9[199889]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889923.5817437-2306-133598533188551/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33360 DF PROTO=TCP SPT=42626 DPT=9101 SEQ=418340947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7612100000000001030307) Nov 23 04:25:25 localhost python3.9[199999]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:25 localhost python3.9[200087]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889924.7590718-2306-852468179007/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:26 localhost python3.9[200197]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:26 localhost python3.9[200285]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889925.9370937-2306-23955657750060/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:27 localhost python3.9[200395]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:28 localhost python3.9[200483]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889927.1406868-2306-221245730065295/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:28 localhost python3.9[200593]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:29 localhost python3.9[200681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1763889928.5011983-2306-11720800145749/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:30 localhost python3.9[200791]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37203 DF PROTO=TCP SPT=37494 DPT=9100 SEQ=1930480195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7627200000000001030307) Nov 23 04:25:30 localhost python3.9[200879]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889929.6099968-2306-120828008909004/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37204 DF PROTO=TCP SPT=37494 DPT=9100 SEQ=1930480195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B762B0F0000000001030307) Nov 23 04:25:31 localhost python3.9[200989]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:32 localhost python3.9[201077]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889930.8298213-2306-205256363407542/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:32 localhost python3.9[201187]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:33 localhost python3.9[201275]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889932.4021661-2306-138004774276599/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER 
validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27635 DF PROTO=TCP SPT=46828 DPT=9100 SEQ=3601523799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76360F0000000001030307) Nov 23 04:25:34 localhost python3.9[201385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:34 localhost python3.9[201473]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889933.8632312-2306-254091329752486/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:35 localhost python3.9[201583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:36 localhost python3.9[201697]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889935.0512846-2306-21530952936361/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:36 localhost python3.9[201852]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:25:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37206 DF PROTO=TCP SPT=37494 DPT=9100 SEQ=1930480195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7642CF0000000001030307) Nov 23 04:25:37 localhost python3.9[202017]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Nov 23 04:25:38 localhost python3.9[202145]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:25:38 localhost systemd[1]: Reloading. Nov 23 04:25:38 localhost systemd-rc-local-generator[202168]: /etc/rc.d/rc.local is not marked executable, skipping. 
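[Editor's example] The file and copy tasks above lay down one systemd drop-in per libvirt socket unit. A minimal Python sketch of that layout follows; the real override.conf body is rendered from libvirt-socket.unit.j2 and is not logged (content=NOT_LOGGING_PARAMETER), so the body used here is only a placeholder.

    import os

    SOCKETS = [
        "virtlogd", "virtlogd-admin",
        "virtnodedevd", "virtnodedevd-ro", "virtnodedevd-admin",
        "virtproxyd", "virtproxyd-ro", "virtproxyd-admin",
        "virtqemud", "virtqemud-ro", "virtqemud-admin",
        "virtsecretd", "virtsecretd-ro", "virtsecretd-admin",
    ]
    # Placeholder body; the actual settings come from the (unlogged) libvirt-socket.unit.j2 template.
    OVERRIDE_BODY = "[Socket]\n"

    for name in SOCKETS:
        dropin = f"/etc/systemd/system/{name}.socket.d"
        os.makedirs(dropin, mode=0o755, exist_ok=True)   # directories created with mode=0755, as in the log
        conf = os.path.join(dropin, "override.conf")
        with open(conf, "w") as fh:                      # override.conf copied with mode=0644, owner/group root
            fh.write(OVERRIDE_BODY)
        os.chmod(conf, 0o644)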
Nov 23 04:25:38 localhost systemd-sysv-generator[202174]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:39 localhost systemd[1]: Starting libvirt logging daemon socket... Nov 23 04:25:39 localhost systemd[1]: Listening on libvirt logging daemon socket. Nov 23 04:25:39 localhost systemd[1]: Starting libvirt logging daemon admin socket... Nov 23 04:25:39 localhost systemd[1]: Listening on libvirt logging daemon admin socket. Nov 23 04:25:39 localhost systemd[1]: Starting libvirt logging daemon... Nov 23 04:25:39 localhost systemd[1]: Started libvirt logging daemon. Nov 23 04:25:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5710 DF PROTO=TCP SPT=48322 DPT=9105 SEQ=3008087071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B764CCF0000000001030307) Nov 23 04:25:40 localhost python3.9[202296]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:25:40 localhost systemd[1]: Reloading. Nov 23 04:25:40 localhost systemd-rc-local-generator[202320]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:25:40 localhost systemd-sysv-generator[202326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
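[Editor's example] Each of these restarts uses the same ansible.builtin.systemd pattern, daemon_reload=True plus state=restarted, so the new socket.d drop-ins are re-read before the daemon comes back up. A rough plain-subprocess equivalent, with the daemon list taken from the tasks in this log:

    import subprocess

    DAEMONS = ["virtlogd", "virtnodedevd", "virtproxyd", "virtqemud", "virtsecretd"]

    for svc in DAEMONS:
        # daemon-reload first so the freshly written socket.d/override.conf files are honoured
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "restart", f"{svc}.service"], check=True)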
Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:40 localhost systemd[1]: Starting libvirt nodedev daemon socket... Nov 23 04:25:40 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Nov 23 04:25:40 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Nov 23 04:25:40 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Nov 23 04:25:40 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. Nov 23 04:25:40 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. Nov 23 04:25:40 localhost systemd[1]: Started libvirt nodedev daemon. Nov 23 04:25:41 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Nov 23 04:25:41 localhost python3.9[202471]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:25:41 localhost systemd[1]: Reloading. Nov 23 04:25:41 localhost systemd-rc-local-generator[202493]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:25:41 localhost systemd-sysv-generator[202500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
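[Editor's example] The "Listening on ..." entries confirm that each libvirt socket unit comes up before its daemon. A quick after-the-fact check (assuming only that systemctl is on PATH) is to query the socket units directly:

    import subprocess

    UNITS = [
        "virtlogd.socket", "virtlogd-admin.socket",
        "virtnodedevd.socket", "virtnodedevd-ro.socket", "virtnodedevd-admin.socket",
    ]

    for unit in UNITS:
        state = subprocess.run(
            ["systemctl", "is-active", unit],
            capture_output=True, text=True,
        ).stdout.strip()
        print(f"{unit}: {state}")   # expect "active" once systemd reports "Listening on ..."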
Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:41 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Nov 23 04:25:41 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Nov 23 04:25:41 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. Nov 23 04:25:41 localhost systemd[1]: Starting libvirt proxy daemon socket... Nov 23 04:25:41 localhost systemd[1]: Listening on libvirt proxy daemon socket. Nov 23 04:25:41 localhost systemd[1]: Starting libvirt proxy daemon admin socket... Nov 23 04:25:41 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Nov 23 04:25:41 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. Nov 23 04:25:41 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Nov 23 04:25:41 localhost systemd[1]: Started libvirt proxy daemon. Nov 23 04:25:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:25:42 localhost systemd[1]: tmp-crun.HrMLGP.mount: Deactivated successfully. Nov 23 04:25:42 localhost podman[202652]: 2025-11-23 09:25:42.302942104 +0000 UTC m=+0.102713792 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 04:25:42 localhost podman[202652]: 2025-11-23 09:25:42.397445298 +0000 UTC m=+0.197216977 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, 
org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:25:42 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:25:42 localhost python3.9[202651]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:25:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:25:42 localhost systemd[1]: Reloading. Nov 23 04:25:42 localhost podman[202676]: 2025-11-23 09:25:42.615288797 +0000 UTC m=+0.102519365 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2) Nov 23 04:25:42 localhost systemd-rc-local-generator[202717]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:25:42 localhost systemd-sysv-generator[202721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:25:42 localhost podman[202676]: 2025-11-23 09:25:42.649537629 +0000 UTC m=+0.136768197 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 04:25:42 localhost setroubleshoot[202472]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ced0d57a-8f03-4ce3-a952-4be04185f9aa Nov 23 04:25:42 localhost setroubleshoot[202472]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. 
Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Nov 23 04:25:42 localhost setroubleshoot[202472]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ced0d57a-8f03-4ce3-a952-4be04185f9aa Nov 23 04:25:42 localhost setroubleshoot[202472]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:42 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:25:42 localhost systemd[1]: Listening on libvirt locking daemon socket. 
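[Editor's example] The setroubleshoot alert above embeds its own remediation recipe (ausearch piped to audit2allow, then semodule). The sketch below simply drives those suggested commands from Python; whether a local policy module is actually the right fix for the virtlogd dac_read_search denial, rather than reporting it as a bug as the catchall plugin notes, is a judgment call.

    import subprocess

    # Collect the raw AVC records for virtlogd, as the alert suggests.
    avcs = subprocess.run(
        ["ausearch", "-c", "virtlogd", "--raw"],
        capture_output=True, check=True,
    ).stdout
    # Build a local policy module from them ...
    subprocess.run(["audit2allow", "-M", "my-virtlogd"], input=avcs, check=True)
    # ... and install it at the priority given in the alert.
    subprocess.run(["semodule", "-X", "300", "-i", "my-virtlogd.pp"], check=True)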
Nov 23 04:25:42 localhost systemd[1]: Starting libvirt QEMU daemon socket... Nov 23 04:25:42 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 04:25:42 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Nov 23 04:25:42 localhost systemd[1]: Listening on libvirt QEMU daemon socket. Nov 23 04:25:42 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... Nov 23 04:25:42 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Nov 23 04:25:42 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Nov 23 04:25:42 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Nov 23 04:25:42 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Nov 23 04:25:42 localhost systemd[1]: Started libvirt QEMU daemon. Nov 23 04:25:43 localhost python3.9[202864]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:25:43 localhost systemd[1]: Reloading. Nov 23 04:25:43 localhost systemd-rc-local-generator[202890]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:25:43 localhost systemd-sysv-generator[202895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:25:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5711 DF PROTO=TCP SPT=48322 DPT=9105 SEQ=3008087071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B765C910000000001030307) Nov 23 04:25:43 localhost systemd[1]: Starting libvirt secret daemon socket... Nov 23 04:25:43 localhost systemd[1]: Listening on libvirt secret daemon socket. Nov 23 04:25:43 localhost systemd[1]: Starting libvirt secret daemon admin socket... 
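[Editor's example] With virtqemud and its sockets now active (virtsecretd follows immediately below), simply opening a connection is enough to exercise the socket activation. A minimal sketch, assuming the libvirt Python bindings are installed on this host:

    import libvirt  # libvirt-python bindings; their presence here is an assumption

    # qemu:///system goes through the virtqemud socket, so this triggers activation if needed.
    conn = libvirt.open("qemu:///system")
    try:
        print("libvirt version:", conn.getLibVersion())
        print("hostname:", conn.getHostname())
    finally:
        conn.close()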
Nov 23 04:25:43 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Nov 23 04:25:43 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Nov 23 04:25:43 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. Nov 23 04:25:43 localhost systemd[1]: Started libvirt secret daemon. Nov 23 04:25:44 localhost python3.9[203035]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:45 localhost python3.9[203145]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 23 04:25:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37207 DF PROTO=TCP SPT=37494 DPT=9100 SEQ=1930480195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76640F0000000001030307) Nov 23 04:25:46 localhost python3.9[203255]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:25:47 localhost python3.9[203367]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 23 04:25:48 localhost python3.9[203475]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:48 localhost python3.9[203561]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889947.5805068-3170-168363946513516/.source.xml follow=False _original_basename=secret.xml.j2 checksum=08854374a51612ae60ccb5be5d56c7ff5bc71f08 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5964 DF PROTO=TCP SPT=45034 DPT=9882 SEQ=2549155682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76700F0000000001030307) Nov 23 04:25:49 localhost python3.9[203671]: ansible-ansible.legacy.command Invoked with _raw_params=virsh 
secret-undefine 46550e70-79cb-5f55-bf6d-1204b97e083b#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:25:50 localhost python3.9[203791]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5712 DF PROTO=TCP SPT=48322 DPT=9105 SEQ=3008087071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B767C0F0000000001030307) Nov 23 04:25:52 localhost python3.9[204128]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:52 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Nov 23 04:25:52 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Nov 23 04:25:53 localhost python3.9[204238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:53 localhost python3.9[204326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889952.8948045-3335-83726549594758/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:54 localhost python3.9[204436]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:55 localhost python3.9[204546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:55 localhost python3.9[204603]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54012 DF PROTO=TCP SPT=47160 DPT=9102 SEQ=1997205071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B768DCF0000000001030307) Nov 23 04:25:56 localhost python3.9[204713]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:57 localhost python3.9[204770]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.o_n_4vrk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:57 localhost python3.9[204880]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:25:58 localhost python3.9[204937]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:25:58 localhost python3.9[205047]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:25:59 localhost python3[205158]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Nov 23 04:26:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56466 DF PROTO=TCP SPT=41114 DPT=9100 SEQ=1499784387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B769C4E0000000001030307) Nov 23 04:26:00 localhost python3.9[205268]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:26:00 localhost python3.9[205325]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56467 DF PROTO=TCP SPT=41114 DPT=9100 SEQ=1499784387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76A04F0000000001030307) Nov 23 04:26:01 localhost python3.9[205435]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:26:02 localhost python3.9[205492]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:02 localhost python3.9[205602]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:26:03 localhost python3.9[205659]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17294 DF PROTO=TCP SPT=57514 DPT=9100 SEQ=855113025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76AC0F0000000001030307) Nov 23 04:26:04 localhost python3.9[205769]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:26:05 localhost python3.9[205826]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:06 localhost python3.9[205936]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:26:07 localhost python3.9[206026]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889965.579372-3710-97154215444530/.source.nft follow=False 
_original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56469 DF PROTO=TCP SPT=41114 DPT=9100 SEQ=1499784387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76B80F0000000001030307) Nov 23 04:26:07 localhost python3.9[206136]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:08 localhost python3.9[206246]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:26:09 localhost python3.9[206359]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:26:09.706 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:26:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:26:09.707 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:26:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:26:09.708 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:26:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27563 DF PROTO=TCP SPT=36718 DPT=9105 SEQ=2949128662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76C1D00000000001030307) Nov 23 04:26:10 localhost 
python3.9[206469]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:26:11 localhost python3.9[206580]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:26:11 localhost python3.9[206692]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:26:12 localhost python3.9[206805]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:26:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:26:12 localhost podman[206823]: 2025-11-23 09:26:12.925044501 +0000 UTC m=+0.109399927 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 04:26:13 localhost podman[206823]: 2025-11-23 09:26:13.001375356 +0000 UTC m=+0.185730782 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118) Nov 23 04:26:13 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:26:13 localhost podman[206879]: 2025-11-23 09:26:13.083867146 +0000 UTC m=+0.153800186 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Nov 23 04:26:13 localhost podman[206879]: 2025-11-23 09:26:13.093235706 +0000 UTC m=+0.163168776 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent) Nov 23 04:26:13 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:26:13 localhost python3.9[206957]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:26:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27564 DF PROTO=TCP SPT=36718 DPT=9105 SEQ=2949128662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76D1900000000001030307) Nov 23 04:26:13 localhost python3.9[207045]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889972.871308-3926-38924345563964/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:14 localhost python3.9[207155]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:26:14 localhost python3.9[207243]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889974.0077374-3971-201374172801377/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:15 localhost python3.9[207353]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:26:16 localhost python3.9[207441]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889975.1561966-4016-196841600628362/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32980 DF PROTO=TCP SPT=33866 DPT=9882 SEQ=754902979 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76DE530000000001030307) Nov 23 04:26:18 localhost python3.9[207551]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:26:18 localhost systemd[1]: Reloading. Nov 23 04:26:18 localhost systemd-sysv-generator[207577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:26:18 localhost systemd-rc-local-generator[207573]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:18 localhost systemd[1]: Reached target edpm_libvirt.target. 
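[Editor's note] The edpm_libvirt.target activation logged above maps onto a single systemd task. A minimal sketch, reconstructed from the logged module arguments rather than taken from the edpm_ansible role itself (the log shows a similar enable-only task for edpm_libvirt_guests right after this):

- name: Enable and restart the libvirt umbrella target (arguments as logged)
  ansible.builtin.systemd:
    name: edpm_libvirt.target
    state: restarted
    enabled: true
    daemon_reload: true

Setting daemon_reload is what produces each "Reloading." entry together with the sysv-generator and rc.local generator notices that follow it.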
Nov 23 04:26:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4937 DF PROTO=TCP SPT=45768 DPT=9882 SEQ=575740257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76E6100000000001030307) Nov 23 04:26:19 localhost python3.9[207701]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None Nov 23 04:26:19 localhost systemd[1]: Reloading. Nov 23 04:26:20 localhost systemd-rc-local-generator[207725]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:26:20 localhost systemd-sysv-generator[207730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: Reloading. Nov 23 04:26:20 localhost systemd-sysv-generator[207771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:26:20 localhost systemd-rc-local-generator[207766]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
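[Editor's note] The recurring "Failed to parse service type, ignoring: notify-reload" lines above and below come from libvirt unit files that declare Type=notify-reload, a service type first understood by systemd 253; the systemd shipped on this host is older, so the directive is ignored and the virt* daemons fall back to the default service type and still start. A quick, hypothetical diagnostic task (not part of the logged playbook) to confirm the installed version:

- name: Check the running systemd version (illustrative only)
  ansible.builtin.command:
    cmd: systemctl --version
  register: systemd_version
  changed_when: false

- name: Print the version line
  ansible.builtin.debug:
    msg: "{{ systemd_version.stdout_lines[0] }}"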
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:21 localhost systemd[1]: session-52.scope: Deactivated successfully. Nov 23 04:26:21 localhost systemd[1]: session-52.scope: Consumed 3min 49.125s CPU time. Nov 23 04:26:21 localhost systemd-logind[760]: Session 52 logged out. Waiting for processes to exit. Nov 23 04:26:21 localhost systemd-logind[760]: Removed session 52. Nov 23 04:26:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27565 DF PROTO=TCP SPT=36718 DPT=9105 SEQ=2949128662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B76F20F0000000001030307) Nov 23 04:26:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54752 DF PROTO=TCP SPT=43696 DPT=9102 SEQ=401776578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7703100000000001030307) Nov 23 04:26:26 localhost sshd[207793]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:26:26 localhost systemd-logind[760]: New session 53 of user zuul. Nov 23 04:26:26 localhost systemd[1]: Started Session 53 of User zuul. Nov 23 04:26:27 localhost python3.9[207904]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:26:28 localhost python3.9[208016]: ansible-ansible.builtin.service_facts Invoked Nov 23 04:26:29 localhost network[208033]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:26:29 localhost network[208034]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:26:29 localhost network[208035]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:26:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
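[Editor's note] Before the storage work begins, the play gathers a trimmed fact set and then service facts; probing the legacy network service is what triggers the network-scripts deprecation notices. A minimal sketch of that gathering step, assuming the result is later read from ansible_facts.services:

- name: Gather only local facts (gather_subset as logged)
  ansible.builtin.setup:
    gather_subset:
      - '!all'
      - '!min'
      - local

- name: Collect the state of installed services
  ansible.builtin.service_facts:

- name: Example lookup of a collected service (hypothetical)
  ansible.builtin.debug:
    msg: "{{ ansible_facts.services['iscsid.service'] | default('iscsid not installed') }}"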
Nov 23 04:26:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17317 DF PROTO=TCP SPT=43728 DPT=9100 SEQ=3991946687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77117E0000000001030307) Nov 23 04:26:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17318 DF PROTO=TCP SPT=43728 DPT=9100 SEQ=3991946687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77158F0000000001030307) Nov 23 04:26:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37209 DF PROTO=TCP SPT=37494 DPT=9100 SEQ=1930480195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7722100000000001030307) Nov 23 04:26:34 localhost python3.9[208267]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:26:35 localhost python3.9[208330]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:26:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17320 DF PROTO=TCP SPT=43728 DPT=9100 SEQ=3991946687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B772D4F0000000001030307) Nov 23 04:26:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27395 DF PROTO=TCP SPT=58820 DPT=9105 SEQ=2311989031 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7737100000000001030307) Nov 23 04:26:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:26:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
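[Editor's note] The iscsi-initiator-utils installation above is a plain dnf task; a minimal equivalent reconstructed from the logged arguments:

- name: Install the iSCSI initiator tools
  ansible.builtin.dnf:
    name:
      - iscsi-initiator-utils
    state: present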
Nov 23 04:26:43 localhost podman[208528]: 2025-11-23 09:26:43.345390961 +0000 UTC m=+0.092288434 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 04:26:43 localhost podman[208529]: 2025-11-23 09:26:43.384958286 +0000 UTC m=+0.131280632 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Nov 23 04:26:43 localhost podman[208528]: 2025-11-23 09:26:43.402278774 +0000 UTC m=+0.149176237 container exec_died 
219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent) Nov 23 04:26:43 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. 
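[Editor's note] The "podman healthcheck run <id>" transient units above are created by podman's systemd timer integration; the health_status=healthy, exec_died and "Deactivated successfully" records together just mean one periodic check ran and exited cleanly. To read the recorded health state from Ansible, one option is the containers.podman collection; a sketch, noting that the exact key holding the health result in the inspect data varies between podman versions:

- name: Inspect the ovn_controller container (sketch)
  containers.podman.podman_container_info:
    name:
      - ovn_controller
  register: ovn_controller_info

- name: Show the container state, which carries the last health check result
  ansible.builtin.debug:
    msg: "{{ ovn_controller_info.containers[0].State }}"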
Nov 23 04:26:43 localhost python3.9[208527]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:26:43 localhost podman[208529]: 2025-11-23 09:26:43.465518398 +0000 UTC m=+0.211840764 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:26:43 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:26:43 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=18.219.193.156 DST=38.102.83.248 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=53605 DF PROTO=TCP SPT=38634 DPT=9090 SEQ=84047162 ACK=0 WINDOW=62727 RES=0x00 SYN URGP=0 OPT (020405B40402080A5623A5330000000001030307) Nov 23 04:26:44 localhost sshd[208684]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:26:44 localhost python3.9[208683]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:45 localhost python3.9[208794]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:26:46 localhost python3.9[208905]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:26:46 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:15:63:3a MACPROTO=0800 SRC=18.219.193.156 DST=38.102.83.248 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=53608 DF PROTO=TCP SPT=38634 DPT=9090 SEQ=84047162 ACK=0 WINDOW=62727 RES=0x00 SYN URGP=0 OPT (020405B40402080A5623B1230000000001030307) Nov 23 04:26:47 localhost python3.9[209016]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:26:48 localhost python3.9[209127]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:26:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32985 DF PROTO=TCP SPT=33866 DPT=9882 SEQ=754902979 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B775A0F0000000001030307) Nov 23 04:26:49 localhost python3.9[209239]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:26:50 localhost python3.9[209349]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False 
force=None masked=None Nov 23 04:26:51 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket. Nov 23 04:26:51 localhost python3.9[209463]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:26:51 localhost systemd[1]: Reloading. Nov 23 04:26:51 localhost systemd-rc-local-generator[209488]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:26:51 localhost systemd-sysv-generator[209493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:26:52 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:52 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:52 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:52 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:26:52 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:52 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:52 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:52 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:26:52 localhost systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi). Nov 23 04:26:52 localhost systemd[1]: Starting Open-iSCSI... Nov 23 04:26:52 localhost iscsid[209504]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 23 04:26:52 localhost iscsid[209504]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Nov 23 04:26:52 localhost iscsid[209504]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 23 04:26:52 localhost iscsid[209504]: If using hardware iscsi like qla4xxx this message can be ignored.
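[Editor's note] The block above adopts the old puppet-generated iscsid configuration into /etc/iscsi, moves the source directory aside, restores SELinux contexts, pins the CHAP digest list and brings up iscsid; the InitiatorName/InitiatorAlias warnings here and just below are expected because /etc/iscsi/initiatorname.iscsi has not been generated yet. A condensed sketch of the same sequence, with paths and values taken from the logged module arguments rather than from the upstream role:

- name: Adopt the puppet-generated iscsid configuration
  ansible.builtin.copy:
    src: /var/lib/config-data/puppet-generated/iscsid/etc/iscsi/
    dest: /etc/iscsi
    remote_src: true
    mode: preserve

- name: Move the old directory aside so it is not adopted twice
  ansible.builtin.command:
    cmd: mv /var/lib/config-data/puppet-generated/iscsid/etc/iscsi /var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted

- name: Restore SELinux contexts on the adopted files
  ansible.builtin.command:
    cmd: /usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi

- name: Pin the CHAP digest algorithms
  ansible.builtin.lineinfile:
    path: /etc/iscsi/iscsid.conf
    regexp: '^node.session.auth.chap_algs'
    insertafter: '^#node.session.auth.chap.algs'
    line: node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5

- name: Enable and start iscsid through its socket
  ansible.builtin.systemd:
    name: iscsid.socket
    state: started
    enabled: true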
Nov 23 04:26:52 localhost iscsid[209504]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 23 04:26:52 localhost iscsid[209504]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 23 04:26:52 localhost iscsid[209504]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf Nov 23 04:26:52 localhost systemd[1]: Started Open-iSCSI. Nov 23 04:26:52 localhost systemd[1]: Starting Logout off all iSCSI sessions on shutdown... Nov 23 04:26:52 localhost systemd[1]: Finished Logout off all iSCSI sessions on shutdown. Nov 23 04:26:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37354 DF PROTO=TCP SPT=54440 DPT=9101 SEQ=2228557503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77680F0000000001030307) Nov 23 04:26:53 localhost python3.9[209615]: ansible-ansible.builtin.service_facts Invoked Nov 23 04:26:53 localhost network[209632]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:26:53 localhost network[209633]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:26:53 localhost network[209634]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:26:54 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Nov 23 04:26:55 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Nov 23 04:26:55 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service. Nov 23 04:26:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6af070e3-696c-45c4-a359-8e0152429e5f Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6af070e3-696c-45c4-a359-8e0152429e5f Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6af070e3-696c-45c4-a359-8e0152429e5f Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6af070e3-696c-45c4-a359-8e0152429e5f Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 23 04:26:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31487 DF PROTO=TCP SPT=58764 DPT=9102 SEQ=3094243754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77780F0000000001030307) Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6af070e3-696c-45c4-a359-8e0152429e5f Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6af070e3-696c-45c4-a359-8e0152429e5f Nov 23 04:26:56 localhost setroubleshoot[209672]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 23 04:26:59 localhost python3.9[209883]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 23 04:27:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20820 DF PROTO=TCP SPT=37580 DPT=9100 SEQ=4107040265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7786AE0000000001030307) Nov 23 04:27:00 localhost python3.9[209993]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Nov 23 04:27:00 localhost python3.9[210107]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20821 DF PROTO=TCP SPT=37580 DPT=9100 SEQ=4107040265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B778ACF0000000001030307) Nov 23 04:27:01 localhost python3.9[210195]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890020.4558392-455-96561817064086/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:01 localhost sshd[210213]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:27:02 localhost python3.9[210306]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:03 localhost python3.9[210417]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:27:03 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 04:27:03 localhost systemd[1]: Stopped Load Kernel Modules. Nov 23 04:27:03 localhost systemd[1]: Stopping Load Kernel Modules... Nov 23 04:27:03 localhost systemd[1]: Starting Load Kernel Modules... 
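[Editor's note] The SELinux denials above come with setroubleshoot's own ausearch/audit2allow suggestion embedded in the log, so they are not repeated here. The dm-multipath handling that follows uses the usual load-now-and-persist pattern: modprobe the module, drop a modules-load.d snippet so it returns on boot, and restart systemd-modules-load (completing just below) so a broken snippet fails immediately. A minimal sketch of that pattern; the one-line snippet content is an assumption, since the deployed template itself is not printed in the log:

- name: Load dm-multipath immediately
  community.general.modprobe:
    name: dm-multipath
    state: present

- name: Persist the module across reboots (assumed snippet content)
  ansible.builtin.copy:
    dest: /etc/modules-load.d/dm-multipath.conf
    content: |
      dm-multipath
    mode: "0644"

- name: Re-run the loader so errors surface now rather than at boot
  ansible.builtin.systemd:
    name: systemd-modules-load.service
    state: restarted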
Nov 23 04:27:03 localhost systemd-modules-load[210421]: Module 'msr' is built in Nov 23 04:27:03 localhost systemd[1]: Finished Load Kernel Modules. Nov 23 04:27:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56472 DF PROTO=TCP SPT=41114 DPT=9100 SEQ=1499784387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77960F0000000001030307) Nov 23 04:27:04 localhost python3.9[210531]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:27:05 localhost python3.9[210641]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:27:05 localhost python3.9[210751]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:27:06 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully. Nov 23 04:27:06 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Nov 23 04:27:06 localhost python3.9[210861]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63413 DF PROTO=TCP SPT=44164 DPT=9105 SEQ=2573864803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77A03A0000000001030307) Nov 23 04:27:07 localhost python3.9[210949]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890026.1052237-629-106465565510671/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:07 localhost python3.9[211059]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:27:08 localhost python3.9[211170]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:27:09.708 159415 DEBUG 
oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:27:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:27:09.709 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:27:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:27:09.709 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:27:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63415 DF PROTO=TCP SPT=44164 DPT=9105 SEQ=2573864803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77AC4F0000000001030307) Nov 23 04:27:09 localhost python3.9[211280]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:10 localhost python3.9[211390]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:11 localhost python3.9[211500]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:12 localhost python3.9[211610]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:13 localhost python3.9[211720]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
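[Editor's note] The multipath.conf edits above first make sure a blacklist { } block exists, then enforce individual defaults-section settings with anchored lineinfile rules so repeated runs stay idempotent (a user_friendly_names line follows further down). A sketch of the defaults part, collapsed into one looped task here although the log runs one task per setting; the regexp/insertafter anchors match the logged ones:

- name: Enforce multipath defaults (sketch)
  ansible.builtin.lineinfile:
    path: /etc/multipath.conf
    regexp: '^\s+{{ item.key }}'
    line: "  {{ item.key }} {{ item.value }}"
    insertafter: '^defaults'
    firstmatch: true
  loop:
    - { key: find_multipaths, value: 'yes' }
    - { key: recheck_wwid, value: 'yes' }
    - { key: skip_kpartx, value: 'yes' }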
Nov 23 04:27:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:27:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63416 DF PROTO=TCP SPT=44164 DPT=9105 SEQ=2573864803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77BC100000000001030307) Nov 23 04:27:13 localhost podman[211830]: 2025-11-23 09:27:13.928972516 +0000 UTC m=+0.100040192 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 23 04:27:13 localhost podman[211830]: 2025-11-23 09:27:13.934869893 +0000 UTC m=+0.105937519 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:27:13 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:27:13 localhost podman[211836]: 2025-11-23 09:27:13.987433565 +0000 UTC m=+0.136674334 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:27:14 localhost podman[211836]: 2025-11-23 09:27:14.028598763 +0000 UTC m=+0.177839502 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true) Nov 23 04:27:14 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:27:14 localhost python3.9[211837]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:14 localhost python3.9[211982]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:27:15 localhost python3.9[212094]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:16 localhost python3.9[212204]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:27:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59708 DF PROTO=TCP SPT=38534 DPT=9882 SEQ=1719082362 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77C8B30000000001030307) Nov 23 04:27:17 localhost python3.9[212314]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:17 localhost python3.9[212371]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:27:18 localhost python3.9[212481]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36734 DF PROTO=TCP SPT=55822 DPT=9882 SEQ=1338719780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77D00F0000000001030307) Nov 23 04:27:19 localhost python3.9[212538]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:27:19 localhost python3.9[212648]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:20 localhost python3.9[212758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:20 localhost python3.9[212815]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:21 localhost python3.9[212925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:21 localhost python3.9[212982]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63417 DF PROTO=TCP SPT=44164 DPT=9105 SEQ=2573864803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77DC0F0000000001030307) Nov 23 04:27:22 localhost python3.9[213092]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:27:22 localhost systemd[1]: Reloading. 
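The ansible-ansible.builtin.lineinfile invocation logged above enforces a "user_friendly_names no" setting in /etc/multipath.conf (regexp=^\s+user_friendly_names, insertafter=^defaults, state=present), and the follow-up file task touches /etc/multipath/.multipath_restart_required as a restart flag. A minimal Python sketch of the same idempotent edit, assuming the file layout implied by those parameters (the real work is done by the Ansible module, not this code):

    import re
    from pathlib import Path

    CONF = Path("/etc/multipath.conf")
    FLAG = Path("/etc/multipath/.multipath_restart_required")

    def ensure_user_friendly_names_off() -> bool:
        """Ensure '    user_friendly_names no' exists after the 'defaults' line.

        Mirrors the logged lineinfile parameters; returns True if the file changed.
        """
        lines = CONF.read_text().splitlines()
        wanted = "    user_friendly_names no"
        pattern = re.compile(r"^\s+user_friendly_names")

        for i, line in enumerate(lines):
            if pattern.match(line):
                if line == wanted:
                    return False                  # already correct, no change
                lines[i] = wanted                 # rewrite the existing setting
                break
        else:
            # No existing setting: insert right after the first 'defaults' line,
            # falling back to appending at the end of the file.
            for i, line in enumerate(lines):
                if re.match(r"^defaults", line):
                    lines.insert(i + 1, wanted)
                    break
            else:
                lines.append(wanted)

        CONF.write_text("\n".join(lines) + "\n")
        FLAG.touch(mode=0o644, exist_ok=True)     # flag that multipathd needs a restart
        return True

    if __name__ == "__main__":
        print("changed:", ensure_user_friendly_names_off())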
Nov 23 04:27:22 localhost systemd-rc-local-generator[213117]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:27:22 localhost systemd-sysv-generator[213121]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:24 localhost python3.9[213240]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:25 localhost python3.9[213297]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:25 localhost python3.9[213407]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=799 DF PROTO=TCP SPT=52808 DPT=9102 SEQ=3316474604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77ED4F0000000001030307) Nov 23 04:27:27 localhost python3.9[213464]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:28 localhost python3.9[213574]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:27:28 localhost systemd[1]: Reloading. Nov 23 04:27:28 localhost systemd-rc-local-generator[213598]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:27:28 localhost systemd-sysv-generator[213601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:28 localhost systemd[1]: Starting Create netns directory... Nov 23 04:27:28 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 23 04:27:28 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 23 04:27:28 localhost systemd[1]: Finished Create netns directory. 
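The ansible-ansible.builtin.systemd calls above (daemon_reload=True, enabled=True, state=started) are what trigger the "Reloading." pass and the one-shot "Create netns directory" run of netns-placeholder.service. A rough Python equivalent of that enable-and-start sequence using plain systemctl calls; the unit names are the ones from the log, and error handling is intentionally minimal:

    import subprocess

    def systemctl(*args: str) -> None:
        """Run a systemctl subcommand and fail loudly, like the Ansible systemd module."""
        subprocess.run(["systemctl", *args], check=True)

    def enable_and_start(unit: str) -> None:
        # Mirrors: ansible.builtin.systemd: daemon_reload=True enabled=True state=started
        systemctl("daemon-reload")      # pick up the freshly installed unit/preset files
        systemctl("enable", unit)       # explicit enable; the 91-*.preset files deployed
                                        # above would also enable it via 'systemctl preset'
        systemctl("start", unit)        # for a oneshot unit this runs it to completion

    if __name__ == "__main__":
        for unit in ("edpm-container-shutdown.service", "netns-placeholder.service"):
            enable_and_start(unit)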
Nov 23 04:27:29 localhost python3.9[213726]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:27:30 localhost python3.9[213836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60230 DF PROTO=TCP SPT=56038 DPT=9100 SEQ=3475491981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77FBE00000000001030307) Nov 23 04:27:30 localhost python3.9[213924]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890049.7018812-1250-166052903705926/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:27:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60231 DF PROTO=TCP SPT=56038 DPT=9100 SEQ=3475491981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B77FFCF0000000001030307) Nov 23 04:27:31 localhost python3.9[214034]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:27:32 localhost python3.9[214144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:32 localhost python3.9[214232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890051.943128-1325-99795030927968/.source.json _original_basename=.a80o8bhx follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:33 localhost python3.9[214342]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17323 DF PROTO=TCP SPT=43728 DPT=9100 SEQ=3991946687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B780C100000000001030307) Nov 23 04:27:36 localhost python3.9[214650]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Nov 23 04:27:37 localhost python3.9[214760]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:27:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60233 DF PROTO=TCP SPT=56038 DPT=9100 SEQ=3475491981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7817900000000001030307) Nov 23 04:27:38 localhost python3.9[214870]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Nov 23 04:27:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13310 DF PROTO=TCP SPT=41808 DPT=9105 SEQ=2413329906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78218F0000000001030307) Nov 23 04:27:40 localhost podman[215023]: 2025-11-23 09:27:40.467154216 +0000 UTC m=+0.097409721 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, release=553, distribution-scope=public) Nov 23 04:27:40 localhost podman[215023]: 2025-11-23 09:27:40.569113334 +0000 UTC m=+0.199368849 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, version=7, 
io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main) Nov 23 04:27:40 localhost systemd[1]: virtnodedevd.service: Deactivated successfully. Nov 23 04:27:41 localhost systemd[1]: virtproxyd.service: Deactivated successfully. Nov 23 04:27:42 localhost python3[215267]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:27:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13311 DF PROTO=TCP SPT=41808 DPT=9105 SEQ=2413329906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7831500000000001030307) Nov 23 04:27:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:27:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
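Each "Started /usr/bin/podman healthcheck run <id>" line above is a transient systemd service that podman schedules for containers created with --healthcheck-command; the matching exec_died event and "<id>.service: Deactivated successfully." line are that one-shot check finishing. The same check can be driven by hand; a small sketch, with the container names taken from the log and exit status 0 meaning healthy:

    import subprocess

    def is_healthy(container: str) -> bool:
        """Run the container's configured healthcheck once.

        'podman healthcheck run' exits 0 when the check passes and non-zero
        otherwise, which is what the transient systemd services in this journal
        are doing on a timer.
        """
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        for name in ("ovn_controller", "ovn_metadata_agent", "multipathd"):
            print(name, "healthy" if is_healthy(name) else "unhealthy")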
Nov 23 04:27:44 localhost podman[215307]: 2025-11-23 09:27:44.710555814 +0000 UTC m=+0.290860002 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:27:44 localhost podman[215281]: 2025-11-23 09:27:43.039801808 +0000 UTC m=+0.044047237 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Nov 23 04:27:44 localhost podman[215307]: 2025-11-23 09:27:44.751579109 +0000 UTC m=+0.331883297 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 23 04:27:44 localhost systemd[1]: tmp-crun.OVQUkw.mount: Deactivated successfully. Nov 23 04:27:44 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
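The podman journal entries above carry the health outcome inline (health_status=healthy, plus the container name and its config labels), so they can also be mined offline. A parser sketch over single-line journal output in this format; it extracts the attribute blob from the trailing parentheses and counts results per container (key names are the ones visible in these entries):

    import re
    import sys
    from collections import Counter

    # Matches podman 'container health_status' journal lines like the ones above and
    # captures the attribute blob inside the trailing parentheses.
    HEALTH_RE = re.compile(
        r"podman\[\d+\]: .* container health_status [0-9a-f]+ \((?P<attrs>.*)\)"
    )

    def parse_attrs(blob: str) -> dict:
        """Parse 'key=value, key=value, ...' attributes; nested config_data is ignored."""
        attrs = {}
        for part in blob.split(", "):
            if "=" in part:
                key, _, value = part.partition("=")
                attrs.setdefault(key.strip(), value.strip())
        return attrs

    def summarize(stream) -> Counter:
        counts: Counter = Counter()
        for line in stream:
            m = HEALTH_RE.search(line)
            if not m:
                continue
            attrs = parse_attrs(m.group("attrs"))
            counts[(attrs.get("name", "?"), attrs.get("health_status", "?"))] += 1
        return counts

    if __name__ == "__main__":
        # Usage sketch: journalctl -o short | python3 health_summary.py
        for (name, status), n in sorted(summarize(sys.stdin).items()):
            print(f"{name:20s} {status:12s} {n}")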
Nov 23 04:27:44 localhost podman[215306]: 2025-11-23 09:27:44.773850909 +0000 UTC m=+0.354160787 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 04:27:44 localhost podman[215306]: 2025-11-23 09:27:44.784289822 +0000 UTC m=+0.364599700 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Nov 23 04:27:44 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:27:44 localhost podman[215373]: Nov 23 04:27:44 localhost podman[215373]: 2025-11-23 09:27:44.965169684 +0000 UTC m=+0.090598486 container create 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible) Nov 23 04:27:44 localhost podman[215373]: 2025-11-23 09:27:44.923024746 +0000 UTC m=+0.048453568 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Nov 23 04:27:44 localhost python3[215267]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Nov 23 04:27:45 localhost python3.9[215519]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:27:46 localhost python3.9[215631]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63279 DF PROTO=TCP SPT=54830 DPT=9882 SEQ=1866655706 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B783DE20000000001030307) Nov 23 04:27:47 localhost python3.9[215686]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:27:48 localhost python3.9[215795]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890067.4077518-1589-85791019636014/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:48 localhost python3.9[215850]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:27:48 localhost systemd[1]: Reloading. 
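The PODMAN-CONTAINER-DEBUG entry above shows how edpm_container_manage expands the container-startup-config JSON into a single podman create invocation: config_data keys become --env, --healthcheck-command, --network, --privileged and --volume flags plus the config_id/container_name/managed_by labels. A reduced sketch of that mapping, handling only the keys present in the multipathd config_data logged above and only a subset of its volume list (the config_data label itself is omitted):

    from typing import List

    def podman_create_args(name: str, config: dict) -> List[str]:
        """Translate an edpm container-startup-config dict into 'podman create' argv.

        Covers only the keys that appear in the multipathd config_data logged
        above; the real edpm_container_manage module handles many more options.
        """
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid",
                "--label", f"config_id={name}",
                "--label", f"container_name={name}",
                "--label", "managed_by=edpm_ansible",
                "--log-driver", "journald", "--log-level", "info"]

        for key, value in config.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "healthcheck" in config:
            args += ["--healthcheck-command", config["healthcheck"]["test"]]
        if config.get("net"):
            args += ["--network", config["net"]]
        if config.get("privileged"):
            args += ["--privileged=True"]
        for volume in config.get("volumes", []):
            args += ["--volume", volume]

        args.append(config["image"])
        return args

    if __name__ == "__main__":
        multipathd = {
            "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
            "healthcheck": {"mount": "/var/lib/openstack/healthchecks/multipathd",
                            "test": "/openstack/healthcheck"},
            "image": "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified",
            "net": "host",
            "privileged": True,
            "restart": "always",
            "volumes": ["/etc/hosts:/etc/hosts:ro",
                        "/dev:/dev",
                        "/etc/multipath:/etc/multipath:z",
                        "/etc/multipath.conf:/etc/multipath.conf:ro",
                        "/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z"],
        }
        print(" ".join(podman_create_args("multipathd", multipathd)))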
Nov 23 04:27:48 localhost systemd-rc-local-generator[215877]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:27:48 localhost systemd-sysv-generator[215881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16329 DF PROTO=TCP SPT=52110 DPT=9102 SEQ=949626794 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7846C40000000001030307) Nov 23 04:27:49 localhost python3.9[215941]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:27:49 localhost systemd[1]: Reloading. Nov 23 04:27:49 localhost systemd-rc-local-generator[215967]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:27:49 localhost systemd-sysv-generator[215970]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:27:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
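The recurring "kernel: DROPPING:" entries are netfilter LOG output (the "DROPPING" prefix comes from the logging rule) for inbound SYNs arriving on br-ex from 192.168.122.10 toward TCP ports 9100-9105 and 9882 on this host. A quick way to see which destination ports are being hit is to tally those lines; a sketch that reads single-line journal output such as this dump:

    import re
    import sys
    from collections import Counter

    # Matches kernel netfilter LOG lines like:
    # "kernel: DROPPING: IN=br-ex ... SRC=192.168.122.10 DST=... PROTO=TCP SPT=38534 DPT=9882 ..."
    DROP_RE = re.compile(
        r"kernel: DROPPING: .*? SRC=(?P<src>\S+) DST=(?P<dst>\S+).*? PROTO=(?P<proto>\S+) "
        r"SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
    )

    def dropped_ports(stream) -> Counter:
        """Count logged packets per (source address, protocol, destination port)."""
        counts: Counter = Counter()
        for line in stream:
            m = DROP_RE.search(line)
            if m:
                counts[(m.group("src"), m.group("proto"), int(m.group("dpt")))] += 1
        return counts

    if __name__ == "__main__":
        for (src, proto, dpt), n in dropped_ports(sys.stdin).most_common():
            print(f"{src:18s} {proto:4s} dport={dpt:<6d} dropped={n}")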
Nov 23 04:27:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:27:49 localhost systemd[1]: Starting multipathd container... Nov 23 04:27:50 localhost systemd[1]: Started libcrun container. Nov 23 04:27:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0adaf4a2e44d50ff99476d283be71c3d16b06a75525f88fca1af5df668379e19/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Nov 23 04:27:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0adaf4a2e44d50ff99476d283be71c3d16b06a75525f88fca1af5df668379e19/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 23 04:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:27:50 localhost podman[215982]: 2025-11-23 09:27:50.107542208 +0000 UTC m=+0.152044076 container init 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 04:27:50 localhost multipathd[215997]: + sudo -E kolla_set_configs Nov 23 04:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. 
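Inside the container, kolla_set_configs reads /var/lib/kolla/config_files/config.json (bind-mounted from /var/lib/kolla/config_files/multipathd.json on the host, per the volume list above), applies the COPY_ALWAYS strategy, and writes the command to /run_command, which the wrapper then execs ("/usr/sbin/multipathd -d" in the trace that follows). The JSON payload itself never appears in the journal, only its checksum; the sketch below writes a hypothetical file in the usual kolla schema (the command matches /run_command, but the config_files entry is purely illustrative) just to show the shape kolla_set_configs expects:

    import json
    from pathlib import Path

    # Hypothetical reconstruction: 'command' matches the /run_command contents
    # logged by the container, while the config_files entry is only an example
    # of the usual kolla schema, not the real multipathd.json payload.
    multipathd_config = {
        "command": "/usr/sbin/multipathd -d",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/src/*",   # assumed source path
                "dest": "/",
                "merge": True,
                "preserve_properties": True,
            }
        ],
    }

    if __name__ == "__main__":
        out = Path("multipathd.json")             # written locally for inspection only
        out.write_text(json.dumps(multipathd_config, indent=4) + "\n")
        print(out.read_text())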
Nov 23 04:27:50 localhost podman[215982]: 2025-11-23 09:27:50.160855792 +0000 UTC m=+0.205357680 container start 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 04:27:50 localhost podman[215982]: multipathd Nov 23 04:27:50 localhost systemd[1]: Started multipathd container. Nov 23 04:27:50 localhost multipathd[215997]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:27:50 localhost multipathd[215997]: INFO:__main__:Validating config file Nov 23 04:27:50 localhost multipathd[215997]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:27:50 localhost multipathd[215997]: INFO:__main__:Writing out command to execute Nov 23 04:27:50 localhost multipathd[215997]: ++ cat /run_command Nov 23 04:27:50 localhost multipathd[215997]: + CMD='/usr/sbin/multipathd -d' Nov 23 04:27:50 localhost multipathd[215997]: + ARGS= Nov 23 04:27:50 localhost multipathd[215997]: + sudo kolla_copy_cacerts Nov 23 04:27:50 localhost multipathd[215997]: Running command: '/usr/sbin/multipathd -d' Nov 23 04:27:50 localhost multipathd[215997]: + [[ ! -n '' ]] Nov 23 04:27:50 localhost multipathd[215997]: + . 
kolla_extend_start Nov 23 04:27:50 localhost multipathd[215997]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Nov 23 04:27:50 localhost multipathd[215997]: + umask 0022 Nov 23 04:27:50 localhost multipathd[215997]: + exec /usr/sbin/multipathd -d Nov 23 04:27:50 localhost multipathd[215997]: 10006.719174 | --------start up-------- Nov 23 04:27:50 localhost multipathd[215997]: 10006.719202 | read /etc/multipath.conf Nov 23 04:27:50 localhost multipathd[215997]: 10006.724099 | path checkers start up Nov 23 04:27:50 localhost podman[216005]: 2025-11-23 09:27:50.24289206 +0000 UTC m=+0.089173134 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 23 04:27:50 localhost podman[216005]: 2025-11-23 09:27:50.254550761 +0000 UTC m=+0.100831865 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:27:50 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:27:51 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Nov 23 04:27:51 localhost systemd[1]: virtqemud.service: Deactivated successfully. Nov 23 04:27:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4611 DF PROTO=TCP SPT=54666 DPT=9101 SEQ=3121326920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78520F0000000001030307) Nov 23 04:27:52 localhost python3.9[216145]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:27:53 localhost python3.9[216257]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:27:54 localhost python3.9[216380]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:27:54 localhost systemd[1]: Stopping multipathd container... Nov 23 04:27:54 localhost multipathd[215997]: 10010.822200 | exit (signal) Nov 23 04:27:54 localhost multipathd[215997]: 10010.822709 | --------shut down------- Nov 23 04:27:54 localhost systemd[1]: libpod-7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.scope: Deactivated successfully. 
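The sequence above is the conditional restart: Ansible stats the /etc/multipath/.multipath_restart_required flag, asks podman which containers mount the config file ("podman ps --filter volume=/etc/multipath.conf --format {{.Names}}"), and then restarts edpm_multipathd.service, which is why multipathd logs the signal-driven shutdown. A compact sketch of that flag-and-restart logic; the edpm_<container>.service naming follows the convention visible in the log, and clearing the flag afterwards is an assumed follow-up step:

    import subprocess
    from pathlib import Path

    FLAG = Path("/etc/multipath/.multipath_restart_required")

    def containers_using(volume: str) -> list:
        """Ask podman which running containers bind-mount the given path."""
        out = subprocess.run(
            ["podman", "ps", "--filter", f"volume={volume}", "--format", "{{.Names}}"],
            check=True, capture_output=True, text=True,
        ).stdout
        return [name for name in out.splitlines() if name]

    def restart_if_required() -> None:
        if not FLAG.exists():
            return                                        # config unchanged, nothing to do
        for name in containers_using("/etc/multipath.conf"):
            # Containers are wrapped in systemd units named edpm_<container>.service.
            subprocess.run(["systemctl", "restart", f"edpm_{name}.service"], check=True)
        FLAG.unlink()                                     # assumed: clear the flag once restarted

    if __name__ == "__main__":
        restart_if_required()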
Nov 23 04:27:54 localhost podman[216384]: 2025-11-23 09:27:54.380943288 +0000 UTC m=+0.105795814 container died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:27:54 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.timer: Deactivated successfully. Nov 23 04:27:54 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:27:54 localhost systemd[1]: tmp-crun.jnUgK3.mount: Deactivated successfully. Nov 23 04:27:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e-userdata-shm.mount: Deactivated successfully. 
Nov 23 04:27:54 localhost podman[216384]: 2025-11-23 09:27:54.51630015 +0000 UTC m=+0.241152676 container cleanup 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 04:27:54 localhost podman[216384]: multipathd Nov 23 04:27:54 localhost podman[216410]: 2025-11-23 09:27:54.624509146 +0000 UTC m=+0.068742130 container cleanup 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=multipathd) Nov 23 04:27:54 localhost podman[216410]: multipathd Nov 23 04:27:54 localhost systemd[1]: edpm_multipathd.service: Deactivated successfully. Nov 23 04:27:54 localhost systemd[1]: Stopped multipathd container. Nov 23 04:27:54 localhost systemd[1]: Starting multipathd container... Nov 23 04:27:54 localhost systemd[1]: Started libcrun container. Nov 23 04:27:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0adaf4a2e44d50ff99476d283be71c3d16b06a75525f88fca1af5df668379e19/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Nov 23 04:27:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0adaf4a2e44d50ff99476d283be71c3d16b06a75525f88fca1af5df668379e19/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 23 04:27:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14753 DF PROTO=TCP SPT=50232 DPT=9101 SEQ=1653352054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B785C100000000001030307) Nov 23 04:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:27:54 localhost podman[216423]: 2025-11-23 09:27:54.808606655 +0000 UTC m=+0.143559581 container init 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 04:27:54 localhost multipathd[216437]: + sudo -E kolla_set_configs Nov 23 04:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. 
Nov 23 04:27:54 localhost podman[216423]: 2025-11-23 09:27:54.844631539 +0000 UTC m=+0.179584435 container start 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:27:54 localhost podman[216423]: multipathd Nov 23 04:27:54 localhost systemd[1]: Started multipathd container. Nov 23 04:27:54 localhost multipathd[216437]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:27:54 localhost multipathd[216437]: INFO:__main__:Validating config file Nov 23 04:27:54 localhost multipathd[216437]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:27:54 localhost multipathd[216437]: INFO:__main__:Writing out command to execute Nov 23 04:27:54 localhost multipathd[216437]: ++ cat /run_command Nov 23 04:27:54 localhost multipathd[216437]: + CMD='/usr/sbin/multipathd -d' Nov 23 04:27:54 localhost multipathd[216437]: + ARGS= Nov 23 04:27:54 localhost multipathd[216437]: + sudo kolla_copy_cacerts Nov 23 04:27:54 localhost multipathd[216437]: + [[ ! -n '' ]] Nov 23 04:27:54 localhost multipathd[216437]: + . 
kolla_extend_start Nov 23 04:27:54 localhost multipathd[216437]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Nov 23 04:27:54 localhost multipathd[216437]: Running command: '/usr/sbin/multipathd -d' Nov 23 04:27:54 localhost multipathd[216437]: + umask 0022 Nov 23 04:27:54 localhost multipathd[216437]: + exec /usr/sbin/multipathd -d Nov 23 04:27:54 localhost multipathd[216437]: 10011.421266 | --------start up-------- Nov 23 04:27:54 localhost multipathd[216437]: 10011.421289 | read /etc/multipath.conf Nov 23 04:27:54 localhost multipathd[216437]: 10011.425798 | path checkers start up Nov 23 04:27:54 localhost podman[216445]: 2025-11-23 09:27:54.945853984 +0000 UTC m=+0.096233147 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:27:54 localhost podman[216445]: 2025-11-23 09:27:54.953532745 +0000 UTC m=+0.103911918 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:27:54 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:27:56 localhost python3.9[216585]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:27:57 localhost python3.9[216695]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 23 04:27:58 localhost python3.9[216805]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Nov 23 04:27:58 localhost python3.9[216923]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:27:59 localhost python3.9[217011]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890078.436068-1829-127581584293099/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53925 DF PROTO=TCP SPT=44078 DPT=9100 SEQ=1385511664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78710D0000000001030307) Nov 23 04:28:00 localhost python3.9[217121]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:01 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53926 DF PROTO=TCP SPT=44078 DPT=9100 SEQ=1385511664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78750F0000000001030307) Nov 23 04:28:01 localhost python3.9[217231]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:28:01 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 04:28:01 localhost systemd[1]: Stopped Load Kernel Modules. Nov 23 04:28:01 localhost systemd[1]: Stopping Load Kernel Modules... Nov 23 04:28:01 localhost systemd[1]: Starting Load Kernel Modules... Nov 23 04:28:01 localhost systemd-modules-load[217235]: Module 'msr' is built in Nov 23 04:28:01 localhost systemd[1]: Finished Load Kernel Modules. Nov 23 04:28:02 localhost python3.9[217345]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:28:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20826 DF PROTO=TCP SPT=37580 DPT=9100 SEQ=4107040265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78800F0000000001030307) Nov 23 04:28:06 localhost systemd[1]: Reloading. Nov 23 04:28:06 localhost systemd-rc-local-generator[217379]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:28:06 localhost systemd-sysv-generator[217385]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
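Note: the tasks above (community.general.modprobe for nvme-fabrics, the modules-load.d template, lineinfile on /etc/modules, the systemd-modules-load restart, and the dnf install of nvme-cli) amount to the following shell steps. This is a hedged equivalent sketch, not the playbook itself:
    # load nvme-fabrics now, persist it across reboots, then install the NVMe CLI
    modprobe nvme-fabrics
    install -d -m 0755 /etc/modules-load.d
    echo nvme-fabrics > /etc/modules-load.d/nvme-fabrics.conf
    chmod 0644 /etc/modules-load.d/nvme-fabrics.conf
    grep -qxF nvme-fabrics /etc/modules 2>/dev/null || echo nvme-fabrics >> /etc/modules
    systemctl restart systemd-modules-load.service
    dnf -y install nvme-cli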
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: Reloading. Nov 23 04:28:06 localhost systemd-rc-local-generator[217414]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:28:06 localhost systemd-sysv-generator[217418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:07 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 04:28:07 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 23 04:28:07 localhost lvm[217467]: PV /dev/loop4 online, VG ceph_vg1 is complete. Nov 23 04:28:07 localhost lvm[217465]: PV /dev/loop3 online, VG ceph_vg0 is complete. Nov 23 04:28:07 localhost lvm[217467]: VG ceph_vg1 finished Nov 23 04:28:07 localhost lvm[217465]: VG ceph_vg0 finished Nov 23 04:28:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53928 DF PROTO=TCP SPT=44078 DPT=9100 SEQ=1385511664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B788CCF0000000001030307) Nov 23 04:28:07 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 23 04:28:07 localhost systemd[1]: Starting man-db-cache-update.service... Nov 23 04:28:07 localhost systemd[1]: Reloading. Nov 23 04:28:07 localhost systemd-sysv-generator[217516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:28:07 localhost systemd-rc-local-generator[217513]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:07 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 23 04:28:08 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 23 04:28:08 localhost systemd[1]: Finished man-db-cache-update.service. Nov 23 04:28:08 localhost systemd[1]: man-db-cache-update.service: Consumed 1.496s CPU time. Nov 23 04:28:08 localhost systemd[1]: run-rc10981c77ebb43dfb88f89c13ebac36b.service: Deactivated successfully. 
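Note: the repeated "Failed to parse service type, ignoring: notify-reload" warnings during every "Reloading." come from libvirt units that declare Type=notify-reload, a value only newer systemd releases understand; this host's systemd ignores the unparseable assignment and falls back to the default service type, so the units still load. A quick hedged check:
    systemctl --version | head -n1                              # systemd version on this host
    grep -n '^Type=' /usr/lib/systemd/system/virtqemud.service  # shows the Type= line systemd is flagging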
Nov 23 04:28:09 localhost python3.9[218764]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:28:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:28:09.709 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:28:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:28:09.711 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:28:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:28:09.711 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:28:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37131 DF PROTO=TCP SPT=41614 DPT=9105 SEQ=3343453901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78968F0000000001030307) Nov 23 04:28:10 localhost python3.9[218878]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:11 localhost python3.9[218988]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:28:11 localhost systemd[1]: Reloading. Nov 23 04:28:11 localhost systemd-sysv-generator[219018]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:28:12 localhost systemd-rc-local-generator[219014]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
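Note: the recurring kernel "DROPPING:" entries are netfilter log-and-drop hits on br-ex for the scrape ports seen in the DPT fields (9100, 9101, 9102, 9105, 9882). The rule itself is not present in this journal; purely as an illustration, an nftables rule of roughly this shape would produce such messages before discarding the packet (table and chain names are assumptions):
    # illustrative only: log with the observed prefix, then drop inbound scrapes on br-ex
    nft add rule inet filter input iifname "br-ex" tcp dport { 9100, 9101, 9102, 9105, 9882 } log prefix "DROPPING: " drop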
Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:13 localhost python3.9[219132]: ansible-ansible.builtin.service_facts Invoked Nov 23 04:28:13 localhost network[219149]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:28:13 localhost network[219150]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:28:13 localhost network[219151]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:28:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37132 DF PROTO=TCP SPT=41614 DPT=9105 SEQ=3343453901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78A6500000000001030307) Nov 23 04:28:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
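Note: systemd also keeps flagging MemoryLimit= in the insights-client units as deprecated in favour of MemoryMax=. A hedged sketch of the usual fix, done as a drop-in rather than by editing the packaged unit; the 1G value is a placeholder, not something taken from this log:
    mkdir -p /etc/systemd/system/insights-client.service.d
    printf '[Service]\nMemoryLimit=\nMemoryMax=1G\n' \
      > /etc/systemd/system/insights-client.service.d/10-memorymax.conf   # empty MemoryLimit= clears the old setting
    systemctl daemon-reload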
Nov 23 04:28:14 localhost podman[219218]: 2025-11-23 09:28:14.900693275 +0000 UTC m=+0.096174117 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:28:14 localhost podman[219218]: 2025-11-23 09:28:14.946873078 +0000 UTC m=+0.142353930 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:28:14 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
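Note: the config_data recorded with each ovn_controller event describes how edpm_ansible created the container. A hedged reconstruction of the equivalent podman invocation (the real container is managed by the edpm_* units; the exact flag spelling here is an assumption):
    podman run -d --name ovn_controller --net host --privileged --user root \
      --restart always \
      -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
      --health-cmd /openstack/healthcheck \
      -v /lib/modules:/lib/modules:ro -v /run:/run \
      -v /var/lib/openvswitch/ovn:/run/ovn:shared,z \
      -v /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro \
      -v /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z \
      -v /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z \
      quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified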
Nov 23 04:28:15 localhost podman[219236]: 2025-11-23 09:28:15.039723469 +0000 UTC m=+0.141679762 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 04:28:15 localhost podman[219236]: 2025-11-23 09:28:15.051422499 +0000 UTC m=+0.153378812 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS) Nov 23 04:28:15 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:28:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53929 DF PROTO=TCP SPT=44078 DPT=9100 SEQ=1385511664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78AE0F0000000001030307) Nov 23 04:28:18 localhost python3.9[219429]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:28:18 localhost python3.9[219540]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:28:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63284 DF PROTO=TCP SPT=54830 DPT=9882 SEQ=1866655706 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78BA0F0000000001030307) Nov 23 04:28:20 localhost python3.9[219651]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:28:21 localhost python3.9[219762]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:28:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37133 DF PROTO=TCP SPT=41614 DPT=9105 SEQ=3343453901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78C60F0000000001030307) Nov 23 04:28:22 localhost python3.9[219873]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:28:23 localhost python3.9[219984]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:28:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:28:25 localhost systemd[1]: tmp-crun.67EJre.mount: Deactivated successfully. 
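Note: from here the play tears down the old TripleO nova services one by one: each unit is stopped and disabled (tasks above and in the following entries), then its unit files are deleted from both /usr/lib/systemd/system and /etc/systemd/system before a daemon reload. A hedged condensation of that sequence:
    for svc in tripleo_nova_compute tripleo_nova_migration_target tripleo_nova_api_cron \
               tripleo_nova_api tripleo_nova_conductor tripleo_nova_metadata \
               tripleo_nova_scheduler tripleo_nova_vnc_proxy; do
      systemctl disable --now "${svc}.service" || true          # stop + disable, tolerate already-absent units
      rm -f "/usr/lib/systemd/system/${svc}.service" "/etc/systemd/system/${svc}.service"
    done
    systemctl daemon-reload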
Nov 23 04:28:25 localhost podman[220095]: 2025-11-23 09:28:25.839322232 +0000 UTC m=+0.058882844 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:28:25 localhost podman[220095]: 2025-11-23 09:28:25.850495807 +0000 UTC m=+0.070056439 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, 
config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 23 04:28:25 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:28:26 localhost python3.9[220096]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:28:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6013 DF PROTO=TCP SPT=36638 DPT=9102 SEQ=2778346917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78D7CF0000000001030307) Nov 23 04:28:26 localhost python3.9[220225]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:28:28 localhost python3.9[220336]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:28 localhost python3.9[220446]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:29 localhost python3.9[220556]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:30 localhost python3.9[220666]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28302 DF PROTO=TCP SPT=38256 DPT=9100 SEQ=2225063653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78E63E0000000001030307) Nov 23 04:28:30 localhost python3.9[220776]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28303 DF PROTO=TCP SPT=38256 DPT=9100 SEQ=2225063653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78EA4F0000000001030307) Nov 23 04:28:31 localhost python3.9[220886]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:31 localhost python3.9[220996]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:32 localhost python3.9[221106]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:33 localhost python3.9[221216]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:33 localhost python3.9[221326]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60236 DF PROTO=TCP SPT=56038 DPT=9100 SEQ=3475491981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B78F60F0000000001030307) Nov 23 04:28:34 localhost python3.9[221436]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:35 localhost python3.9[221546]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:35 localhost python3.9[221656]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:36 localhost python3.9[221766]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28305 DF PROTO=TCP SPT=38256 DPT=9100 SEQ=2225063653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79020F0000000001030307) Nov 23 04:28:37 localhost python3.9[221876]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:38 localhost python3.9[221986]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:28:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18694 DF PROTO=TCP SPT=49048 DPT=9105 SEQ=1081504356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B790BCF0000000001030307) Nov 23 04:28:40 localhost python3.9[222096]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || 
systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:28:41 localhost python3.9[222206]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 23 04:28:42 localhost python3.9[222316]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:28:42 localhost systemd[1]: Reloading. Nov 23 04:28:42 localhost systemd-rc-local-generator[222342]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:28:42 localhost systemd-sysv-generator[222345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
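Note: the certmonger task above is logged with journald's newline escape (#012). Expanded, the shell it ran is:
    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi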
Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:28:43 localhost python3.9[222513]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:28:43 localhost python3.9[222641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:28:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18695 DF PROTO=TCP SPT=49048 DPT=9105 SEQ=1081504356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B791B8F0000000001030307) Nov 23 04:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
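Note: the reset-failed tasks here and in the following entries clear any lingering failed state for the deleted tripleo_nova_* units; a hedged one-liner equivalent:
    for svc in tripleo_nova_{compute,migration_target,api_cron,api,conductor,metadata,scheduler,vnc_proxy}; do
      systemctl reset-failed "${svc}.service"
    done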
Nov 23 04:28:45 localhost podman[222771]: 2025-11-23 09:28:45.293140903 +0000 UTC m=+0.093677485 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent) Nov 23 04:28:45 localhost podman[222771]: 2025-11-23 09:28:45.303236267 +0000 UTC m=+0.103772889 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 04:28:45 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:28:45 localhost podman[222772]: 2025-11-23 09:28:45.349895603 +0000 UTC m=+0.150453886 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 04:28:45 localhost python3.9[222770]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:28:45 localhost podman[222772]: 2025-11-23 09:28:45.428394406 +0000 UTC m=+0.228952709 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:28:45 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:28:45 localhost python3.9[222924]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:28:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16412 DF PROTO=TCP SPT=49826 DPT=9882 SEQ=3995809133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7928420000000001030307) Nov 23 04:28:47 localhost python3.9[223035]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:28:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14980 DF PROTO=TCP SPT=47516 DPT=9882 SEQ=3577361259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7930100000000001030307) Nov 23 04:28:49 localhost python3.9[223146]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:28:49 localhost python3.9[223257]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:28:51 localhost python3.9[223368]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:28:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43429 DF PROTO=TCP SPT=47340 DPT=9101 SEQ=594326414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B793C0F0000000001030307) Nov 23 04:28:53 localhost python3.9[223479]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:28:54 localhost python3.9[223589]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers 
setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:28:54 localhost python3.9[223699]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:28:55 localhost python3.9[223809]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:28:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:28:56 localhost podman[223920]: 2025-11-23 09:28:56.189606843 +0000 UTC m=+0.083969223 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 23 04:28:56 localhost podman[223920]: 2025-11-23 09:28:56.200433968 +0000 UTC m=+0.094796378 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 04:28:56 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:28:56 localhost python3.9[223919]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:28:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46592 DF PROTO=TCP SPT=40438 DPT=9102 SEQ=4170426706 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B794CCF0000000001030307) Nov 23 04:28:56 localhost python3.9[224048]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:28:57 localhost python3.9[224158]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:28:58 localhost python3.9[224268]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:28:58 localhost python3.9[224378]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:28:59 localhost python3.9[224488]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:29:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45837 DF PROTO=TCP SPT=53726 DPT=9100 SEQ=2810341873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B795B6F0000000001030307) Nov 23 04:29:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45838 DF PROTO=TCP SPT=53726 DPT=9100 SEQ=2810341873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B795F8F0000000001030307) Nov 23 04:29:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53931 DF PROTO=TCP SPT=44078 DPT=9100 SEQ=1385511664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B796C100000000001030307) Nov 23 04:29:07 localhost python3.9[224598]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Nov 23 04:29:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45840 DF PROTO=TCP SPT=53726 DPT=9100 SEQ=2810341873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79774F0000000001030307) Nov 23 04:29:08 localhost python3.9[224709]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Nov 23 04:29:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:29:09.711 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:29:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:29:09.711 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:29:09 localhost 
ovn_metadata_agent[159410]: 2025-11-23 09:29:09.712 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:29:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24276 DF PROTO=TCP SPT=49744 DPT=9105 SEQ=2141634455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79810F0000000001030307) Nov 23 04:29:10 localhost python3.9[224825]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532584.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Nov 23 04:29:11 localhost sshd[224851]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:29:11 localhost systemd-logind[760]: New session 54 of user zuul. Nov 23 04:29:11 localhost systemd[1]: Started Session 54 of User zuul. Nov 23 04:29:11 localhost systemd[1]: session-54.scope: Deactivated successfully. Nov 23 04:29:11 localhost systemd-logind[760]: Session 54 logged out. Waiting for processes to exit. Nov 23 04:29:11 localhost systemd-logind[760]: Removed session 54. 
Nov 23 04:29:12 localhost python3.9[224962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:29:12 localhost python3.9[225048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890151.7421017-3388-28714349999368/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:29:13 localhost python3.9[225156]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:29:13 localhost python3.9[225211]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:29:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24277 DF PROTO=TCP SPT=49744 DPT=9105 SEQ=2141634455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7990D00000000001030307) Nov 23 04:29:14 localhost python3.9[225319]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:29:14 localhost python3.9[225405]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890153.9818273-3388-206900667583894/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:29:15 localhost python3.9[225514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:29:15 localhost systemd[1]: tmp-crun.Nnq5UZ.mount: Deactivated successfully. 
Nov 23 04:29:15 localhost podman[225592]: 2025-11-23 09:29:15.932556455 +0000 UTC m=+0.102091856 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller) Nov 23 04:29:16 localhost podman[225592]: 2025-11-23 09:29:16.034388593 +0000 UTC m=+0.203923974 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible) Nov 23 04:29:16 localhost python3.9[225612]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890155.103784-3388-38394429125559/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=4dc3e49f3c2a74cce1eec3b31509c1b3c95ac5ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:29:16 
localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:29:16 localhost podman[225588]: 2025-11-23 09:29:16.042584293 +0000 UTC m=+0.213166777 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 04:29:16 localhost podman[225588]: 2025-11-23 09:29:16.122879381 +0000 UTC m=+0.293461825 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 04:29:16 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:29:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64248 DF PROTO=TCP SPT=42408 DPT=9882 SEQ=319278619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B799D730000000001030307) Nov 23 04:29:17 localhost python3.9[225752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:29:17 localhost python3.9[225838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890156.6727765-3388-203503163999024/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:29:18 localhost python3.9[225946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:29:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16417 DF PROTO=TCP SPT=49826 DPT=9882 SEQ=3995809133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79A4100000000001030307) Nov 23 04:29:19 localhost python3.9[226032]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890157.7560308-3388-42406134349277/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:29:20 localhost python3.9[226142]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:29:21 localhost python3.9[226252]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova 
mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:29:22 localhost python3.9[226362]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:29:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57724 DF PROTO=TCP SPT=59228 DPT=9101 SEQ=2883545117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79B20F0000000001030307) Nov 23 04:29:22 localhost python3.9[226474]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:29:23 localhost python3.9[226582]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:29:24 localhost python3.9[226692]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:29:24 localhost python3.9[226778]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890163.9177275-3763-96165947770158/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:29:25 localhost python3.9[226886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:29:26 localhost python3.9[226972]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890165.1224017-3808-51007181172009/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:29:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29417 
DF PROTO=TCP SPT=36246 DPT=9102 SEQ=971505304 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79C20F0000000001030307) Nov 23 04:29:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:29:26 localhost systemd[1]: tmp-crun.F3sSYq.mount: Deactivated successfully. Nov 23 04:29:26 localhost podman[227068]: 2025-11-23 09:29:26.901942534 +0000 UTC m=+0.087066746 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:29:26 localhost podman[227068]: 2025-11-23 09:29:26.941456334 +0000 UTC m=+0.126580526 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 23 04:29:26 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:29:27 localhost python3.9[227095]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Nov 23 04:29:27 localhost python3.9[227211]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:29:28 localhost python3[227321]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:29:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6491 DF PROTO=TCP SPT=37314 DPT=9100 SEQ=3214441920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79D09F0000000001030307) Nov 23 04:29:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6492 DF PROTO=TCP SPT=37314 DPT=9100 SEQ=3214441920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79D48F0000000001030307) Nov 23 04:29:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28308 DF PROTO=TCP SPT=38256 DPT=9100 SEQ=2225063653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79E00F0000000001030307) Nov 23 04:29:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6494 DF PROTO=TCP SPT=37314 DPT=9100 SEQ=3214441920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79EC4F0000000001030307) Nov 23 04:29:38 localhost podman[227334]: 2025-11-23 09:29:28.900288361 +0000 UTC m=+0.046085721 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Nov 23 04:29:39 localhost podman[227395]: Nov 23 04:29:39 localhost podman[227395]: 2025-11-23 09:29:39.262489972 +0000 UTC m=+0.130724503 container create da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', 
'__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:29:39 localhost podman[227395]: 2025-11-23 09:29:39.165932027 +0000 UTC m=+0.034166568 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Nov 23 04:29:39 localhost python3[227321]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Nov 23 04:29:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13650 DF PROTO=TCP SPT=42356 DPT=9105 SEQ=1710561417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B79F64F0000000001030307) Nov 23 04:29:40 localhost python3.9[227543]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:29:41 localhost python3.9[227655]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Nov 23 04:29:42 localhost python3.9[227765]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:29:43 localhost python3[227875]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:29:43 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13651 DF PROTO=TCP SPT=42356 DPT=9105 SEQ=1710561417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A060F0000000001030307) Nov 23 04:29:44 localhost python3[227875]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66",#012 "Digest": "sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-21T06:33:31.011385583Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211770748,#012 "VirtualSize": 1211770748,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238/diff:/var/lib/containers/storage/overlay/0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012 "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012 "sha256:5a71e5d7d31f15255619cb8b9384b708744757c93993652418b0f45b0c0931d5",#012 "sha256:b9b598b1eb0c08906fe1bc9a64fc0e72719a6197d83669d2eb4309e69a00aa62",#012 "sha256:33e3811ab7487b27336fdf94252d5a875b17efb438cbc4ffc943f851ad3eceb6"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": 
"application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-18T01:56:49.795434035Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:49.795512415Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251118\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:52.547242013Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-21T06:10:01.947310748Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947327778Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947358359Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947372589Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94738527Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94739397Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:02.324930938Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:36.349393468Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Nov 23 04:29:44 localhost podman[227941]: 2025-11-23 09:29:44.113903325 +0000 UTC m=+0.093724201 container remove bcdf0c06f50900ff23a9dbf422173d255488a7893e5f049e064c33b490b2a0a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, architecture=x86_64, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '97cfc313337c76270fcb8497fac0e51e-b43218eec4380850a20e0a337fdcf6cf'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 23 04:29:44 localhost python3[227875]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Nov 23 04:29:44 localhost podman[227974]: Nov 23 04:29:44 localhost podman[227974]: 2025-11-23 09:29:44.220596391 +0000 UTC m=+0.088380237 container create e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:29:44 localhost podman[227974]: 2025-11-23 09:29:44.176817771 +0000 UTC m=+0.044601637 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Nov 23 04:29:44 localhost python3[227875]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Nov 23 04:29:45 localhost python3.9[228152]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:29:45 localhost python3.9[228282]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:29:46 localhost systemd[1]: tmp-crun.iikT1a.mount: Deactivated successfully. Nov 23 04:29:46 localhost podman[228392]: 2025-11-23 09:29:46.934910618 +0000 UTC m=+0.143071001 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:29:46 localhost podman[228392]: 2025-11-23 09:29:46.969413904 +0000 UTC m=+0.177574337 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Nov 23 04:29:46 localhost python3.9[228391]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890186.3965564-4084-189914105487876/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:29:46 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:29:47 localhost podman[228393]: 2025-11-23 09:29:46.909118949 +0000 UTC m=+0.116930801 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible) Nov 23 04:29:47 localhost podman[228393]: 2025-11-23 09:29:47.057499202 +0000 UTC m=+0.265311024 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:29:47 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:29:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19981 DF PROTO=TCP SPT=40832 DPT=9882 SEQ=1465175765 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A12A30000000001030307) Nov 23 04:29:47 localhost python3.9[228489]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:29:47 localhost systemd[1]: Reloading. Nov 23 04:29:47 localhost systemd-rc-local-generator[228510]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:29:47 localhost systemd-sysv-generator[228517]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64253 DF PROTO=TCP SPT=42408 DPT=9882 SEQ=319278619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A1A0F0000000001030307) Nov 23 04:29:49 localhost python3.9[228579]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:29:49 localhost systemd[1]: Reloading. Nov 23 04:29:49 localhost systemd-rc-local-generator[228604]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:29:49 localhost systemd-sysv-generator[228610]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:29:49 localhost systemd[1]: Starting nova_compute container... Nov 23 04:29:49 localhost systemd[1]: Started libcrun container. 
Nov 23 04:29:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Nov 23 04:29:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Nov 23 04:29:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 23 04:29:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 04:29:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 04:29:49 localhost podman[228620]: 2025-11-23 09:29:49.527673083 +0000 UTC m=+0.116239159 container init e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.build-date=20251118, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 23 04:29:49 localhost podman[228620]: 2025-11-23 09:29:49.537734032 +0000 UTC m=+0.126300098 container start e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:29:49 localhost podman[228620]: nova_compute Nov 23 04:29:49 localhost nova_compute[228635]: + sudo -E kolla_set_configs Nov 23 04:29:49 localhost systemd[1]: Started nova_compute container. Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Validating config file Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying service configuration files Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Deleting /etc/nova/nova.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/nova/nova.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Deleting /etc/ceph Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Creating directory /etc/ceph Nov 23 
04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/ceph Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Deleting /usr/sbin/iscsiadm Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Writing out command to execute Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:29:49 localhost nova_compute[228635]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Nov 23 04:29:49 localhost nova_compute[228635]: ++ cat /run_command Nov 23 04:29:49 localhost nova_compute[228635]: + CMD=nova-compute Nov 23 04:29:49 localhost nova_compute[228635]: + ARGS= Nov 23 04:29:49 localhost nova_compute[228635]: + sudo kolla_copy_cacerts Nov 23 04:29:49 localhost nova_compute[228635]: + [[ ! -n '' ]] Nov 23 04:29:49 localhost nova_compute[228635]: + . 
kolla_extend_start Nov 23 04:29:49 localhost nova_compute[228635]: Running command: 'nova-compute' Nov 23 04:29:49 localhost nova_compute[228635]: + echo 'Running command: '\''nova-compute'\''' Nov 23 04:29:49 localhost nova_compute[228635]: + umask 0022 Nov 23 04:29:49 localhost nova_compute[228635]: + exec nova-compute Nov 23 04:29:50 localhost python3.9[228755]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:29:51 localhost nova_compute[228635]: 2025-11-23 09:29:51.278 228639 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 23 04:29:51 localhost nova_compute[228635]: 2025-11-23 09:29:51.279 228639 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 23 04:29:51 localhost nova_compute[228635]: 2025-11-23 09:29:51.279 228639 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 23 04:29:51 localhost nova_compute[228635]: 2025-11-23 09:29:51.279 228639 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Nov 23 04:29:51 localhost nova_compute[228635]: 2025-11-23 09:29:51.389 228639 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:29:51 localhost nova_compute[228635]: 2025-11-23 09:29:51.411 228639 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:29:51 localhost nova_compute[228635]: 2025-11-23 09:29:51.412 228639 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Nov 23 04:29:51 localhost python3.9[228865]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:29:51 localhost nova_compute[228635]: 2025-11-23 09:29:51.892 228639 INFO nova.virt.driver [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.014 228639 INFO nova.compute.provider_config [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Nov 23 04:29:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13652 DF PROTO=TCP SPT=42356 DPT=9105 SEQ=1710561417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A260F0000000001030307) Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.059 228639 WARNING nova.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.060 228639 DEBUG oslo_concurrency.lockutils [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.060 228639 DEBUG oslo_concurrency.lockutils [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.060 228639 DEBUG oslo_concurrency.lockutils [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.060 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.061 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.061 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.061 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.061 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.061 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ================================================================================ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.062 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.062 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.062 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.062 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.062 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.062 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.063 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.063 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.063 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.063 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.063 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.064 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.064 228639 DEBUG oslo_service.service [None 
req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.064 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] console_host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.064 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.064 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.065 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.065 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.065 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.065 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.065 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.066 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.066 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.066 228639 DEBUG oslo_service.service [None 
req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.066 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.066 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.066 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.067 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.067 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.067 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.067 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.067 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.068 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.068 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.068 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.068 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 
09:29:52.068 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.069 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.069 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.069 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.069 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.069 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.070 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.070 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.070 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.070 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.070 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.071 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.071 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] log_config_append = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.071 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.071 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.071 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.072 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.072 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.072 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.072 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.072 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.072 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.073 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.073 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.073 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 
- - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.073 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.073 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.073 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.074 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.074 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.074 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.074 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.074 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.075 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.075 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.075 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.075 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 
09:29:52.075 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.076 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.076 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.076 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.076 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.076 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.077 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.077 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.077 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.077 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.077 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.077 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.078 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] pointer_model = usbtablet log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.078 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.078 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.078 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.078 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.079 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.079 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.079 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.079 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.079 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.079 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.080 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.080 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.080 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] rescue_timeout = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.080 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.080 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.081 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.081 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.081 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.081 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.081 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.081 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.082 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.082 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.082 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.082 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.082 228639 DEBUG oslo_service.service [None 
req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.083 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.083 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.083 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.083 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.083 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.083 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.084 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.084 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.084 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.084 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.084 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.084 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.085 228639 DEBUG 
oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.085 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.085 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.085 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.085 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.086 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.086 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.086 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.086 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.086 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.086 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.087 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.087 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.087 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - 
-] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.087 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.087 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.088 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.088 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.088 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.088 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.088 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.089 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.089 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.089 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.089 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.089 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.auth_strategy = keystone log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.090 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.090 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.090 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.090 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.090 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.090 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.091 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.091 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.091 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.091 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.091 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.092 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost 
nova_compute[228635]: 2025-11-23 09:29:52.092 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.092 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.092 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.092 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.093 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.093 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.093 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.093 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.093 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.094 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.094 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.094 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.094 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] 
cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.094 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.095 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.095 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.095 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.095 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.095 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.096 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.096 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.096 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.096 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.096 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.097 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.097 228639 DEBUG 
oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.097 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.097 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.097 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.097 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.098 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.098 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.098 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.098 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.098 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.099 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.099 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.099 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 
localhost nova_compute[228635]: 2025-11-23 09:29:52.099 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.099 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.100 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.100 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.100 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.100 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.100 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.100 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.101 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.101 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.101 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.101 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.101 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 
04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.102 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.102 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.102 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.102 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.102 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.102 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.103 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.103 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.103 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.103 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.103 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.104 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.104 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.max_disk_devices_to_attach = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.104 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.104 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.104 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.105 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.105 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.105 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.105 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.105 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.105 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.106 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.106 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.106 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 
2025-11-23 09:29:52.106 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.106 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.107 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.107 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.107 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.107 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.107 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.107 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.108 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.108 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.108 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.108 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.108 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost 
nova_compute[228635]: 2025-11-23 09:29:52.108 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.109 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.109 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.109 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.109 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.109 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.109 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.109 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.110 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.110 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.110 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.110 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.110 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.db_max_retry_interval = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.110 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.110 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.110 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.111 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.111 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.111 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.111 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.111 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.111 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.111 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.112 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.112 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.112 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - 
- - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.112 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.112 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.112 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.112 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.112 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.113 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.113 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.113 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.113 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.113 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.113 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.113 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 
2025-11-23 09:29:52.114 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.114 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.114 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.114 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.114 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.114 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.114 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.114 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.115 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.115 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.115 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.115 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.115 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.certfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.115 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.115 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.116 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.116 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.116 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.116 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.116 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.116 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.116 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.116 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.117 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.117 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.117 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 
- - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.117 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.117 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.117 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.117 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.117 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.118 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.118 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.118 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.118 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.118 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.118 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.118 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.119 228639 DEBUG oslo_service.service [None 
req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.119 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.119 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.119 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.119 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.119 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.119 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.119 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.120 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.120 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.120 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.120 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.120 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 
04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.120 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.120 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.120 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.121 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.121 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.121 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.121 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.121 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.121 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.122 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.122 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.122 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.122 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] 
image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.122 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.122 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.122 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.123 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.123 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.123 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.123 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.123 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.123 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.123 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.123 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.124 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 
09:29:52.124 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.124 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.124 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.124 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.124 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.124 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.124 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.125 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.125 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.125 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.125 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.125 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.125 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost 
nova_compute[228635]: 2025-11-23 09:29:52.125 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.125 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.126 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.126 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.126 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.126 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.126 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.126 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.126 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.127 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.127 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.127 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.127 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.127 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.127 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.127 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.128 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.128 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.128 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.128 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.128 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.128 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.128 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.128 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.129 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.129 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - 
-] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.129 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.129 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.129 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.129 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.129 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.129 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.130 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.130 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.130 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.130 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.130 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.130 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.130 228639 DEBUG 
oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.131 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.131 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.131 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.131 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.131 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.131 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.131 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.131 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.132 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.132 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.132 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.132 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.132 228639 DEBUG 
oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.132 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.132 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.133 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.133 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.133 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.133 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.133 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.133 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.133 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.133 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.134 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.134 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost 
nova_compute[228635]: 2025-11-23 09:29:52.134 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.134 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.134 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.134 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.134 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.135 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.135 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.135 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.135 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.135 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.135 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.135 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.135 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.device_detach_timeout = 20 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.136 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.136 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.136 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.136 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.136 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.136 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.136 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.137 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.137 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.137 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.137 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.137 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.137 
228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.137 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.137 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.138 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.138 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.138 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.138 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.138 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.138 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.138 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.139 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.139 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.139 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.139 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.139 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.139 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.139 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.139 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.140 228639 WARNING oslo_config.cfg [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Nov 23 04:29:52 localhost nova_compute[228635]: live_migration_uri is deprecated for removal in favor of two other options that Nov 23 04:29:52 localhost nova_compute[228635]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Nov 23 04:29:52 localhost nova_compute[228635]: and ``live_migration_inbound_addr`` respectively. Nov 23 04:29:52 localhost nova_compute[228635]: ). 
Its value may be silently ignored in the future.#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.140 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.140 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.140 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.140 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.140 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.141 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.141 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.141 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.141 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.141 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.141 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.141 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 
2025-11-23 09:29:52.142 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.142 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.142 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.142 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.142 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.142 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.142 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.rbd_secret_uuid = 46550e70-79cb-5f55-bf6d-1204b97e083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.143 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.143 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.143 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.143 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.143 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.143 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] 
libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.143 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.144 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.144 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.144 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.144 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.144 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.144 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.144 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.145 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.145 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.145 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.145 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 
2025-11-23 09:29:52.145 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.145 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.145 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.146 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.146 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.146 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.146 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.146 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.146 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.146 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.146 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.147 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.147 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.147 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.147 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.147 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.147 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.147 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.148 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.148 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.148 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.148 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.148 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.148 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.148 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.149 228639 DEBUG oslo_service.service [None 
req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.149 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.149 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.149 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.149 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.149 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.149 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.149 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.150 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.150 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.150 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.150 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.150 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.150 228639 DEBUG 
oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.150 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.150 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.151 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.151 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.151 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.151 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.151 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.151 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.151 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.152 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.152 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.152 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] pci.report_in_placement = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.152 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.152 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.152 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.152 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.152 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.153 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.153 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.153 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.153 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.153 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.153 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.153 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.154 228639 DEBUG oslo_service.service [None 
req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.154 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.154 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.154 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.154 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.154 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.154 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.154 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.155 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.155 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.155 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.155 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.155 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 
09:29:52.155 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.155 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.156 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.156 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.156 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.156 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.156 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.156 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.156 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.156 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.157 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.157 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.157 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 
localhost nova_compute[228635]: 2025-11-23 09:29:52.157 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.157 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.157 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.157 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.158 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.158 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.158 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.158 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.158 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.158 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.158 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.158 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.159 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] rdp.enabled = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.159 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.159 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.159 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.159 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.159 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.159 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.160 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.160 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.160 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.160 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.160 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.160 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] scheduler.workers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.160 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.161 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.161 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.161 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.161 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.161 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.161 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.161 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.162 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.162 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.162 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost 
nova_compute[228635]: 2025-11-23 09:29:52.162 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.162 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.162 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.162 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.162 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.163 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.163 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.163 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.163 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.163 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.163 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.163 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 
2025-11-23 09:29:52.164 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.164 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.164 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.164 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.164 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.164 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.164 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.165 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.165 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.165 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.165 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.165 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.165 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] 
service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.165 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.165 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.166 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.166 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.166 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.166 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.166 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.166 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.166 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.167 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.167 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.167 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.167 228639 DEBUG 
oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.167 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.167 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.167 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.168 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.168 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.168 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.168 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.168 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.168 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.168 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.169 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.169 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost 
nova_compute[228635]: 2025-11-23 09:29:52.169 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.169 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.169 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.169 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.169 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.169 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.170 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.170 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.170 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.170 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.170 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.170 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.170 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] 
vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.171 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.171 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.171 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.171 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.171 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.171 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.171 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.171 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.172 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.172 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.172 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.172 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.172 228639 DEBUG oslo_service.service [None 
req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.172 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.172 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.172 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.173 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.173 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.173 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.173 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.173 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.173 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.173 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.174 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.174 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
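Every entry in this dump carries the same marker, log_opt_values at oslo_config/cfg.py:2609: when the nova-compute service starts with debug logging enabled, oslo.service has it print each effective configuration option at DEBUG level, one entry per option, group by group (libvirt, neutron, placement, quota, scheduler, vnc, workarounds, and so on). The following is a minimal standalone sketch of that mechanism, not Nova's actual startup or option-registration code; the two options registered here are illustrative stand-ins that merely mirror values visible in the log above.

import logging
from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger('demo')

CONF = cfg.CONF
# Illustrative options only; real Nova registers its full option set per group.
CONF.register_opts(
    [cfg.StrOpt('virt_type', default='kvm'),
     cfg.IntOpt('rx_queue_size', default=512)],
    group='libvirt')

CONF([], project='demo')                 # parse with no CLI args or config files
CONF.log_opt_values(LOG, logging.DEBUG)  # one "group.option = value" DEBUG line per option

With debug logging enabled, the same call walks every registered option group in turn, which is why the dump above runs through each section of the effective nova configuration.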
Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.174 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.174 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.174 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.174 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.175 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.175 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.175 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.175 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.175 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.175 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.175 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.175 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.176 228639 DEBUG oslo_service.service [None 
req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.176 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.176 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.176 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.176 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.176 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.176 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.177 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.177 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.177 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.177 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.177 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.177 228639 DEBUG oslo_service.service 
[None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.177 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.177 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.178 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.178 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.178 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.178 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.178 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.178 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.178 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.178 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.179 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.179 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 
23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.179 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.179 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.179 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.179 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.179 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.180 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.180 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.180 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.180 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.180 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.180 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.180 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 
09:29:52.181 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.181 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.181 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.181 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.181 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.181 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.181 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.181 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.182 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.182 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.182 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.182 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.182 228639 DEBUG oslo_service.service [None 
req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.182 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.182 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.183 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.183 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.183 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.183 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.183 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.183 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.183 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.183 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.184 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.184 228639 DEBUG 
oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.184 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.184 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.184 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.184 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.184 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.185 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.185 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.185 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.185 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.185 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.185 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.185 228639 DEBUG oslo_service.service 
[None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.185 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.186 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.186 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.186 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.186 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.186 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.186 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.186 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.187 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.187 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.187 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.187 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.connect_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.187 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.187 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.187 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.187 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.188 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.188 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.188 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.188 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.188 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.188 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.188 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.189 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.189 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - 
- -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.189 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.189 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.189 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.189 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.189 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.189 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.190 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.190 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.190 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.190 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.190 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.190 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.190 228639 DEBUG 
oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.190 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.191 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.191 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.191 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.191 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.191 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.191 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.191 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.192 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.192 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.192 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.192 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = 
oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.192 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.192 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.192 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.192 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.193 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.193 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.193 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.193 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.193 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.193 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.193 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.194 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.194 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.194 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.194 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.194 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.194 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.194 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.194 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.195 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.195 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.195 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.195 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.195 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.195 228639 
DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.195 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.196 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.196 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.196 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.196 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.196 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.196 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.196 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.196 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.197 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.197 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.197 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] nova_sys_admin.user = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.197 228639 DEBUG oslo_service.service [None req-72d24dca-cb80-455e-8b0e-f34c5e452a66 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.198 228639 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.207 228639 INFO nova.virt.node [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Determined node identity c90c5769-42ab-40e9-92fc-3d82b4e96052 from /var/lib/nova/compute_id#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.208 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.208 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.208 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.209 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Nov 23 04:29:52 localhost systemd[1]: Started libvirt QEMU daemon. 
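With the option dump finished, the service starts the compute node, reads its node identity from /var/lib/nova/compute_id, spawns the libvirt event threads and opens the qemu:///system connection (host.py:492-620 above). A minimal libvirt-python sketch of that handshake, read-only and outside Nova, just to show the underlying API calls (the URI is the one from the log; everything else is illustrative):

    # Illustrative sketch of the handshake nova.virt.libvirt.host performs:
    # register an event loop, open qemu:///system, subscribe to lifecycle events.
    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Counterpart of the "Registering for lifecycle events" handler above.
        print('domain %s lifecycle event %d detail %d' % (dom.name(), event, detail))

    # The default event loop implementation must exist before callbacks register.
    libvirt.virEventRegisterDefaultImpl()

    conn = libvirt.openReadOnly('qemu:///system')   # Nova opens read-write
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)

    print(conn.getHostname(), conn.getLibVersion())
    # A real service then runs libvirt.virEventRunDefaultImpl() in a dedicated
    # thread, which is roughly what "Starting native event thread" refers to.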
Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.267 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.269 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.270 228639 INFO nova.virt.libvirt.driver [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Connection event '1' reason 'None'#033[00m Nov 23 04:29:52 localhost nova_compute[228635]: 2025-11-23 09:29:52.282 228639 DEBUG nova.virt.libvirt.volume.mount [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.171 228639 INFO nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Libvirt host capabilities
Nov 23 04:29:53 localhost nova_compute[228635]: [capabilities XML elided: the element markup was lost in extraction. Recoverable host values: UUID df69e9ed-ec8d-43d9-8710-8ff360287019, arch x86_64, CPU model EPYC-Rome-v4 (vendor AMD), migration transports tcp and rdma, memory 16116612 KiB with page counts 4029153/0/0, security models selinux (labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107). Guest support: hvm at wordsize 32 and 64 via emulator /usr/libexec/qemu-kvm, machine types pc-i440fx-rhel7.6.0 (alias pc) and the pc-q35-rhel* series from 7.6.0 through 9.8.0 (alias q35).]#033[00m
Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.181 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.199 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: [XML markup lost in capture; recoverable values follow]
Nov 23 04:29:53 localhost nova_compute[228635]:     path /usr/libexec/qemu-kvm, domain kvm, machine pc-i440fx-rhel7.6.0, arch i686
Nov 23 04:29:53 localhost nova_compute[228635]:     os loader: /usr/share/OVMF/OVMF_CODE.secboot.fd (types: rom, pflash)
Nov 23 04:29:53 localhost nova_compute[228635]:     host-model CPU: EPYC-Rome, vendor AMD
Nov 23 04:29:53 localhost nova_compute[228635]:     CPU models [per-model usable flags and feature lists lost in capture]: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4,
Nov 23 04:29:53 localhost nova_compute[228635]:     Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2,
Nov 23 04:29:53 localhost nova_compute[228635]:     Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4,
Nov 23 04:29:53 localhost nova_compute[228635]:     GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4,
Nov 23 04:29:53 localhost nova_compute[228635]:     Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1,
Nov 23 04:29:53 localhost nova_compute[228635]:     Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1,
Nov 23 04:29:53 localhost nova_compute[228635]:     SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1,
Nov 23 04:29:53 localhost nova_compute[228635]:     Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3
04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-v5 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v4 Nov 23 
04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Westmere Nov 23 04:29:53 localhost nova_compute[228635]: Westmere-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Westmere-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Westmere-v2 Nov 23 04:29:53 localhost nova_compute[228635]: athlon Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: athlon-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: core2duo Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: core2duo-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: coreduo Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: coreduo-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: kvm32 Nov 23 04:29:53 localhost nova_compute[228635]: kvm32-v1 Nov 23 04:29:53 localhost nova_compute[228635]: kvm64 Nov 23 04:29:53 localhost nova_compute[228635]: kvm64-v1 Nov 23 04:29:53 localhost nova_compute[228635]: n270 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: n270-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: pentium Nov 23 04:29:53 localhost nova_compute[228635]: pentium-v1 Nov 23 04:29:53 localhost nova_compute[228635]: pentium2 Nov 23 04:29:53 localhost nova_compute[228635]: pentium2-v1 Nov 23 04:29:53 localhost nova_compute[228635]: pentium3 Nov 23 04:29:53 localhost nova_compute[228635]: pentium3-v1 Nov 23 04:29:53 localhost nova_compute[228635]: phenom Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: phenom-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: qemu32 Nov 23 04:29:53 localhost nova_compute[228635]: qemu32-v1 Nov 23 04:29:53 localhost nova_compute[228635]: qemu64 
Nov 23 04:29:53 localhost nova_compute[228635]: qemu64-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: file Nov 23 04:29:53 localhost nova_compute[228635]: anonymous Nov 23 04:29:53 localhost nova_compute[228635]: memfd Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: disk Nov 23 04:29:53 localhost nova_compute[228635]: cdrom Nov 23 04:29:53 localhost nova_compute[228635]: floppy Nov 23 04:29:53 localhost nova_compute[228635]: lun Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: ide Nov 23 04:29:53 localhost nova_compute[228635]: fdc Nov 23 04:29:53 localhost nova_compute[228635]: scsi Nov 23 04:29:53 localhost nova_compute[228635]: virtio Nov 23 04:29:53 localhost nova_compute[228635]: usb Nov 23 04:29:53 localhost nova_compute[228635]: sata Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: virtio Nov 23 04:29:53 localhost nova_compute[228635]: virtio-transitional Nov 23 04:29:53 localhost nova_compute[228635]: virtio-non-transitional Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: vnc Nov 23 04:29:53 localhost nova_compute[228635]: egl-headless Nov 23 04:29:53 localhost nova_compute[228635]: dbus Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: subsystem Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: default Nov 23 04:29:53 localhost nova_compute[228635]: mandatory Nov 23 04:29:53 localhost nova_compute[228635]: requisite Nov 23 04:29:53 localhost nova_compute[228635]: optional Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: usb Nov 23 04:29:53 localhost nova_compute[228635]: pci Nov 23 04:29:53 localhost nova_compute[228635]: scsi Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: virtio Nov 23 04:29:53 localhost nova_compute[228635]: virtio-transitional Nov 23 04:29:53 localhost nova_compute[228635]: virtio-non-transitional Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: random Nov 23 04:29:53 localhost nova_compute[228635]: egd Nov 23 04:29:53 localhost nova_compute[228635]: builtin Nov 
23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: path Nov 23 04:29:53 localhost nova_compute[228635]: handle Nov 23 04:29:53 localhost nova_compute[228635]: virtiofs Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: tpm-tis Nov 23 04:29:53 localhost nova_compute[228635]: tpm-crb Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: emulator Nov 23 04:29:53 localhost nova_compute[228635]: external Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: 2.0 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: usb Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: pty Nov 23 04:29:53 localhost nova_compute[228635]: unix Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: qemu Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: builtin Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: default Nov 23 04:29:53 localhost nova_compute[228635]: passt Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: isa Nov 23 04:29:53 localhost nova_compute[228635]: hyperv Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: null Nov 23 04:29:53 localhost nova_compute[228635]: vc Nov 23 04:29:53 localhost nova_compute[228635]: pty Nov 23 04:29:53 localhost nova_compute[228635]: dev Nov 23 04:29:53 localhost nova_compute[228635]: file Nov 23 04:29:53 localhost nova_compute[228635]: pipe Nov 23 04:29:53 localhost nova_compute[228635]: stdio Nov 23 04:29:53 localhost nova_compute[228635]: udp Nov 23 04:29:53 localhost nova_compute[228635]: tcp Nov 23 04:29:53 localhost nova_compute[228635]: unix Nov 23 04:29:53 localhost nova_compute[228635]: qemu-vdagent Nov 23 04:29:53 localhost nova_compute[228635]: dbus Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost 
nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: relaxed Nov 23 04:29:53 localhost nova_compute[228635]: vapic Nov 23 04:29:53 localhost nova_compute[228635]: spinlocks Nov 23 04:29:53 localhost nova_compute[228635]: vpindex Nov 23 04:29:53 localhost nova_compute[228635]: runtime Nov 23 04:29:53 localhost nova_compute[228635]: synic Nov 23 04:29:53 localhost nova_compute[228635]: stimer Nov 23 04:29:53 localhost nova_compute[228635]: reset Nov 23 04:29:53 localhost nova_compute[228635]: vendor_id Nov 23 04:29:53 localhost nova_compute[228635]: frequencies Nov 23 04:29:53 localhost nova_compute[228635]: reenlightenment Nov 23 04:29:53 localhost nova_compute[228635]: tlbflush Nov 23 04:29:53 localhost nova_compute[228635]: ipi Nov 23 04:29:53 localhost nova_compute[228635]: avic Nov 23 04:29:53 localhost nova_compute[228635]: emsr_bitmap Nov 23 04:29:53 localhost nova_compute[228635]: xmm_input Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: 4095 Nov 23 04:29:53 localhost nova_compute[228635]: on Nov 23 04:29:53 localhost nova_compute[228635]: off Nov 23 04:29:53 localhost nova_compute[228635]: off Nov 23 04:29:53 localhost nova_compute[228635]: Linux KVM Hv Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: tdx Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.207 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: /usr/libexec/qemu-kvm Nov 23 04:29:53 localhost nova_compute[228635]: kvm Nov 23 04:29:53 localhost nova_compute[228635]: pc-q35-rhel9.8.0 Nov 23 04:29:53 localhost nova_compute[228635]: i686 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: /usr/share/OVMF/OVMF_CODE.secboot.fd Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: rom Nov 23 04:29:53 localhost nova_compute[228635]: pflash Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: 
Nov 23 04:29:53 localhost nova_compute[228635]: yes Nov 23 04:29:53 localhost nova_compute[228635]: no Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: no Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: on Nov 23 04:29:53 localhost nova_compute[228635]: off Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: on Nov 23 04:29:53 localhost nova_compute[228635]: off Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome Nov 23 04:29:53 localhost nova_compute[228635]: AMD Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: 486 Nov 23 04:29:53 localhost nova_compute[228635]: 486-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Broadwell Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Broadwell-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Broadwell-noTSX Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 
04:29:53 localhost nova_compute[228635]: Broadwell-noTSX-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Broadwell-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Broadwell-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Broadwell-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Broadwell-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Cascadelake-Server Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Cascadelake-Server-noTSX Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Cascadelake-Server-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 
04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Cascadelake-Server-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Cascadelake-Server-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Cascadelake-Server-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Cascadelake-Server-v5 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Conroe Nov 23 04:29:53 localhost nova_compute[228635]: Conroe-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Cooperlake Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost 
nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Cooperlake-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Cooperlake-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Denverton Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Denverton-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Denverton-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Denverton-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Dhyana Nov 23 04:29:53 localhost 
nova_compute[228635]: Dhyana-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Dhyana-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Genoa Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Genoa-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-IBPB Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Milan Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 
23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Milan-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Milan-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome-v4 Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-v1 Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-v2 Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: GraniteRapids Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 
04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: GraniteRapids-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost 
nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: GraniteRapids-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost 
nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-noTSX Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-noTSX-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server-noTSX Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 
04:29:53 localhost nova_compute[228635]: [libvirt domainCapabilities dump continues; XML markup stripped in capture. Recoverable CPU model values: Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1.]
Nov 23 04:29:53 localhost nova_compute[228635]: [Recoverable device enumerations from the same dump: memory backing source types file, anonymous, memfd; disk device types disk, cdrom, floppy, lun; disk buses fdc, scsi, virtio, usb, sata; disk models virtio, virtio-transitional, virtio-non-transitional; graphics types vnc, egl-headless, dbus; hostdev mode subsystem with startup policies default, mandatory, requisite, optional and subsystem types usb, pci, scsi; RNG models virtio, virtio-transitional, virtio-non-transitional with backends random, egd, builtin; filesystem driver types path, handle, virtiofs; TPM models tpm-tis, tpm-crb with backends emulator, external and backend version 2.0; redirdev bus usb.]
Nov 23 04:29:53 localhost nova_compute[228635]: [Further recoverable enumerations: channel types pty, unix; additional values qemu, builtin; interface backend types default, passt; panic models isa, hyperv; character device types null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; Hyper-V enlightenments relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; remaining values 4095, on, off, off, Linux KVM Hv, tdx. End of dump.] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.247 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.250 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 04:29:53 localhost nova_compute[228635]: [domainCapabilities dump for machine_type=pc; XML markup stripped in capture. Recoverable values: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-i440fx-rhel7.6.0, arch x86_64; OS loader /usr/share/OVMF/OVMF_CODE.secboot.fd with loader types rom, pflash and enum values yes, no, no, on, off, on, off.]
04:29:53 localhost nova_compute[228635]: [pc dump continues: host-model CPU EPYC-Rome, vendor AMD; custom CPU model values: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2; the dump continues on the following lines.] Nov
23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-Rome-v4 Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-v1 Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-v2 Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: EPYC-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: GraniteRapids Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost 
nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: GraniteRapids-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: GraniteRapids-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost 
nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-noTSX Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-noTSX-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 
04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Haswell-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server-noTSX Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 
localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 
04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server-v5 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server-v6 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server-v7 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 
23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: IvyBridge Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: IvyBridge-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: IvyBridge-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: IvyBridge-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: KnightsMill Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: KnightsMill-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nehalem Nov 23 04:29:53 localhost nova_compute[228635]: Nehalem-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nehalem-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nehalem-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Opteron_G1 Nov 23 04:29:53 localhost nova_compute[228635]: Opteron_G1-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Opteron_G2 Nov 23 04:29:53 localhost nova_compute[228635]: Opteron_G2-v1 Nov 23 04:29:53 localhost 
nova_compute[228635]: Opteron_G3 Nov 23 04:29:53 localhost nova_compute[228635]: Opteron_G3-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Opteron_G4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Opteron_G4-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Opteron_G5 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Opteron_G5-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Penryn Nov 23 04:29:53 localhost nova_compute[228635]: Penryn-v1 Nov 23 04:29:53 localhost nova_compute[228635]: SandyBridge Nov 23 04:29:53 localhost nova_compute[228635]: SandyBridge-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: SandyBridge-v1 Nov 23 04:29:53 localhost nova_compute[228635]: SandyBridge-v2 Nov 23 04:29:53 localhost nova_compute[228635]: SapphireRapids Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 
04:29:53 localhost nova_compute[228635]: SapphireRapids-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: SapphireRapids-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 
localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: SapphireRapids-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: SierraForest Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost 
nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: SierraForest-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Client Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Client-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Client-noTSX-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Client-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Client-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 
04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Client-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Client-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-noTSX-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost 
nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-v5 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost 
nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Westmere Nov 23 04:29:53 localhost nova_compute[228635]: Westmere-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Westmere-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Westmere-v2 Nov 23 04:29:53 localhost nova_compute[228635]: athlon Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: athlon-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: core2duo Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: core2duo-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: coreduo Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: coreduo-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: kvm32 Nov 23 04:29:53 localhost nova_compute[228635]: kvm32-v1 Nov 23 04:29:53 localhost nova_compute[228635]: kvm64 Nov 23 04:29:53 localhost nova_compute[228635]: kvm64-v1 Nov 23 04:29:53 localhost nova_compute[228635]: n270 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost 
nova_compute[228635]: [libvirt domain capabilities, continued] CPU models (cont.): n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Nov 23 04:29:53 localhost nova_compute[228635]: memory backing source types: file, anonymous, memfd
Nov 23 04:29:53 localhost nova_compute[228635]: disk devices: disk, cdrom, floppy, lun; disk buses: ide, fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional
Nov 23 04:29:53 localhost nova_compute[228635]: graphics types: vnc, egl-headless, dbus
Nov 23 04:29:53 localhost nova_compute[228635]: hostdev mode: subsystem; startup policy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Nov 23 04:29:53 localhost nova_compute[228635]: rng models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin
Nov 23 04:29:53 localhost nova_compute[228635]: filesystem driver types: path, handle, virtiofs
Nov 23 04:29:53 localhost nova_compute[228635]: tpm models: tpm-tis, tpm-crb; tpm backends: emulator, external; tpm backend version: 2.0
Nov 23 04:29:53 localhost nova_compute[228635]: redirdev bus: usb; channel types: pty, unix; crypto: qemu, builtin; interface backends: default, passt
Nov 23 04:29:53 localhost nova_compute[228635]: panic models: isa, hyperv
Nov 23 04:29:53 localhost nova_compute[228635]: character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Nov 23 04:29:53 localhost nova_compute[228635]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
Nov 23 04:29:53 localhost nova_compute[228635]: 4095, on, off, off, Linux KVM Hv
Nov 23 04:29:53 localhost nova_compute[228635]: launch security: tdx
Nov 23 04:29:53 localhost nova_compute[228635]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.318 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 04:29:53 localhost nova_compute[228635]: emulator path: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-q35-rhel9.8.0; arch: x86_64
Nov 23 04:29:53 localhost nova_compute[228635]: os firmware: efi; loader values: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: yes, no
Nov 23 04:29:53 localhost nova_compute[228635]: on, off; on, off
Nov 23 04:29:53 localhost nova_compute[228635]: host-model CPU: EPYC-Rome (vendor: AMD)
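The DEBUG records above come from nova's _get_domain_capabilities helper (host.py:1037), which asks libvirt for the domain-capabilities document of /usr/libexec/qemu-kvm (kvm, x86_64, q35) and logs it verbatim. As a minimal illustrative sketch, not nova's implementation, the same document can be fetched directly with libvirt-python, assuming the bindings are installed and a local qemu:///system libvirtd is reachable:

#!/usr/bin/env python3
# Illustrative sketch only (not nova's code): fetch the same libvirt
# capability documents that nova-compute dumps in the DEBUG records above.
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open("qemu:///system")
try:
    # Host capabilities: CPU topology, NUMA cells, supported guest archs.
    host_caps_xml = conn.getCapabilities()

    # Domain capabilities for one emulator/arch/machine/virt-type combination;
    # the values logged above (firmware paths, device enums, CPU models,
    # hyperv features) come from a document of this kind.
    dom_caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",   # emulator binary, as shown in the log
        "x86_64",                  # architecture
        "q35",                     # machine type
        "kvm",                     # virtualization type
        0,                         # flags (unused)
    )

    # Example: pull out the named CPU models of the 'custom' CPU mode,
    # i.e. the long model list that follows in the log.
    root = ET.fromstring(dom_caps_xml)
    models = [m.text for m in root.findall("./cpu/mode[@name='custom']/model")]
    print(len(host_caps_xml), len(models))
finally:
    conn.close()

From the command line, `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm` should print the same XML, which is usually the quicker way to check why a CPU model or firmware option is missing on a compute node.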
Nov 23 04:29:53 localhost nova_compute[228635]: custom CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4,
Nov 23 04:29:53 localhost nova_compute[228635]: Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5,
Nov 23 04:29:53 localhost nova_compute[228635]: Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2,
Nov 23 04:29:53 localhost nova_compute[228635]: EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4,
Nov 23 04:29:53 localhost nova_compute[228635]: GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4,
Nov 23 04:29:53 localhost nova_compute[228635]: Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7,
Nov 23 04:29:53 localhost nova_compute[228635]: IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2,
Nov 23 04:29:53 localhost nova_compute[228635]: Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1,
Nov 23 04:29:53 localhost nova_compute[228635]: Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3,
Nov 23 04:29:53 localhost nova_compute[228635]: SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4,
Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3,
Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23
04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Skylake-Server-v5 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v2 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v3 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 
localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Snowridge-v4 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Westmere Nov 23 04:29:53 localhost nova_compute[228635]: Westmere-IBRS Nov 23 04:29:53 localhost nova_compute[228635]: Westmere-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Westmere-v2 Nov 23 04:29:53 localhost nova_compute[228635]: athlon Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: athlon-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: core2duo Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: core2duo-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: coreduo Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: coreduo-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: kvm32 Nov 23 04:29:53 localhost nova_compute[228635]: kvm32-v1 Nov 23 04:29:53 localhost nova_compute[228635]: kvm64 Nov 23 04:29:53 localhost nova_compute[228635]: kvm64-v1 Nov 23 04:29:53 localhost nova_compute[228635]: n270 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: n270-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: pentium Nov 23 04:29:53 localhost nova_compute[228635]: pentium-v1 Nov 23 04:29:53 localhost nova_compute[228635]: pentium2 Nov 23 04:29:53 localhost nova_compute[228635]: pentium2-v1 Nov 23 04:29:53 localhost nova_compute[228635]: pentium3 Nov 23 04:29:53 localhost nova_compute[228635]: pentium3-v1 Nov 23 04:29:53 localhost nova_compute[228635]: phenom Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: phenom-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 
localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: qemu32 Nov 23 04:29:53 localhost nova_compute[228635]: qemu32-v1 Nov 23 04:29:53 localhost nova_compute[228635]: qemu64 Nov 23 04:29:53 localhost nova_compute[228635]: qemu64-v1 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: file Nov 23 04:29:53 localhost nova_compute[228635]: anonymous Nov 23 04:29:53 localhost nova_compute[228635]: memfd Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: disk Nov 23 04:29:53 localhost nova_compute[228635]: cdrom Nov 23 04:29:53 localhost nova_compute[228635]: floppy Nov 23 04:29:53 localhost nova_compute[228635]: lun Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: fdc Nov 23 04:29:53 localhost nova_compute[228635]: scsi Nov 23 04:29:53 localhost nova_compute[228635]: virtio Nov 23 04:29:53 localhost nova_compute[228635]: usb Nov 23 04:29:53 localhost nova_compute[228635]: sata Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: virtio Nov 23 04:29:53 localhost nova_compute[228635]: virtio-transitional Nov 23 04:29:53 localhost nova_compute[228635]: virtio-non-transitional Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: vnc Nov 23 04:29:53 localhost nova_compute[228635]: egl-headless Nov 23 04:29:53 localhost nova_compute[228635]: dbus Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: subsystem Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: default Nov 23 04:29:53 localhost nova_compute[228635]: mandatory Nov 23 04:29:53 localhost nova_compute[228635]: requisite Nov 23 04:29:53 localhost nova_compute[228635]: optional Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: usb Nov 23 04:29:53 localhost nova_compute[228635]: pci Nov 23 04:29:53 localhost nova_compute[228635]: scsi Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: virtio Nov 23 04:29:53 localhost nova_compute[228635]: virtio-transitional Nov 23 04:29:53 localhost nova_compute[228635]: virtio-non-transitional Nov 23 04:29:53 
localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: random Nov 23 04:29:53 localhost nova_compute[228635]: egd Nov 23 04:29:53 localhost nova_compute[228635]: builtin Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: path Nov 23 04:29:53 localhost nova_compute[228635]: handle Nov 23 04:29:53 localhost nova_compute[228635]: virtiofs Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: tpm-tis Nov 23 04:29:53 localhost nova_compute[228635]: tpm-crb Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: emulator Nov 23 04:29:53 localhost nova_compute[228635]: external Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: 2.0 Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: usb Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: pty Nov 23 04:29:53 localhost nova_compute[228635]: unix Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: qemu Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: builtin Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: default Nov 23 04:29:53 localhost nova_compute[228635]: passt Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: isa Nov 23 04:29:53 localhost nova_compute[228635]: hyperv Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: null Nov 23 04:29:53 localhost nova_compute[228635]: vc Nov 23 04:29:53 localhost nova_compute[228635]: pty Nov 23 04:29:53 localhost nova_compute[228635]: dev Nov 23 04:29:53 localhost nova_compute[228635]: file Nov 23 04:29:53 localhost nova_compute[228635]: pipe Nov 23 04:29:53 localhost nova_compute[228635]: stdio Nov 23 04:29:53 localhost nova_compute[228635]: udp Nov 23 04:29:53 localhost nova_compute[228635]: tcp Nov 23 04:29:53 localhost 
nova_compute[228635]: unix Nov 23 04:29:53 localhost nova_compute[228635]: qemu-vdagent Nov 23 04:29:53 localhost nova_compute[228635]: dbus Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: relaxed Nov 23 04:29:53 localhost nova_compute[228635]: vapic Nov 23 04:29:53 localhost nova_compute[228635]: spinlocks Nov 23 04:29:53 localhost nova_compute[228635]: vpindex Nov 23 04:29:53 localhost nova_compute[228635]: runtime Nov 23 04:29:53 localhost nova_compute[228635]: synic Nov 23 04:29:53 localhost nova_compute[228635]: stimer Nov 23 04:29:53 localhost nova_compute[228635]: reset Nov 23 04:29:53 localhost nova_compute[228635]: vendor_id Nov 23 04:29:53 localhost nova_compute[228635]: frequencies Nov 23 04:29:53 localhost nova_compute[228635]: reenlightenment Nov 23 04:29:53 localhost nova_compute[228635]: tlbflush Nov 23 04:29:53 localhost nova_compute[228635]: ipi Nov 23 04:29:53 localhost nova_compute[228635]: avic Nov 23 04:29:53 localhost nova_compute[228635]: emsr_bitmap Nov 23 04:29:53 localhost nova_compute[228635]: xmm_input Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: 4095 Nov 23 04:29:53 localhost nova_compute[228635]: on Nov 23 04:29:53 localhost nova_compute[228635]: off Nov 23 04:29:53 localhost nova_compute[228635]: off Nov 23 04:29:53 localhost nova_compute[228635]: Linux KVM Hv Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: tdx Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: Nov 23 04:29:53 localhost nova_compute[228635]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.366 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.367 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.367 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Checking secure boot support for host arch 
(x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.367 228639 INFO nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Secure Boot support detected#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.369 228639 INFO nova.virt.libvirt.driver [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.369 228639 INFO nova.virt.libvirt.driver [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.380 228639 DEBUG nova.virt.libvirt.driver [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.400 228639 INFO nova.virt.node [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Determined node identity c90c5769-42ab-40e9-92fc-3d82b4e96052 from /var/lib/nova/compute_id#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.418 228639 DEBUG nova.compute.manager [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Verified node c90c5769-42ab-40e9-92fc-3d82b4e96052 matches my host np0005532584.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.439 228639 INFO nova.compute.manager [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Nov 23 04:29:53 localhost nova_compute[228635]: 2025-11-23 09:29:53.892 228639 INFO nova.service [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Updating service version for nova-compute on np0005532584.localdomain from 57 to 66#033[00m Nov 23 04:29:54 localhost nova_compute[228635]: 2025-11-23 09:29:54.122 228639 DEBUG oslo_concurrency.lockutils [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:29:54 localhost nova_compute[228635]: 2025-11-23 09:29:54.123 228639 DEBUG oslo_concurrency.lockutils [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:29:54 localhost nova_compute[228635]: 2025-11-23 09:29:54.123 228639 DEBUG oslo_concurrency.lockutils [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:29:54 localhost nova_compute[228635]: 2025-11-23 09:29:54.124 228639 DEBUG 
nova.compute.resource_tracker [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:29:54 localhost nova_compute[228635]: 2025-11-23 09:29:54.124 228639 DEBUG oslo_concurrency.processutils [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:29:54 localhost nova_compute[228635]: 2025-11-23 09:29:54.595 228639 DEBUG oslo_concurrency.processutils [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:29:54 localhost systemd[1]: Started libvirt nodedev daemon. Nov 23 04:29:54 localhost python3.9[229229]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:29:54 localhost nova_compute[228635]: 2025-11-23 09:29:54.978 228639 WARNING nova.virt.libvirt.driver [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:29:54 localhost nova_compute[228635]: 2025-11-23 09:29:54.979 228639 DEBUG nova.compute.resource_tracker [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=13598MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": 
"0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:29:54 localhost nova_compute[228635]: 2025-11-23 09:29:54.979 228639 DEBUG oslo_concurrency.lockutils [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:29:54 localhost nova_compute[228635]: 2025-11-23 09:29:54.979 228639 DEBUG oslo_concurrency.lockutils [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.111 228639 DEBUG nova.compute.resource_tracker [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.111 228639 DEBUG nova.compute.resource_tracker [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.186 228639 DEBUG nova.scheduler.client.report [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.202 228639 DEBUG nova.scheduler.client.report [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.202 228639 DEBUG nova.compute.provider_tree [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': 
{'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.216 228639 DEBUG nova.scheduler.client.report [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.236 228639 DEBUG nova.scheduler.client.report [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,HW_CPU_X86_AVX2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.262 228639 DEBUG oslo_concurrency.processutils [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.734 228639 DEBUG oslo_concurrency.processutils [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.741 228639 DEBUG nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Nov 23 04:29:55 localhost nova_compute[228635]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.742 228639 INFO nova.virt.libvirt.host [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] kernel doesn't support AMD SEV#033[00m Nov 23 04:29:55 localhost 
nova_compute[228635]: 2025-11-23 09:29:55.743 228639 DEBUG nova.compute.provider_tree [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.744 228639 DEBUG nova.virt.libvirt.driver [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.760 228639 DEBUG nova.scheduler.client.report [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.873 228639 DEBUG nova.compute.provider_tree [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Updating resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.893 228639 DEBUG nova.compute.resource_tracker [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.894 228639 DEBUG oslo_concurrency.lockutils [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.894 228639 DEBUG nova.service [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.918 228639 DEBUG nova.service [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Nov 23 04:29:55 localhost nova_compute[228635]: 2025-11-23 09:29:55.919 228639 DEBUG nova.servicegroup.drivers.db [None req-21ca4439-8f59-42c1-956e-e7c3216b3d7c - - - - - -] DB_Driver: join new ServiceGroup member np0005532584.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Nov 23 04:29:55 localhost python3.9[229488]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner 
state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Nov 23 04:29:56 localhost systemd-journald[47422]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 120.1 (400 of 333 items), suggesting rotation. Nov 23 04:29:56 localhost systemd-journald[47422]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 23 04:29:56 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:29:56 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:29:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60208 DF PROTO=TCP SPT=52964 DPT=9102 SEQ=2369766371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A37500000000001030307) Nov 23 04:29:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:29:57 localhost systemd[1]: tmp-crun.HdjGHV.mount: Deactivated successfully. Nov 23 04:29:57 localhost podman[229533]: 2025-11-23 09:29:57.905784723 +0000 UTC m=+0.093421770 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Nov 23 04:29:57 localhost podman[229533]: 2025-11-23 09:29:57.923154725 +0000 UTC m=+0.110791772 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:29:57 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:29:58 localhost python3.9[229645]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:29:58 localhost systemd[1]: Stopping nova_compute container... Nov 23 04:29:58 localhost systemd[1]: libpod-e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2.scope: Deactivated successfully. Nov 23 04:29:58 localhost journal[229251]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Nov 23 04:29:58 localhost journal[229251]: hostname: np0005532584.localdomain Nov 23 04:29:58 localhost journal[229251]: End of file while reading data: Input/output error Nov 23 04:29:58 localhost systemd[1]: libpod-e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2.scope: Consumed 3.398s CPU time. Nov 23 04:29:58 localhost podman[229649]: 2025-11-23 09:29:58.764825543 +0000 UTC m=+0.085056196 container died e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 04:29:58 localhost podman[229649]: 2025-11-23 09:29:58.840675745 +0000 UTC m=+0.160906398 container cleanup 
e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:29:58 localhost podman[229649]: nova_compute Nov 23 04:29:58 localhost systemd[1]: tmp-crun.vtboPN.mount: Deactivated successfully. Nov 23 04:29:58 localhost systemd[1]: var-lib-containers-storage-overlay-bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457-merged.mount: Deactivated successfully. Nov 23 04:29:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2-userdata-shm.mount: Deactivated successfully. 
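The resource tracker entries at 09:29:54-09:29:55 above report the inventory this node exposes to Placement: VCPU total=8 with allocation_ratio=16.0, MEMORY_MB total=15738 with 512 reserved, DISK_GB total=41. A minimal sketch of the capacity those numbers imply, assuming Placement's usual (total - reserved) * allocation_ratio rule; the helper below is illustrative only and is not part of nova or placement.

    # Illustrative only: capacity math implied by the inventory logged at 09:29:55.
    # Assumes the Placement rule: usable = (total - reserved) * allocation_ratio.
    def usable(total, reserved, allocation_ratio):
        return (total - reserved) * allocation_ratio

    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }

    for rc, inv in inventory.items():
        print(rc, usable(**inv))   # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0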
Nov 23 04:29:58 localhost podman[229691]: error opening file `/run/crun/e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2/status`: No such file or directory Nov 23 04:29:58 localhost podman[229679]: 2025-11-23 09:29:58.954574002 +0000 UTC m=+0.073543333 container cleanup e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Nov 23 04:29:58 localhost podman[229679]: nova_compute Nov 23 04:29:58 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Nov 23 04:29:58 localhost systemd[1]: Stopped nova_compute container. Nov 23 04:29:58 localhost systemd[1]: Starting nova_compute container... Nov 23 04:29:59 localhost systemd[1]: Started libcrun container. 
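The entries above show edpm_ansible restarting the nova_compute container through the edpm_nova_compute.service unit (stop, podman cleanup, start). A small sketch of how the restart could be confirmed from the host afterwards, using the unit and container names that appear in these entries; this is an operator-side illustration, not part of the deployment tooling.

    # Illustrative only: verify the restart recorded in the journal above.
    import subprocess

    def run(*cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

    print("unit:", run("systemctl", "is-active", "edpm_nova_compute.service"))
    print("container:", run("podman", "inspect", "--format",
                            "{{.State.Status}}", "nova_compute"))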
Nov 23 04:29:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Nov 23 04:29:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Nov 23 04:29:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 23 04:29:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 04:29:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 04:29:59 localhost podman[229693]: 2025-11-23 09:29:59.092840324 +0000 UTC m=+0.105050347 container init e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS) Nov 23 04:29:59 localhost podman[229693]: 2025-11-23 09:29:59.102947844 +0000 UTC m=+0.115157877 container start e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute) Nov 23 04:29:59 localhost podman[229693]: nova_compute Nov 23 04:29:59 localhost nova_compute[229707]: + sudo -E kolla_set_configs Nov 23 04:29:59 localhost systemd[1]: Started nova_compute container. Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Validating config file Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying service configuration files Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Deleting /etc/nova/nova.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/nova/nova.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf 
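Because the Kolla config strategy is COPY_ALWAYS, every start of this container repeats the same sequence logged above and continuing just below: delete the existing file, copy the source from /var/lib/kolla/config_files, then set permissions. A small stdlib sketch for turning those `INFO:__main__:` records into a copy/permission audit follows; the three regexes cover only the action strings visible in this log, and the `journalctl -t nova_compute` usage line is an assumption about how the journal would be filtered.

```python
import re
import sys
from collections import defaultdict

# kolla_set_configs logs each step as "INFO:__main__:<action> ..."; only the three
# actions visible in the surrounding records are handled here.
ACTIONS = {
    "copy": re.compile(r"INFO:__main__:Copying (?P<src>\S+) to (?P<dst>\S+)"),
    "delete": re.compile(r"INFO:__main__:Deleting (?P<path>\S+)"),
    "chmod": re.compile(r"INFO:__main__:Setting permission for (?P<path>\S+)"),
}


def audit_config_copy(lines):
    """Group kolla_set_configs actions from journal text into an action -> details map."""
    report = defaultdict(list)
    for line in lines:
        for action, pattern in ACTIONS.items():
            m = pattern.search(line)
            if m:
                report[action].append(m.groupdict())
                break  # each record describes exactly one action
    return report


if __name__ == "__main__":
    # Hypothetical usage: journalctl -t nova_compute | python kolla_copy_audit.py
    report = audit_config_copy(sys.stdin)
    for entry in report["copy"]:
        print(f"installed {entry['dst']}  (from {entry['src']})")
    print(f"{len(report['delete'])} deletions, {len(report['chmod'])} permission updates")
```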
Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Deleting /etc/ceph Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Creating directory /etc/ceph Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/ceph Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Deleting /usr/sbin/iscsiadm Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Writing out command to execute Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:29:59 localhost nova_compute[229707]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Nov 23 04:29:59 localhost nova_compute[229707]: ++ cat /run_command Nov 23 04:29:59 localhost nova_compute[229707]: + CMD=nova-compute Nov 23 04:29:59 localhost nova_compute[229707]: + ARGS= Nov 23 04:29:59 localhost nova_compute[229707]: + sudo kolla_copy_cacerts Nov 23 04:29:59 localhost nova_compute[229707]: + [[ ! -n '' ]] Nov 23 04:29:59 localhost nova_compute[229707]: + . 
kolla_extend_start Nov 23 04:29:59 localhost nova_compute[229707]: Running command: 'nova-compute' Nov 23 04:29:59 localhost nova_compute[229707]: + echo 'Running command: '\''nova-compute'\''' Nov 23 04:29:59 localhost nova_compute[229707]: + umask 0022 Nov 23 04:29:59 localhost nova_compute[229707]: + exec nova-compute Nov 23 04:29:59 localhost systemd[1]: tmp-crun.eTGjRL.mount: Deactivated successfully. Nov 23 04:30:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39055 DF PROTO=TCP SPT=45604 DPT=9100 SEQ=610399048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A45CE0000000001030307) Nov 23 04:30:00 localhost nova_compute[229707]: 2025-11-23 09:30:00.809 229711 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 23 04:30:00 localhost nova_compute[229707]: 2025-11-23 09:30:00.809 229711 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 23 04:30:00 localhost nova_compute[229707]: 2025-11-23 09:30:00.810 229711 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 23 04:30:00 localhost nova_compute[229707]: 2025-11-23 09:30:00.810 229711 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Nov 23 04:30:00 localhost nova_compute[229707]: 2025-11-23 09:30:00.919 229711 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:30:00 localhost nova_compute[229707]: 2025-11-23 09:30:00.938 229711 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:30:00 localhost nova_compute[229707]: 2025-11-23 09:30:00.938 229711 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Nov 23 04:30:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39056 DF PROTO=TCP SPT=45604 DPT=9100 SEQ=610399048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A49CF0000000001030307) Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.380 229711 INFO nova.virt.driver [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.499 229711 INFO nova.compute.provider_config [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.512 229711 WARNING nova.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.512 229711 DEBUG oslo_concurrency.lockutils [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.512 229711 DEBUG oslo_concurrency.lockutils [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.512 229711 DEBUG oslo_concurrency.lockutils [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.513 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.513 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.513 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.513 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.513 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.514 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.514 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 
localhost nova_compute[229707]: 2025-11-23 09:30:01.514 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.514 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.514 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.514 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.515 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.515 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.515 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.515 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.515 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.516 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.516 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.516 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.516 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] console_host = np0005532584.localdomain log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.516 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.517 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.517 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.517 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.517 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.517 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.517 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.518 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.518 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.518 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.518 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] enable_new_services = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.518 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.519 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.519 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.519 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.519 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.519 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.520 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.520 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.520 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.520 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.520 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.521 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.521 229711 DEBUG 
oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.521 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.521 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.521 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.522 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.522 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.522 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.522 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.522 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.523 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.523 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.523 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.523 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 
23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.523 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.523 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.524 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.524 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.524 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.524 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.524 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.524 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.525 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.525 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.525 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.525 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - 
- - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.525 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.526 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.526 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.526 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.526 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.526 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.526 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.527 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.527 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.527 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.527 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.527 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.528 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f 
- - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.528 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.528 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.528 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.528 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.529 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.529 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.529 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.529 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.529 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.530 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.530 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.530 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.530 229711 DEBUG 
oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.530 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.530 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.531 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.531 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.531 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.531 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.531 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.532 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.532 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.532 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.532 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.532 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.532 229711 DEBUG oslo_service.service [None 
req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.533 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.533 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.533 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.533 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.533 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.534 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.534 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.534 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.534 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.534 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.534 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.535 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 
localhost nova_compute[229707]: 2025-11-23 09:30:01.535 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.535 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.535 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.535 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.536 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.536 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.536 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.536 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.536 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.536 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.537 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.537 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.537 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 
2025-11-23 09:30:01.537 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.537 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.538 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.538 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.538 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.538 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.538 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.538 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.539 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.539 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.539 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.539 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.539 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.540 229711 DEBUG oslo_service.service [None 
req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.540 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.540 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.540 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.540 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.540 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.541 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.541 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.541 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.541 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.541 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.542 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.542 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.config_drive_skip_versions = 
1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.542 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.542 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.542 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.543 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.543 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.543 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.543 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.543 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.543 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.544 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.544 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.544 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 
23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.544 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.544 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.545 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.545 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.545 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.545 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.545 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.546 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.546 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.546 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.546 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.546 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.547 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] 
cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.547 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.547 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.547 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.547 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.547 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.548 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.548 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.548 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.548 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.548 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.549 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.549 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 
09:30:01.549 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.549 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.549 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.549 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.550 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.550 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.550 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.550 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.550 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.551 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.551 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.551 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.551 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 
localhost nova_compute[229707]: 2025-11-23 09:30:01.551 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.551 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.552 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.552 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.552 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.552 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.552 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.553 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.553 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.553 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.553 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.553 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.554 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 
04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.554 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.554 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.554 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.554 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.554 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.555 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.555 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.555 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.555 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.555 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.556 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.556 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.556 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] 
compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.556 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.556 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.556 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.557 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.557 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.557 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.557 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.557 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.558 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.558 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.558 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.558 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 
2025-11-23 09:30:01.558 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.559 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.559 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.559 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.559 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.559 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.559 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.560 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.560 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.560 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.560 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.560 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.560 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost 
nova_compute[229707]: 2025-11-23 09:30:01.561 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.561 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.561 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.561 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.561 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.562 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.562 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.562 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.562 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.562 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.562 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.563 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.563 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.max_overflow = 50 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.563 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.563 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.563 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.563 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.563 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.564 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.564 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.564 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.564 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.564 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.564 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.564 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.564 229711 DEBUG oslo_service.service [None 
req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.565 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.565 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.565 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.565 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.565 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.565 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.565 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.566 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.566 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.566 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.566 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.566 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.566 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.566 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.566 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.567 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.567 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.567 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.567 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.567 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.567 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.567 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.567 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.568 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.568 229711 DEBUG oslo_service.service [None 
req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.568 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.568 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.568 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.568 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.568 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.569 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.569 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.569 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.569 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.569 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.569 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.569 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.569 229711 
DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.570 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.570 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.570 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.570 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.570 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.570 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.570 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.571 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.571 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.571 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.571 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.571 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 
09:30:01.571 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.571 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.571 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.572 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.572 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.572 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.572 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.572 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.572 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.572 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.572 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.573 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.573 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.power_state_event_polling_interval = 2 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.573 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.573 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.573 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.573 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.573 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.573 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.574 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.574 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.574 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.574 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.574 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.574 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 
09:30:01.575 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.575 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.575 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.575 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.575 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.575 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.575 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.576 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.576 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.576 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.576 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.576 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.576 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 
04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.576 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.576 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.577 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.577 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.577 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.577 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.577 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.577 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.577 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.577 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.578 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.578 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.578 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.578 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.578 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.578 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.578 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.578 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.579 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.579 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.579 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.579 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.579 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.579 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.579 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.580 229711 DEBUG oslo_service.service [None 
req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.580 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.580 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.580 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.580 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.580 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.580 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.580 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.581 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.581 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.581 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.581 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.581 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 
2025-11-23 09:30:01.581 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.581 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.581 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.582 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.582 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.582 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.582 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.582 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.582 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.582 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.582 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.583 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.583 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.kv_mountpoint = secret log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.583 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.583 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.583 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.583 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.583 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.583 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.584 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.584 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.584 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.584 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.584 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.584 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.584 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.connect_retry_delay = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.585 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.585 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.585 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.585 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.585 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.585 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.585 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.585 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.586 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.586 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.586 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.586 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.586 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - 
- -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.586 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.586 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.586 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.587 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.587 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.587 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.587 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.587 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.587 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.587 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.587 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.588 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 
2025-11-23 09:30:01.588 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.588 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.588 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.588 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.588 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.588 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.589 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.589 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.589 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.589 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.589 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.589 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.589 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.images_volume_group = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.589 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.590 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.590 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.590 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.590 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.590 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.590 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.590 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.590 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.591 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.591 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.591 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 
09:30:01.591 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.591 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.591 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.591 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.592 229711 WARNING oslo_config.cfg [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Nov 23 04:30:01 localhost nova_compute[229707]: live_migration_uri is deprecated for removal in favor of two other options that Nov 23 04:30:01 localhost nova_compute[229707]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Nov 23 04:30:01 localhost nova_compute[229707]: and ``live_migration_inbound_addr`` respectively. Nov 23 04:30:01 localhost nova_compute[229707]: ). Its value may be silently ignored in the future.#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.592 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.592 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.592 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.592 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.592 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.592 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.593 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.593 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.593 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.593 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.593 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.593 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.593 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.593 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.594 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.594 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.594 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.594 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.594 229711 DEBUG 
oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.rbd_secret_uuid = 46550e70-79cb-5f55-bf6d-1204b97e083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.594 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.594 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.595 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.595 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.595 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.595 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.595 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.595 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.595 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.595 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.596 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.596 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.snapshot_image_format = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.596 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.596 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.596 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.596 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.596 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.597 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.597 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.597 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.597 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.597 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.597 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.597 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.597 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f 
- - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.598 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.598 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.598 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.598 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.598 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.598 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.598 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.599 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.599 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.599 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.599 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.599 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 
localhost nova_compute[229707]: 2025-11-23 09:30:01.599 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.599 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.599 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.600 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.600 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.600 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.600 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.600 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.600 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.600 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.600 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.601 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.601 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.ovs_bridge = br-int log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.601 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.601 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.601 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.601 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.601 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.601 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.602 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.602 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.602 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.602 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.602 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.602 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.602 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f 
- - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.603 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.603 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.603 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.603 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.603 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.603 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.603 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.603 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.604 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.604 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.604 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.604 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost 
nova_compute[229707]: 2025-11-23 09:30:01.604 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.604 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.604 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.604 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.605 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.605 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.605 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.605 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.605 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.605 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.605 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.606 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.606 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.project_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.606 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.606 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.606 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.606 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.606 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.606 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.607 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.607 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.607 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.607 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.607 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.607 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.607 229711 DEBUG oslo_service.service [None 
req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.607 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.608 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.608 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.608 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.608 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.608 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.608 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.608 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.608 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.609 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.609 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.609 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 
2025-11-23 09:30:01.609 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.609 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.609 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.609 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.610 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.610 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.610 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.610 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.610 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.610 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.610 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.611 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.611 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.max_attempts = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.611 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.611 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.611 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.611 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.611 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.611 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.612 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.612 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.612 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.612 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.612 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.612 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.612 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.613 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.613 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.613 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.613 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.613 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.613 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.613 229711 DEBUG 
oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.613 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.614 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.614 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.614 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.614 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.614 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.614 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.614 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.615 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.615 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.615 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.615 229711 DEBUG 
oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.615 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.615 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.615 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.615 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.616 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.616 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.616 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.616 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.616 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.616 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.616 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.617 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_user.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.617 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.617 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.617 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.617 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.617 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.617 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.617 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.618 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.618 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.618 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.618 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.618 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.618 229711 DEBUG oslo_service.service [None 
req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.618 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.619 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.619 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.619 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.619 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.619 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.619 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.619 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.619 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.620 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.620 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.620 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost 
nova_compute[229707]: 2025-11-23 09:30:01.620 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.620 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.620 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.620 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.620 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.621 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.621 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.621 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.621 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.621 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.621 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.621 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.621 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.console_delay_seconds = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.622 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.622 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.622 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.622 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.622 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.622 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.622 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.623 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.623 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.623 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.623 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.623 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.623 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] 
vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.623 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.623 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.624 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.624 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.624 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.624 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.624 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.624 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.624 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.625 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.625 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.625 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.625 
229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.625 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.625 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.625 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.625 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.626 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.626 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.626 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.626 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.626 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.626 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.626 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.627 229711 DEBUG oslo_service.service [None 
req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.627 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.627 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.627 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.627 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.627 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.627 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.627 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.628 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.628 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.628 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.628 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.628 229711 DEBUG 
oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.628 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.628 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.628 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.629 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.629 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.629 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.629 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.629 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.629 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.629 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.629 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.630 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] zvm.ca_file = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.630 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.630 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.630 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.630 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.630 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.630 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.631 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.631 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.631 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.631 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.631 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.631 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost 
nova_compute[229707]: 2025-11-23 09:30:01.640 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.640 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.641 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.641 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.641 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.642 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.642 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.642 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.642 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.643 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.643 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.643 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 
09:30:01.644 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.644 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.644 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.644 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.645 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.645 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.645 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.646 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.646 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.646 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.646 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.647 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost 
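The oslo_messaging_rabbit.* values in this stretch (heartbeat settings, kombu reconnect behaviour, quorum-queue options and so on) are the knobs oslo.messaging reads when the service builds its RabbitMQ-backed RPC transport. A rough, standalone sketch of that wiring, not nova's code, with a placeholder broker URL:

    # Illustrative only: an RPC transport and client governed by the
    # [oslo_messaging_rabbit] options dumped above. The rabbit:// URL is a placeholder.
    import oslo_messaging
    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf([])  # a real service would also load its configuration file here
    transport = oslo_messaging.get_rpc_transport(
        conf, url='rabbit://guest:guest@127.0.0.1:5672/')
    target = oslo_messaging.Target(topic='compute', server='example-host')
    client = oslo_messaging.RPCClient(transport, target)
    # client.call(...) / client.cast(...) would now go through this transport.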
nova_compute[229707]: 2025-11-23 09:30:01.647 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.647 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.648 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.648 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.648 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.648 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.649 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.649 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.649 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.649 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.650 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.650 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.650 229711 DEBUG 
oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.651 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.651 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.651 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.652 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.652 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.652 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.652 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.653 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.653 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.653 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.653 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.654 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.default_domain_id = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.654 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.654 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.655 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.655 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.655 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.655 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.656 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.656 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.656 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.656 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.657 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.657 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.657 229711 DEBUG oslo_service.service [None 
req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.658 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.658 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.658 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.658 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.659 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.659 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.659 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.659 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.660 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.660 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.660 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.661 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 
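The oslo_limit.* block here carries ordinary keystoneauth1 credentials (auth_type = password, the keystone-internal auth_url, system_scope = all, user_domain_name = Default, plus the nova username on the following record), which nova uses to query Keystone's unified limits. An equivalent keystoneauth1 session, sketched for illustration with a placeholder password:

    # Illustrative keystoneauth1 session matching the [oslo_limit] auth values
    # logged here; the password literal is a placeholder.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url='http://keystone-internal.openstack.svc:5000',
        username='nova',
        password='REPLACE_ME',
        user_domain_name='Default',
        system_scope='all',   # request a system-scoped token (oslo_limit.system_scope = all)
    )
    sess = session.Session(auth=auth)
    # sess.get_token() would authenticate; oslo.limit builds its keystone
    # connection from settings like these.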
2025-11-23 09:30:01.661 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.661 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.661 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.662 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.662 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.663 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.663 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.664 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.664 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.664 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.665 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.665 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.665 229711 DEBUG oslo_service.service [None 
req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.666 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.666 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.666 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.666 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.667 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.667 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.667 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.668 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.668 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.668 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.669 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.669 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] 
os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.669 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.669 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.670 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.670 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.670 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.671 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.671 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.671 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.672 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.672 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.672 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.672 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost 
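The *_privileged and privsep_osbrick groups above (and the nova_sys_admin group just below) list the Linux capabilities each oslo.privsep daemon retains as plain integers. Decoded against linux/capability.h, 12 is CAP_NET_ADMIN, 1 is CAP_DAC_OVERRIDE and 21 is CAP_SYS_ADMIN; a tiny illustrative decoder:

    # Decode the capability numbers shown in the privsep config groups.
    # Mapping follows linux/capability.h.
    CAP_NAMES = {
        0: 'CAP_CHOWN',
        1: 'CAP_DAC_OVERRIDE',
        2: 'CAP_DAC_READ_SEARCH',
        3: 'CAP_FOWNER',
        12: 'CAP_NET_ADMIN',
        21: 'CAP_SYS_ADMIN',
    }

    groups = {
        'vif_plug_linux_bridge_privileged': [12],
        'vif_plug_ovs_privileged': [12, 1],
        'privsep_osbrick': [21],
        'nova_sys_admin': [0, 1, 2, 3, 12, 21],
    }
    for name, caps in groups.items():
        print(name, '->', ', '.join(CAP_NAMES[c] for c in caps))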
nova_compute[229707]: 2025-11-23 09:30:01.673 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.673 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.673 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.674 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.674 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.674 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.675 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.675 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.675 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.675 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.676 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.676 229711 DEBUG oslo_service.service [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.678 229711 INFO nova.service [-] Starting compute node (version 
27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.691 229711 INFO nova.virt.node [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Determined node identity c90c5769-42ab-40e9-92fc-3d82b4e96052 from /var/lib/nova/compute_id#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.692 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.693 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.694 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.694 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.705 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.708 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.709 229711 INFO nova.virt.libvirt.driver [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.716 229711 INFO nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Libvirt host capabilities
Nov 23 04:30:01 localhost nova_compute[229707]: [host capabilities XML, markup stripped in this capture; recoverable values: host UUID df69e9ed-ec8d-43d9-8710-8ff360287019, arch x86_64, host CPU model EPYC-Rome-v4 (vendor AMD), migration URI transports tcp and rdma, 16116612 KiB of memory (4029153 x 4 KiB pages), security models selinux (labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107), and hvm guest support for 32-bit and 64-bit x86 via /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (alias pc) and the pc-q35-rhel* series from pc-q35-rhel7.6.0 up to pc-q35-rhel9.8.0 (alias q35)]#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.719 229711 DEBUG nova.virt.libvirt.volume.mount [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.724 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.728 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 23 04:30:01 localhost nova_compute[229707]: [domain capabilities XML, markup stripped in this capture; recoverable values: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-q35-rhel9.8.0, arch i686, firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd (types rom and pflash, readonly yes/no, secure no), host-model CPU EPYC-Rome (vendor AMD), and a list of selectable custom CPU models: 486, 486-v1, Broadwell with its -IBRS/-noTSX/-noTSX-IBRS and -v1 through -v4 variants, Cascadelake-Server with -noTSX and -v1 through -v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton and -v1 through -v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, with the list continuing in the following records]
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-Rome Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-Rome-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-Rome-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-Rome-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-Rome-v4 Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-v1 Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-v2 Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: GraniteRapids Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 
localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: GraniteRapids-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: GraniteRapids-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Haswell Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Haswell-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Haswell-noTSX Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Haswell-noTSX-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 
localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Haswell-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Haswell-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Haswell-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Haswell-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-noTSX Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 
23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v5 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost python3.9[229833]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None 
http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v6 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v7 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 
04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: IvyBridge Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: IvyBridge-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: IvyBridge-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: IvyBridge-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: KnightsMill Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: KnightsMill-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nehalem Nov 23 04:30:01 localhost nova_compute[229707]: Nehalem-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nehalem-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nehalem-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G1 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G1-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G2 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G2-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G3 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G3-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G4-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G5 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G5-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Penryn Nov 23 04:30:01 localhost nova_compute[229707]: Penryn-v1 Nov 23 04:30:01 localhost nova_compute[229707]: SandyBridge Nov 23 04:30:01 localhost nova_compute[229707]: SandyBridge-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: SandyBridge-v1 Nov 23 04:30:01 localhost nova_compute[229707]: SandyBridge-v2 Nov 23 04:30:01 localhost nova_compute[229707]: SapphireRapids Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: SapphireRapids-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: SapphireRapids-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 
04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: SapphireRapids-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: SierraForest Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: SierraForest-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-noTSX-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-noTSX-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 
Nov 23 04:30:01 localhost nova_compute[229707]: [libvirt domainCapabilities dump continued; XML markup not preserved in the log, remaining text values only:]
Nov 23 04:30:01 localhost nova_compute[229707]: CPU models (cont.): Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Nov 23 04:30:01 localhost nova_compute[229707]: memory backing: file, anonymous, memfd; disk devices: disk, cdrom, floppy, lun; disk buses: fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional
Nov 23 04:30:01 localhost nova_compute[229707]: graphics: vnc, egl-headless, dbus; hostdev: subsystem; default, mandatory, requisite, optional; usb, pci, scsi
Nov 23 04:30:01 localhost nova_compute[229707]: rng models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin; filesystem drivers: path, handle, virtiofs; tpm models: tpm-tis, tpm-crb; tpm backends: emulator, external; version 2.0; redirdev: usb; channel: pty, unix; crypto: qemu, builtin; interface: default, passt; panic models: isa, hyperv
Nov 23 04:30:01 localhost nova_compute[229707]: char device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Nov 23 04:30:01 localhost nova_compute[229707]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; other values: 4095, on, off, off, Linux KVM Hv; launch security: tdx
Nov 23 04:30:01 localhost nova_compute[229707]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.736 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 04:30:01 localhost nova_compute[229707]: [libvirt domainCapabilities dump; XML markup not preserved in the log, remaining text values only:]
Nov 23 04:30:01 localhost nova_compute[229707]: emulator path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: i686
Nov 23 04:30:01 localhost nova_compute[229707]: os loader: /usr/share/OVMF/OVMF_CODE.secboot.fd (type: rom, pflash; readonly: yes, no; secure: no); on, off; on, off
Nov 23 04:30:01 localhost nova_compute[229707]: host CPU model: EPYC-Rome, vendor AMD
Nov 23 04:30:01 localhost nova_compute[229707]: CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1
nova_compute[229707]: Skylake-Client Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-noTSX-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 
23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-noTSX-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 
23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v5 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Snowridge Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Snowridge-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Snowridge-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Snowridge-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Snowridge-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Westmere Nov 23 04:30:01 localhost nova_compute[229707]: Westmere-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Westmere-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Westmere-v2 Nov 23 04:30:01 localhost nova_compute[229707]: athlon Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 
localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: athlon-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: core2duo Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: core2duo-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: coreduo Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: coreduo-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: kvm32 Nov 23 04:30:01 localhost nova_compute[229707]: kvm32-v1 Nov 23 04:30:01 localhost nova_compute[229707]: kvm64 Nov 23 04:30:01 localhost nova_compute[229707]: kvm64-v1 Nov 23 04:30:01 localhost nova_compute[229707]: n270 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: n270-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: pentium Nov 23 04:30:01 localhost nova_compute[229707]: pentium-v1 Nov 23 04:30:01 localhost nova_compute[229707]: pentium2 Nov 23 04:30:01 localhost nova_compute[229707]: pentium2-v1 Nov 23 04:30:01 localhost nova_compute[229707]: pentium3 Nov 23 04:30:01 localhost nova_compute[229707]: pentium3-v1 Nov 23 04:30:01 localhost nova_compute[229707]: phenom Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: phenom-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: qemu32 Nov 23 04:30:01 localhost nova_compute[229707]: qemu32-v1 Nov 23 04:30:01 localhost nova_compute[229707]: qemu64 Nov 23 04:30:01 localhost nova_compute[229707]: qemu64-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: file Nov 23 04:30:01 localhost nova_compute[229707]: anonymous Nov 23 04:30:01 localhost nova_compute[229707]: memfd Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: disk Nov 23 04:30:01 localhost 
nova_compute[229707]: cdrom Nov 23 04:30:01 localhost nova_compute[229707]: floppy Nov 23 04:30:01 localhost nova_compute[229707]: lun Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: ide Nov 23 04:30:01 localhost nova_compute[229707]: fdc Nov 23 04:30:01 localhost nova_compute[229707]: scsi Nov 23 04:30:01 localhost nova_compute[229707]: virtio Nov 23 04:30:01 localhost nova_compute[229707]: usb Nov 23 04:30:01 localhost nova_compute[229707]: sata Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: virtio Nov 23 04:30:01 localhost nova_compute[229707]: virtio-transitional Nov 23 04:30:01 localhost nova_compute[229707]: virtio-non-transitional Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: vnc Nov 23 04:30:01 localhost nova_compute[229707]: egl-headless Nov 23 04:30:01 localhost nova_compute[229707]: dbus Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: subsystem Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: default Nov 23 04:30:01 localhost nova_compute[229707]: mandatory Nov 23 04:30:01 localhost nova_compute[229707]: requisite Nov 23 04:30:01 localhost nova_compute[229707]: optional Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: usb Nov 23 04:30:01 localhost nova_compute[229707]: pci Nov 23 04:30:01 localhost nova_compute[229707]: scsi Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: virtio Nov 23 04:30:01 localhost nova_compute[229707]: virtio-transitional Nov 23 04:30:01 localhost nova_compute[229707]: virtio-non-transitional Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: random Nov 23 04:30:01 localhost nova_compute[229707]: egd Nov 23 04:30:01 localhost nova_compute[229707]: builtin Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: path Nov 23 04:30:01 localhost nova_compute[229707]: handle Nov 23 04:30:01 localhost nova_compute[229707]: virtiofs Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: tpm-tis Nov 23 04:30:01 localhost nova_compute[229707]: tpm-crb Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: emulator Nov 23 04:30:01 localhost nova_compute[229707]: external Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: 2.0 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: usb Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: pty Nov 23 04:30:01 localhost nova_compute[229707]: unix Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: qemu Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: builtin Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: default Nov 23 04:30:01 localhost nova_compute[229707]: passt Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: isa Nov 23 04:30:01 localhost nova_compute[229707]: hyperv Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: null Nov 23 04:30:01 localhost nova_compute[229707]: vc Nov 23 04:30:01 localhost nova_compute[229707]: pty Nov 23 04:30:01 localhost nova_compute[229707]: dev Nov 23 04:30:01 localhost nova_compute[229707]: file Nov 23 04:30:01 localhost nova_compute[229707]: pipe Nov 23 04:30:01 localhost nova_compute[229707]: stdio Nov 23 04:30:01 localhost nova_compute[229707]: udp Nov 23 04:30:01 localhost nova_compute[229707]: tcp Nov 23 04:30:01 localhost nova_compute[229707]: unix Nov 23 04:30:01 localhost nova_compute[229707]: qemu-vdagent Nov 23 04:30:01 localhost nova_compute[229707]: dbus Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: relaxed Nov 23 04:30:01 localhost nova_compute[229707]: vapic Nov 23 04:30:01 
localhost nova_compute[229707]: spinlocks Nov 23 04:30:01 localhost nova_compute[229707]: vpindex Nov 23 04:30:01 localhost nova_compute[229707]: runtime Nov 23 04:30:01 localhost nova_compute[229707]: synic Nov 23 04:30:01 localhost nova_compute[229707]: stimer Nov 23 04:30:01 localhost nova_compute[229707]: reset Nov 23 04:30:01 localhost nova_compute[229707]: vendor_id Nov 23 04:30:01 localhost nova_compute[229707]: frequencies Nov 23 04:30:01 localhost nova_compute[229707]: reenlightenment Nov 23 04:30:01 localhost nova_compute[229707]: tlbflush Nov 23 04:30:01 localhost nova_compute[229707]: ipi Nov 23 04:30:01 localhost nova_compute[229707]: avic Nov 23 04:30:01 localhost nova_compute[229707]: emsr_bitmap Nov 23 04:30:01 localhost nova_compute[229707]: xmm_input Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: 4095 Nov 23 04:30:01 localhost nova_compute[229707]: on Nov 23 04:30:01 localhost nova_compute[229707]: off Nov 23 04:30:01 localhost nova_compute[229707]: off Nov 23 04:30:01 localhost nova_compute[229707]: Linux KVM Hv Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: tdx Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.766 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.772 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: /usr/libexec/qemu-kvm Nov 23 04:30:01 localhost nova_compute[229707]: kvm Nov 23 04:30:01 localhost nova_compute[229707]: pc-q35-rhel9.8.0 Nov 23 04:30:01 localhost nova_compute[229707]: x86_64 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: efi Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Nov 23 04:30:01 localhost nova_compute[229707]: /usr/share/edk2/ovmf/OVMF_CODE.fd Nov 23 04:30:01 localhost nova_compute[229707]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Nov 23 04:30:01 localhost nova_compute[229707]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: rom Nov 23 04:30:01 localhost nova_compute[229707]: pflash Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
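The DEBUG entry above shows nova-compute asking libvirt for the domain-capabilities document of each supported machine type; the surrounding values (CPU model names, disk/graphics/TPM options, firmware paths) are the text content of that XML. As a minimal sketch of the same query, assuming the libvirt-python bindings, a local qemu:///system connection and the 'kvm' virt type (the URI and the helper name below are illustrative, not nova's own code):

    import xml.etree.ElementTree as ET

    import libvirt


    def custom_cpu_models(conn, arch="x86_64", machine="q35"):
        # Fetch the domainCapabilities XML for one arch/machine combination,
        # roughly what nova's _get_domain_capabilities logs above.
        caps_xml = conn.getDomainCapabilities(None, arch, machine, "kvm", 0)
        root = ET.fromstring(caps_xml)
        # Named CPU models sit under <cpu><mode name='custom'><model>...</model>.
        return [m.text for m in root.findall("./cpu/mode[@name='custom']/model")]


    if __name__ == "__main__":
        conn = libvirt.open("qemu:///system")  # assumed local connection URI
        try:
            for machine in ("q35", "pc"):
                models = custom_cpu_models(conn, machine=machine)
                print(machine, len(models), "CPU models, e.g.", ", ".join(models[:3]))
        finally:
            conn.close()

The exact model set returned depends on the installed qemu-kvm and libvirt versions; the names shown in this log are what the host reported at the time.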
Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.772 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 04:30:01 localhost nova_compute[229707]: /usr/libexec/qemu-kvm
Nov 23 04:30:01 localhost nova_compute[229707]: kvm
Nov 23 04:30:01 localhost nova_compute[229707]: pc-q35-rhel9.8.0
Nov 23 04:30:01 localhost nova_compute[229707]: x86_64
Nov 23 04:30:01 localhost nova_compute[229707]: efi
Nov 23 04:30:01 localhost nova_compute[229707]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd /usr/share/edk2/ovmf/OVMF_CODE.fd /usr/share/edk2/ovmf/OVMF.amdsev.fd /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd
Nov 23 04:30:01 localhost nova_compute[229707]: rom pflash
Nov 23 04:30:01 localhost nova_compute[229707]: yes no
Nov 23 04:30:01 localhost nova_compute[229707]: yes no
Nov 23 04:30:01 localhost nova_compute[229707]: on off
Nov 23 04:30:01 localhost nova_compute[229707]: on off
Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-Rome AMD
Nov 23 04:30:01 localhost nova_compute[229707]: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids GraniteRapids-v1 GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Nov 23 04:30:01 localhost
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v5 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v6 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 
localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Icelake-Server-v7 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: IvyBridge Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: IvyBridge-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: IvyBridge-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: IvyBridge-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: KnightsMill Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: KnightsMill-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nehalem Nov 23 04:30:01 localhost nova_compute[229707]: Nehalem-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nehalem-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nehalem-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G1 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G1-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G2 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G2-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G3 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G3-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G4-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G5 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Opteron_G5-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Penryn Nov 23 04:30:01 localhost nova_compute[229707]: Penryn-v1 Nov 23 04:30:01 localhost nova_compute[229707]: SandyBridge Nov 23 04:30:01 localhost nova_compute[229707]: SandyBridge-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: SandyBridge-v1 Nov 23 04:30:01 localhost nova_compute[229707]: SandyBridge-v2 Nov 23 04:30:01 localhost nova_compute[229707]: SapphireRapids Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: SapphireRapids-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: SapphireRapids-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: SapphireRapids-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 
04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: SierraForest Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: SierraForest-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-noTSX-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Client-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-noTSX-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 
04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Skylake-Server-v5 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 
localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Snowridge Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Snowridge-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Snowridge-v2 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Snowridge-v3 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Snowridge-v4 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Westmere Nov 23 04:30:01 localhost nova_compute[229707]: Westmere-IBRS Nov 23 04:30:01 localhost nova_compute[229707]: Westmere-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Westmere-v2 Nov 23 04:30:01 localhost nova_compute[229707]: athlon Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: athlon-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: core2duo Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: core2duo-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: coreduo Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: coreduo-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: kvm32 Nov 23 04:30:01 localhost nova_compute[229707]: kvm32-v1 Nov 23 04:30:01 localhost nova_compute[229707]: kvm64 Nov 23 04:30:01 localhost nova_compute[229707]: kvm64-v1 Nov 23 04:30:01 localhost nova_compute[229707]: n270 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: n270-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: pentium Nov 23 04:30:01 localhost nova_compute[229707]: pentium-v1 Nov 23 04:30:01 localhost nova_compute[229707]: pentium2 Nov 23 04:30:01 localhost nova_compute[229707]: pentium2-v1 Nov 23 04:30:01 localhost nova_compute[229707]: pentium3 Nov 23 04:30:01 localhost nova_compute[229707]: pentium3-v1 Nov 23 04:30:01 localhost nova_compute[229707]: phenom Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: phenom-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: qemu32 Nov 23 04:30:01 localhost nova_compute[229707]: qemu32-v1 Nov 23 04:30:01 localhost nova_compute[229707]: qemu64 Nov 23 04:30:01 localhost nova_compute[229707]: qemu64-v1 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: file Nov 23 04:30:01 localhost nova_compute[229707]: anonymous Nov 23 04:30:01 localhost nova_compute[229707]: memfd Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: disk Nov 23 04:30:01 localhost nova_compute[229707]: cdrom Nov 23 04:30:01 localhost nova_compute[229707]: floppy Nov 23 04:30:01 localhost nova_compute[229707]: lun Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: fdc Nov 23 04:30:01 localhost nova_compute[229707]: scsi Nov 23 04:30:01 localhost nova_compute[229707]: virtio Nov 23 04:30:01 localhost nova_compute[229707]: usb Nov 23 04:30:01 localhost nova_compute[229707]: sata Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 
04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: virtio Nov 23 04:30:01 localhost nova_compute[229707]: virtio-transitional Nov 23 04:30:01 localhost nova_compute[229707]: virtio-non-transitional Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: vnc Nov 23 04:30:01 localhost nova_compute[229707]: egl-headless Nov 23 04:30:01 localhost nova_compute[229707]: dbus Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: subsystem Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: default Nov 23 04:30:01 localhost nova_compute[229707]: mandatory Nov 23 04:30:01 localhost nova_compute[229707]: requisite Nov 23 04:30:01 localhost nova_compute[229707]: optional Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: usb Nov 23 04:30:01 localhost nova_compute[229707]: pci Nov 23 04:30:01 localhost nova_compute[229707]: scsi Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: virtio Nov 23 04:30:01 localhost nova_compute[229707]: virtio-transitional Nov 23 04:30:01 localhost nova_compute[229707]: virtio-non-transitional Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: random Nov 23 04:30:01 localhost nova_compute[229707]: egd Nov 23 04:30:01 localhost nova_compute[229707]: builtin Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: path Nov 23 04:30:01 localhost nova_compute[229707]: handle Nov 23 04:30:01 localhost nova_compute[229707]: virtiofs Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: tpm-tis Nov 23 04:30:01 localhost nova_compute[229707]: tpm-crb Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: emulator Nov 23 04:30:01 localhost nova_compute[229707]: external Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: 2.0 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: usb Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: pty Nov 23 04:30:01 localhost nova_compute[229707]: unix Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: qemu Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: builtin Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: default Nov 23 04:30:01 localhost nova_compute[229707]: passt Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: isa Nov 23 04:30:01 localhost nova_compute[229707]: hyperv Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: null Nov 23 04:30:01 localhost nova_compute[229707]: vc Nov 23 04:30:01 localhost nova_compute[229707]: pty Nov 23 04:30:01 localhost nova_compute[229707]: dev Nov 23 04:30:01 localhost nova_compute[229707]: file Nov 23 04:30:01 localhost nova_compute[229707]: pipe Nov 23 04:30:01 localhost nova_compute[229707]: stdio Nov 23 04:30:01 localhost nova_compute[229707]: udp Nov 23 04:30:01 localhost nova_compute[229707]: tcp Nov 23 04:30:01 localhost nova_compute[229707]: unix Nov 23 04:30:01 localhost nova_compute[229707]: qemu-vdagent Nov 23 04:30:01 localhost nova_compute[229707]: dbus Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: relaxed Nov 23 04:30:01 localhost nova_compute[229707]: vapic Nov 23 04:30:01 localhost nova_compute[229707]: spinlocks Nov 23 04:30:01 localhost nova_compute[229707]: vpindex Nov 23 04:30:01 localhost nova_compute[229707]: runtime Nov 23 04:30:01 localhost nova_compute[229707]: synic Nov 23 04:30:01 localhost nova_compute[229707]: stimer Nov 23 04:30:01 localhost nova_compute[229707]: reset Nov 23 04:30:01 localhost nova_compute[229707]: vendor_id Nov 23 04:30:01 localhost nova_compute[229707]: frequencies Nov 23 04:30:01 localhost nova_compute[229707]: reenlightenment Nov 23 04:30:01 localhost nova_compute[229707]: tlbflush Nov 23 04:30:01 localhost nova_compute[229707]: ipi 
Nov 23 04:30:01 localhost nova_compute[229707]: avic Nov 23 04:30:01 localhost nova_compute[229707]: emsr_bitmap Nov 23 04:30:01 localhost nova_compute[229707]: xmm_input Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: 4095 Nov 23 04:30:01 localhost nova_compute[229707]: on Nov 23 04:30:01 localhost nova_compute[229707]: off Nov 23 04:30:01 localhost nova_compute[229707]: off Nov 23 04:30:01 localhost nova_compute[229707]: Linux KVM Hv Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: tdx Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.820 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: /usr/libexec/qemu-kvm Nov 23 04:30:01 localhost nova_compute[229707]: kvm Nov 23 04:30:01 localhost nova_compute[229707]: pc-i440fx-rhel7.6.0 Nov 23 04:30:01 localhost nova_compute[229707]: x86_64 Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: /usr/share/OVMF/OVMF_CODE.secboot.fd Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: rom Nov 23 04:30:01 localhost nova_compute[229707]: pflash Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: yes Nov 23 04:30:01 localhost nova_compute[229707]: no Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: no Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: on Nov 23 04:30:01 localhost nova_compute[229707]: off Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: on Nov 23 04:30:01 localhost nova_compute[229707]: off Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: EPYC-Rome Nov 23 04:30:01 localhost nova_compute[229707]: AMD Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost 
Nov 23 04:30:01 localhost nova_compute[229707]: hypervisor capability listing (values only; surrounding element markup not preserved in this capture):
Nov 23 04:30:01 localhost nova_compute[229707]: CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Nov 23 04:30:01 localhost nova_compute[229707]: memory backing source types: file, anonymous, memfd
Nov 23 04:30:01 localhost nova_compute[229707]: disk device types: disk, cdrom, floppy, lun; disk buses: ide, fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional
Nov 23 04:30:01 localhost nova_compute[229707]: graphics types: vnc, egl-headless, dbus
Nov 23 04:30:01 localhost nova_compute[229707]: hostdev mode: subsystem; startup policy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Nov 23 04:30:01 localhost nova_compute[229707]: rng models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin
Nov 23 04:30:01 localhost nova_compute[229707]: filesystem driver types: path, handle, virtiofs
Nov 23 04:30:01 localhost nova_compute[229707]: tpm models: tpm-tis, tpm-crb; tpm backends: emulator, external; backend version: 2.0
Nov 23 04:30:01 localhost nova_compute[229707]: redirdev bus: usb
Nov 23 04:30:01 localhost nova_compute[229707]: channel types: pty, unix
Nov 23 04:30:01 localhost nova_compute[229707]: further reported values: qemu, builtin, default, passt, isa, hyperv
Nov 23 04:30:01
localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: null Nov 23 04:30:01 localhost nova_compute[229707]: vc Nov 23 04:30:01 localhost nova_compute[229707]: pty Nov 23 04:30:01 localhost nova_compute[229707]: dev Nov 23 04:30:01 localhost nova_compute[229707]: file Nov 23 04:30:01 localhost nova_compute[229707]: pipe Nov 23 04:30:01 localhost nova_compute[229707]: stdio Nov 23 04:30:01 localhost nova_compute[229707]: udp Nov 23 04:30:01 localhost nova_compute[229707]: tcp Nov 23 04:30:01 localhost nova_compute[229707]: unix Nov 23 04:30:01 localhost nova_compute[229707]: qemu-vdagent Nov 23 04:30:01 localhost nova_compute[229707]: dbus Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: relaxed Nov 23 04:30:01 localhost nova_compute[229707]: vapic Nov 23 04:30:01 localhost nova_compute[229707]: spinlocks Nov 23 04:30:01 localhost nova_compute[229707]: vpindex Nov 23 04:30:01 localhost nova_compute[229707]: runtime Nov 23 04:30:01 localhost nova_compute[229707]: synic Nov 23 04:30:01 localhost nova_compute[229707]: stimer Nov 23 04:30:01 localhost nova_compute[229707]: reset Nov 23 04:30:01 localhost nova_compute[229707]: vendor_id Nov 23 04:30:01 localhost nova_compute[229707]: frequencies Nov 23 04:30:01 localhost nova_compute[229707]: reenlightenment Nov 23 04:30:01 localhost nova_compute[229707]: tlbflush Nov 23 04:30:01 localhost nova_compute[229707]: ipi Nov 23 04:30:01 localhost nova_compute[229707]: avic Nov 23 04:30:01 localhost nova_compute[229707]: emsr_bitmap Nov 23 04:30:01 localhost nova_compute[229707]: xmm_input Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: 4095 Nov 23 04:30:01 localhost nova_compute[229707]: on Nov 23 04:30:01 localhost nova_compute[229707]: off Nov 23 04:30:01 localhost nova_compute[229707]: off Nov 23 04:30:01 localhost nova_compute[229707]: Linux KVM Hv Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: tdx Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: Nov 23 04:30:01 localhost nova_compute[229707]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.864 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Checking secure boot support for host arch 
(x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.865 229711 INFO nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Secure Boot support detected#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.867 229711 INFO nova.virt.libvirt.driver [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.867 229711 INFO nova.virt.libvirt.driver [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.877 229711 DEBUG nova.virt.libvirt.driver [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.892 229711 INFO nova.virt.node [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Determined node identity c90c5769-42ab-40e9-92fc-3d82b4e96052 from /var/lib/nova/compute_id#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.907 229711 DEBUG nova.compute.manager [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Verified node c90c5769-42ab-40e9-92fc-3d82b4e96052 matches my host np0005532584.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Nov 23 04:30:01 localhost nova_compute[229707]: 2025-11-23 09:30:01.925 229711 INFO nova.compute.manager [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Nov 23 04:30:01 localhost systemd[1]: Started libpod-conmon-da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424.scope. Nov 23 04:30:01 localhost systemd[1]: Started libcrun container. 
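The _get_domain_capabilities and supports_secure_boot debug entries above come from Nova asking libvirt for its domain-capabilities document; the "Secure Boot support detected" INFO line is the positive result of that check. A minimal sketch of such a query, assuming the libvirt-python bindings and the standard domainCapabilities XML layout (this is not Nova's actual code, and the element paths are assumptions):

    # Sketch only: query libvirt domain capabilities the way the debug entries
    # above suggest. Element paths follow the public domainCapabilities schema.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")
    caps = ET.fromstring(conn.getDomainCapabilities(None, "x86_64", None, "kvm"))

    # CPU models the hypervisor can expose (the Skylake-Server-*, Snowridge, ... values above)
    models = [m.text for m in caps.findall("./cpu/mode[@name='custom']/model")]

    # Secure Boot availability is advertised via the firmware loader's 'secure' enum
    secure_values = [v.text for v in caps.findall("./os/loader/enum[@name='secure']/value")]

    print(len(models), "CPU models; loader secure enum:", secure_values)
    conn.close()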
Nov 23 04:30:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Nov 23 04:30:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 04:30:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Nov 23 04:30:02 localhost podman[229878]: 2025-11-23 09:30:02.013794478 +0000 UTC m=+0.118771028 container init da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 04:30:02 localhost podman[229878]: 2025-11-23 09:30:02.024743902 +0000 UTC m=+0.129720462 container start da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 23 04:30:02 localhost python3.9[229833]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start 
nova_compute_init Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.054 229711 DEBUG oslo_concurrency.lockutils [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.055 229711 DEBUG oslo_concurrency.lockutils [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.055 229711 DEBUG oslo_concurrency.lockutils [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.055 229711 DEBUG nova.compute.resource_tracker [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.056 229711 DEBUG oslo_concurrency.processutils [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Applying nova statedir ownership Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Nov 23 04:30:02 localhost 
nova_compute_init[229898]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/4143dbbec5b08621aa3c8eb364f8a7d3e97604e18b7ed41c4bab0da11ed561fd Nov 23 04:30:02 localhost nova_compute_init[229898]: INFO:nova_statedir:Nova statedir ownership complete Nov 23 04:30:02 localhost systemd[1]: libpod-da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424.scope: Deactivated successfully. 
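The nova_compute_init container above walks /var/lib/nova, re-owning paths to 42436:42436 and labeling directories with container_file_t, while skipping the path named in NOVA_STATEDIR_OWNERSHIP_SKIP. A minimal sketch of that pass, assuming the libselinux Python bindings; the real /sbin/nova_statedir_ownership.py mounted into the container differs in detail:

    # Sketch of the ownership/labeling walk described by the INFO:nova_statedir
    # messages above. The uid/gid, SELinux context and skip path come from the log;
    # everything else is illustrative.
    import os
    import selinux  # libselinux Python bindings (assumed available)

    TARGET_UID = TARGET_GID = 42436
    CONTEXT = "system_u:object_r:container_file_t:s0"
    SKIP = {os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "/var/lib/nova/compute_id")}

    for dirpath, dirnames, filenames in os.walk("/var/lib/nova"):
        selinux.lsetfilecon(dirpath, CONTEXT)             # "Setting selinux context of ..."
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path in SKIP:
                continue
            st = os.lstat(path)                           # "Checking uid: ... gid: ... path: ..."
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                os.lchown(path, TARGET_UID, TARGET_GID)   # "Changing ownership of ..."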
Nov 23 04:30:02 localhost podman[229911]: 2025-11-23 09:30:02.139605348 +0000 UTC m=+0.045367719 container died da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}) Nov 23 04:30:02 localhost podman[229911]: 2025-11-23 09:30:02.169676759 +0000 UTC m=+0.075439090 container cleanup da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:30:02 localhost systemd[1]: libpod-conmon-da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424.scope: Deactivated successfully. Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.527 229711 DEBUG oslo_concurrency.processutils [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.696 229711 WARNING nova.virt.libvirt.driver [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.698 229711 DEBUG nova.compute.resource_tracker [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=13595MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.698 229711 DEBUG oslo_concurrency.lockutils [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.698 229711 DEBUG oslo_concurrency.lockutils [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.800 229711 DEBUG nova.compute.resource_tracker [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.801 229711 DEBUG 
nova.compute.resource_tracker [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.861 229711 DEBUG nova.scheduler.client.report [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 04:30:02 localhost systemd[1]: session-53.scope: Deactivated successfully. Nov 23 04:30:02 localhost systemd[1]: session-53.scope: Consumed 2min 13.615s CPU time. Nov 23 04:30:02 localhost systemd-logind[760]: Session 53 logged out. Waiting for processes to exit. Nov 23 04:30:02 localhost systemd-logind[760]: Removed session 53. Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.890 229711 DEBUG nova.scheduler.client.report [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.891 229711 DEBUG nova.compute.provider_tree [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.907 229711 DEBUG nova.scheduler.client.report [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.933 229711 DEBUG nova.scheduler.client.report [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: 
HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 23 04:30:02 localhost systemd[1]: var-lib-containers-storage-overlay-67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd-merged.mount: Deactivated successfully. Nov 23 04:30:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424-userdata-shm.mount: Deactivated successfully. 
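The inventory dictionary the resource tracker logs above is the per-resource-class payload Nova keeps in its ProviderTree and reports to Placement for provider c90c5769-42ab-40e9-92fc-3d82b4e96052. A small sketch using the values copied from the log, plus the capacity arithmetic Placement applies when checking allocations:

    # Values copied from the "Updating ProviderTree inventory" entry above.
    INVENTORY = {
        "VCPU":      {"total": 8,     "reserved": 0,   "min_unit": 1, "max_unit": 8,
                      "step_size": 1, "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "min_unit": 1, "max_unit": 15738,
                      "step_size": 1, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "min_unit": 1, "max_unit": 41,
                      "step_size": 1, "allocation_ratio": 1.0},
    }

    def schedulable(rc):
        """Capacity Placement will allow allocations against for one resource class."""
        inv = INVENTORY[rc]
        return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

    print(schedulable("VCPU"))       # 8 vCPUs * 16.0 overcommit = 128.0
    print(schedulable("MEMORY_MB"))  # (15738 - 512) * 1.0 = 15226.0 MB
    print(schedulable("DISK_GB"))    # 41 * 1.0 = 41.0 GB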
Nov 23 04:30:02 localhost nova_compute[229707]: 2025-11-23 09:30:02.948 229711 DEBUG oslo_concurrency.processutils [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.406 229711 DEBUG oslo_concurrency.processutils [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.412 229711 DEBUG nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Nov 23 04:30:03 localhost nova_compute[229707]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.413 229711 INFO nova.virt.libvirt.host [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] kernel doesn't support AMD SEV#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.414 229711 DEBUG nova.compute.provider_tree [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.415 229711 DEBUG nova.virt.libvirt.driver [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.437 229711 DEBUG nova.scheduler.client.report [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.478 229711 DEBUG nova.compute.resource_tracker [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.479 229711 DEBUG oslo_concurrency.lockutils [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.479 229711 DEBUG nova.service [None 
req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.504 229711 DEBUG nova.service [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Nov 23 04:30:03 localhost nova_compute[229707]: 2025-11-23 09:30:03.505 229711 DEBUG nova.servicegroup.drivers.db [None req-eb88f66d-c499-4be6-8fa8-34deea0302ed - - - - - -] DB_Driver: join new ServiceGroup member np0005532584.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Nov 23 04:30:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45843 DF PROTO=TCP SPT=53726 DPT=9100 SEQ=2810341873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A560F0000000001030307) Nov 23 04:30:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39058 DF PROTO=TCP SPT=45604 DPT=9100 SEQ=610399048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A61900000000001030307) Nov 23 04:30:08 localhost sshd[229998]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:30:08 localhost systemd-logind[760]: New session 55 of user zuul. Nov 23 04:30:08 localhost systemd[1]: Started Session 55 of User zuul. Nov 23 04:30:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:30:09.712 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:30:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:30:09.713 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:30:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:30:09.713 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:30:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3109 DF PROTO=TCP SPT=51182 DPT=9105 SEQ=623584040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A6B500000000001030307) Nov 23 04:30:09 localhost python3.9[230109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:30:12 localhost python3.9[230223]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:30:12 localhost systemd[1]: Reloading. 
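The kernel "DROPPING:" entries interleaved above are netfilter LOG output with a "DROPPING: " prefix for packets to ports such as 9100/9105; the rule that produces them is not part of this log, so only the printed fields can be read off. A small sketch that extracts the useful fields from one such line:

    # Sketch: pull addressing fields out of a kernel "DROPPING:" entry like those above.
    import re

    LINE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e "
            "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 "
            "TTL=62 ID=45843 DF PROTO=TCP SPT=53726 DPT=9100 SEQ=2810341873 ACK=0 "
            "WINDOW=32640 RES=0x00 SYN URGP=0")

    fields = dict(re.findall(r"(\w+)=(\S+)", LINE))
    print(fields["SRC"], "->", fields["DST"], fields["PROTO"], "dport", fields["DPT"])
    # 192.168.122.10 -> 192.168.122.106 TCP dport 9100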
Nov 23 04:30:12 localhost systemd-sysv-generator[230249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:30:12 localhost systemd-rc-local-generator[230244]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:13 localhost python3.9[230366]: ansible-ansible.builtin.service_facts Invoked Nov 23 04:30:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3110 DF PROTO=TCP SPT=51182 DPT=9105 SEQ=623584040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A7B0F0000000001030307) Nov 23 04:30:13 localhost network[230383]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:30:13 localhost network[230384]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:30:13 localhost network[230385]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:30:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:30:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7540 DF PROTO=TCP SPT=42338 DPT=9882 SEQ=1441346858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A87D30000000001030307) Nov 23 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:30:17 localhost systemd[1]: tmp-crun.MJa8Xx.mount: Deactivated successfully. 
Nov 23 04:30:17 localhost podman[230528]: 2025-11-23 09:30:17.945433918 +0000 UTC m=+0.126026171 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Nov 23 04:30:17 localhost podman[230529]: 2025-11-23 09:30:17.91409077 +0000 UTC m=+0.094707284 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:30:17 localhost podman[230528]: 2025-11-23 09:30:17.979370078 +0000 UTC m=+0.159962341 container exec_died 
219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 23 04:30:17 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:30:17 localhost podman[230529]: 2025-11-23 09:30:17.999422449 +0000 UTC m=+0.180038993 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118) Nov 23 04:30:18 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
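The health_status=healthy / exec_died pairs above are systemd-triggered "podman healthcheck run <container>" invocations; each container's config_data mounts /var/lib/openstack/healthchecks/<name> at /openstack and declares /openstack/healthcheck as the test command. A sketch of driving the same check from Python, assuming only the podman CLI shown in the log:

    # Sketch: run the same health check systemd triggers above and interpret the
    # exit status. Only the "podman healthcheck run" CLI seen in the log is assumed.
    import subprocess

    def container_healthy(name_or_id: str) -> bool:
        # podman exits 0 when the container's configured test command
        # ("/openstack/healthcheck" for these containers) succeeds, non-zero otherwise.
        result = subprocess.run(["podman", "healthcheck", "run", name_or_id],
                                capture_output=True, text=True)
        return result.returncode == 0

    print(container_healthy("ovn_controller"))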
Nov 23 04:30:18 localhost python3.9[230663]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:30:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2858 DF PROTO=TCP SPT=60352 DPT=9102 SEQ=3253074721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A90B50000000001030307) Nov 23 04:30:19 localhost python3.9[230774]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:20 localhost systemd-journald[47422]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 76.6 (255 of 333 items), suggesting rotation. Nov 23 04:30:20 localhost systemd-journald[47422]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 23 04:30:20 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:30:20 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:30:20 localhost nova_compute[229707]: 2025-11-23 09:30:20.508 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:30:20 localhost nova_compute[229707]: 2025-11-23 09:30:20.526 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:30:20 localhost python3.9[230885]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:21 localhost python3.9[230995]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:30:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=45783 DF PROTO=TCP SPT=37190 DPT=9101 SEQ=2353462339 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7A9C0F0000000001030307) Nov 23 04:30:22 localhost python3.9[231105]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 23 04:30:23 localhost python3.9[231215]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:30:23 localhost systemd[1]: Reloading. Nov 23 04:30:23 localhost systemd-sysv-generator[231241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:30:23 localhost systemd-rc-local-generator[231237]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:30:23 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:23 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:23 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:23 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
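The certmonger step a few entries above (the ansible-ansible.legacy.command invocation, where #012 stands for newlines in the journal encoding) disables and masks certmonger only when it is active, and masks it only if no /etc/systemd/system/certmonger.service override exists. The same logic, rendered as a Python sketch around systemctl for readability; the unit and path names come straight from that entry:

    # Sketch of the certmonger cleanup the ansible.legacy.command entry above runs as shell.
    import os
    import subprocess

    def systemctl(*args) -> int:
        return subprocess.run(["systemctl", *args]).returncode

    if systemctl("is-active", "certmonger.service") == 0:
        systemctl("disable", "--now", "certmonger.service")
        if not os.path.isfile("/etc/systemd/system/certmonger.service"):
            systemctl("mask", "certmonger.service")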
Nov 23 04:30:23 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:23 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:23 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:23 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29993 DF PROTO=TCP SPT=46312 DPT=9101 SEQ=1910385524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7AA60F0000000001030307) Nov 23 04:30:25 localhost python3.9[231360]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:30:26 localhost python3.9[231471]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:30:27 localhost python3.9[231579]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:30:28 localhost python3.9[231689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. 
Nov 23 04:30:28 localhost podman[231776]: 2025-11-23 09:30:28.90943595 +0000 UTC m=+0.096903504 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3) Nov 23 04:30:28 localhost podman[231776]: 2025-11-23 09:30:28.923434741 +0000 UTC m=+0.110902345 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, 
org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:30:28 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:30:29 localhost python3.9[231775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890227.9806037-359-173349780910440/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=ef234178e200688a09f7b547c2a9ca52c8dbd7b5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:30:29 localhost python3.9[231905]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None Nov 23 04:30:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12888 DF PROTO=TCP SPT=57822 DPT=9100 SEQ=3983838359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7ABAFE0000000001030307) Nov 23 04:30:30 localhost python3.9[232015]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None Nov 23 04:30:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12889 DF PROTO=TCP SPT=57822 DPT=9100 SEQ=3983838359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7ABF0F0000000001030307) Nov 23 04:30:31 localhost python3.9[232126]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Nov 23 04:30:32 localhost python3.9[232242]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532584.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Nov 23 04:30:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6497 DF PROTO=TCP SPT=37314 DPT=9100 SEQ=3214441920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7ACA0F0000000001030307) Nov 23 04:30:34 localhost python3.9[232358]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Nov 23 04:30:35 localhost python3.9[232444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763890233.6304846-563-191831319154796/.source.conf _original_basename=ceilometer.conf follow=False checksum=950edd520595720a58ffe786d84e54d033109e91 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:35 localhost python3.9[232552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:36 localhost python3.9[232638]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763890235.2985632-563-222512929195968/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:36 localhost python3.9[232746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12891 DF PROTO=TCP SPT=57822 DPT=9100 SEQ=3983838359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7AD6D00000000001030307) Nov 23 04:30:37 localhost python3.9[232832]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763890236.3922718-563-271281509368616/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:38 localhost python3.9[232940]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:30:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17578 DF PROTO=TCP SPT=38260 DPT=9105 SEQ=2119295268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7AE08F0000000001030307) Nov 23 04:30:39 localhost python3.9[233048]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:30:40 localhost python3.9[233156]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:41 localhost python3.9[233242]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890240.1253788-740-134467218780357/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:41 localhost python3.9[233350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:42 localhost python3.9[233405]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:42 localhost python3.9[233513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:43 localhost python3.9[233599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890242.2121327-740-216662622385959/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:43 localhost python3.9[233707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17579 DF PROTO=TCP SPT=38260 DPT=9105 SEQ=2119295268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7AF04F0000000001030307) Nov 23 04:30:44 localhost python3.9[233793]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890243.3041615-740-121632600861569/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 
checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:44 localhost python3.9[233901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:45 localhost python3.9[233987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890244.372271-740-168757037954295/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12892 DF PROTO=TCP SPT=57822 DPT=9100 SEQ=3983838359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7AF80F0000000001030307) Nov 23 04:30:45 localhost python3.9[234131]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:46 localhost python3.9[234237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890245.4403028-740-194078259363735/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:46 localhost python3.9[234357]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:47 localhost python3.9[234461]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890246.5146937-740-245401996361646/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:48 localhost python3.9[234569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:30:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7545 DF PROTO=TCP SPT=42338 DPT=9882 SEQ=1441346858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B040F0000000001030307) Nov 23 04:30:48 localhost podman[234627]: 2025-11-23 09:30:48.905968028 +0000 UTC m=+0.082779688 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller) Nov 23 04:30:48 localhost podman[234627]: 2025-11-23 09:30:48.93837848 +0000 UTC m=+0.115190150 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller) Nov 23 04:30:48 localhost systemd[1]: 
900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:30:48 localhost podman[234625]: 2025-11-23 09:30:48.955256372 +0000 UTC m=+0.133599801 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:30:48 localhost podman[234625]: 2025-11-23 09:30:48.985285668 +0000 UTC m=+0.163629097 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118) Nov 23 04:30:48 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:30:49 localhost python3.9[234677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890247.5707808-740-88246439449696/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:49 localhost python3.9[234807]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:50 localhost python3.9[234893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890249.2004616-740-265085870329772/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:51 localhost python3.9[235001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:51 localhost python3.9[235087]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890250.9473338-740-221649173069178/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17580 DF PROTO=TCP SPT=38260 DPT=9105 SEQ=2119295268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B100F0000000001030307) Nov 23 04:30:52 localhost python3.9[235195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:52 localhost 
python3.9[235281]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890252.0073779-740-171710052990955/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:30:53 localhost python3.9[235391]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:30:54 localhost python3.9[235501]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:30:54 localhost systemd[1]: Reloading. Nov 23 04:30:54 localhost systemd-sysv-generator[235530]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:30:54 localhost systemd-rc-local-generator[235525]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:30:54 localhost systemd[1]: Listening on Podman API Socket. 
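With podman.socket enabled and started ("Listening on Podman API Socket" just above), the Podman REST API is reachable over its unix socket; the podman_exporter configured a few tasks earlier presumably collects its container metrics through that API. A quick way to confirm the socket answers, assuming the default rootful socket that this unit activates:

    systemctl status podman.socket
    podman --remote version    # --remote makes the CLI act as a client of the API socket instead of driving containers directly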
Nov 23 04:30:55 localhost python3.9[235651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:56 localhost python3.9[235739]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890255.2268705-1256-210661233950826/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:30:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10884 DF PROTO=TCP SPT=44454 DPT=9102 SEQ=2970623039 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B21900000000001030307) Nov 23 04:30:56 localhost python3.9[235794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:30:57 localhost python3.9[235882]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890255.2268705-1256-210661233950826/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:30:58 localhost python3.9[235992]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False Nov 23 04:30:59 localhost python3.9[236102]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:30:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. 
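The healthcheck and healthcheck.future scripts copied above land in /var/lib/openstack/healthchecks/ceilometer_agent_compute/, the directory that the container definition later in this log bind-mounts read-only at /openstack and registers as the container health check ('test': '/openstack/healthcheck compute'). systemd then drives the check through the per-container "podman healthcheck run <container id>" units that appear throughout this log. Once the edpm ceilometer_agent_compute container has been created further down, the same check can be exercised by hand:

    podman healthcheck run ceilometer_agent_compute && echo healthy   # exit status 0 when the mounted script passes
    podman ps --filter name=ceilometer_agent_compute --format '{{.Names}} {{.Status}}'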
Nov 23 04:30:59 localhost podman[236191]: 2025-11-23 09:30:59.904799849 +0000 UTC m=+0.083152692 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:30:59 localhost podman[236191]: 2025-11-23 09:30:59.916764365 +0000 UTC m=+0.095117198 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Nov 23 04:30:59 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:31:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21313 DF PROTO=TCP SPT=34290 DPT=9100 SEQ=708702585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B302E0000000001030307) Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.947 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.948 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.948 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.949 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.961 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.961 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.962 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.962 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.963 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.963 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.964 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.964 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.964 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.976 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.977 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.977 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.977 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:31:00 localhost nova_compute[229707]: 2025-11-23 09:31:00.978 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:31:01 localhost python3[236231]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:31:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21314 DF PROTO=TCP SPT=34290 DPT=9100 SEQ=708702585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B34500000000001030307) Nov 23 04:31:01 localhost python3[236231]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5b3bac081df6146e06acefa72320d250dc7d5f82abc7fbe0b9e83aec1e1587f5",#012 "Digest": "sha256:9f9f367ed4c85efb16c3a74a4bb707ff0db271d7bc5abc70a71e984b55f43003",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:9f9f367ed4c85efb16c3a74a4bb707ff0db271d7bc5abc70a71e984b55f43003"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-21T06:23:50.144134741Z",#012 "Config": 
{#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 505196287,#012 "VirtualSize": 505196287,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96/diff:/var/lib/containers/storage/overlay/0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012 "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012 "sha256:5a71e5d7d31f15255619cb8b9384b708744757c93993652418b0f45b0c0931d5",#012 "sha256:4ff7b15b3989ce3486d1ee120e82ba5b4acb5e4ad1d931e92c8d8e0851a32a6a",#012 "sha256:847ae301d478780c04ade872e138a0bd4b67a423f03bd51e3a177105d1684cb3"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-18T01:56:49.795434035Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:49.795512415Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251118\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:52.547242013Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-21T06:10:01.947310748Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 
"empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947327778Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947358359Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947372589Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94738527Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94739397Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:02.324930938Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:36.349393468Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 Nov 23 04:31:01 localhost podman[236303]: 2025-11-23 09:31:01.386627896 +0000 UTC m=+0.076250854 container remove 131fb75a289b5b981a3e291b5fec2b51bbc2ae56a38832d863dbf9eccb45f182 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'cdd192006d3eee4976a7ad00d48f6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 
'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git) Nov 23 04:31:01 localhost python3[236231]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute Nov 23 04:31:01 localhost nova_compute[229707]: 2025-11-23 09:31:01.434 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:31:01 localhost podman[236317]: Nov 23 04:31:01 localhost podman[236317]: 2025-11-23 09:31:01.515126524 +0000 UTC m=+0.107959202 container create 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118) Nov 23 04:31:01 localhost podman[236317]: 2025-11-23 09:31:01.462830927 +0000 UTC m=+0.055663635 image pull 
quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Nov 23 04:31:01 localhost python3[236231]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start Nov 23 04:31:01 localhost nova_compute[229707]: 2025-11-23 09:31:01.634 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:31:01 localhost nova_compute[229707]: 2025-11-23 09:31:01.635 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=13560MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:31:01 localhost nova_compute[229707]: 2025-11-23 09:31:01.635 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:31:01 localhost nova_compute[229707]: 2025-11-23 09:31:01.636 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:31:01 localhost nova_compute[229707]: 2025-11-23 09:31:01.711 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:31:01 localhost nova_compute[229707]: 2025-11-23 09:31:01.711 229711 DEBUG 
nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:31:01 localhost nova_compute[229707]: 2025-11-23 09:31:01.737 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:31:02 localhost nova_compute[229707]: 2025-11-23 09:31:02.179 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:31:02 localhost nova_compute[229707]: 2025-11-23 09:31:02.185 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:31:02 localhost nova_compute[229707]: 2025-11-23 09:31:02.197 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:31:02 localhost nova_compute[229707]: 2025-11-23 09:31:02.198 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:31:02 localhost nova_compute[229707]: 2025-11-23 09:31:02.198 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:31:02 localhost python3.9[236485]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:31:03 localhost python3.9[236599]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Nov 23 04:31:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39061 DF PROTO=TCP SPT=45604 DPT=9100 SEQ=610399048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B40100000000001030307) Nov 23 04:31:04 localhost python3.9[236708]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890263.684097-1448-113494570997030/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:31:05 localhost python3.9[236763]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:31:05 localhost systemd[1]: Reloading. Nov 23 04:31:05 localhost systemd-rc-local-generator[236787]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:31:05 localhost systemd-sysv-generator[236794]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:05 localhost python3.9[236854]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:31:05 localhost systemd[1]: Reloading. Nov 23 04:31:06 localhost systemd-rc-local-generator[236879]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:31:06 localhost systemd-sysv-generator[236885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
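[editor's sketch] The two ansible-systemd tasks above boil down to a daemon-reload followed by enabling and restarting the generated unit. A minimal manual equivalent, assuming the unit file has already been installed at /etc/systemd/system/edpm_ceilometer_agent_compute.service by the ansible-copy task shown earlier:

    # Pick up the freshly copied unit file, then enable and (re)start it.
    systemctl daemon-reload
    systemctl enable edpm_ceilometer_agent_compute.service
    systemctl restart edpm_ceilometer_agent_compute.service
    # Confirm the wrapper unit (and therefore the container) came up.
    systemctl status edpm_ceilometer_agent_compute.service --no-pager

The generator warnings printed on each "Reloading." (rc.local not executable, the SysV network script, the notify-reload service type, the MemoryLimit= deprecation) concern other units on this host, not the ceilometer unit being managed here.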
Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:06 localhost systemd[1]: Starting ceilometer_agent_compute container... Nov 23 04:31:06 localhost systemd[1]: Started libcrun container. Nov 23 04:31:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8edeb11e2b69e7424af3fef5da1d1fde6bc3545b3ad1c998bac171dcd2318af/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Nov 23 04:31:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8edeb11e2b69e7424af3fef5da1d1fde6bc3545b3ad1c998bac171dcd2318af/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Nov 23 04:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. 
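[editor's sketch] systemd runs the container's healthcheck as a transient "/usr/bin/podman healthcheck run <container-id>" service, as logged above. A minimal way to exercise and inspect that check by hand, using the container name from the log (inspect output layout may vary by podman version):

    # Run the configured check once; the exit code mirrors healthy/unhealthy.
    podman healthcheck run ceilometer_agent_compute; echo "exit=$?"
    # Look at the recorded health state and recent check results.
    podman inspect ceilometer_agent_compute | grep -i -A 10 '"Health'

Later in this log the same transient unit exits with status 1 and podman prints "unhealthy", so re-running the check command configured above (/openstack/healthcheck compute) is a reasonable first step when that appears.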
Nov 23 04:31:06 localhost podman[236895]: 2025-11-23 09:31:06.471172716 +0000 UTC m=+0.139293560 container init 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: + sudo -E kolla_set_configs Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: sudo: unable to send audit message: Operation not permitted Nov 23 04:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. 
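[editor's sketch] kolla_start begins by running kolla_set_configs, which reads /var/lib/kolla/config_files/config.json (bind-mounted from /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json per the volume list above) and copies the listed files into place. Assuming the container is up, the inputs it acted on can be checked directly:

    # The JSON consumed by kolla_set_configs (source/destination entries).
    podman exec ceilometer_agent_compute cat /var/lib/kolla/config_files/config.json
    # The command kolla_start ultimately execs (written to /run_command, per the trace below).
    podman exec ceilometer_agent_compute cat /run_command

The "sudo: unable to send audit message: Operation not permitted" lines typically come from sudo lacking CAP_AUDIT_WRITE inside the container; they are harmless here, as the copy steps that follow show.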
Nov 23 04:31:06 localhost podman[236895]: 2025-11-23 09:31:06.507450379 +0000 UTC m=+0.175571173 container start 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:31:06 localhost podman[236895]: ceilometer_agent_compute Nov 23 04:31:06 localhost systemd[1]: Started ceilometer_agent_compute container. 
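[editor's sketch] The container was created with --log-driver journald, so its stdout (the kolla trace and the ceilometer-polling output that follows) lands in the host journal. A minimal way to follow it, assuming podman's journald driver tags entries with the container name as it normally does:

    # Follow the container's output via podman (reads from the journal with this driver).
    podman logs -f ceilometer_agent_compute
    # Or query the journal directly by container name.
    journalctl -f CONTAINER_NAME=ceilometer_agent_compute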
Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Validating config file Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Copying service configuration files Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: INFO:__main__:Writing out command to execute Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: ++ cat /run_command Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: + ARGS= Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: + sudo kolla_copy_cacerts Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: sudo: unable to send audit message: Operation not permitted Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: + [[ ! -n '' ]] Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: + . 
kolla_extend_start Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: + umask 0022 Nov 23 04:31:06 localhost ceilometer_agent_compute[236909]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Nov 23 04:31:06 localhost podman[236918]: 2025-11-23 09:31:06.598248449 +0000 UTC m=+0.085069331 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:31:06 localhost podman[236918]: 2025-11-23 09:31:06.627974236 +0000 UTC m=+0.114795118 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:31:06 localhost podman[236918]: unhealthy Nov 23 04:31:06 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:31:06 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Failed with result 'exit-code'. Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.296 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.297 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.297 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.297 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.297 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.297 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.297 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.297 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.297 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.297 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.297 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.298 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost 
ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.299 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost 
ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.300 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.301 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG 
cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.302 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.303 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.304 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG 
cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.305 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.306 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] 
vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.307 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.308 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.309 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.310 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.310 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.310 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.310 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.310 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.310 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.310 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.310 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Nov 23 04:31:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21316 DF PROTO=TCP SPT=34290 DPT=9100 SEQ=708702585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B4C0F0000000001030307) Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.326 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.328 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.329 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. 
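[Editor's note] The "Full set of CONF" blocks in this log (one just ended above for worker 2, another begins below for worker 12) are produced by the cotyledon oslo_config glue (_load_service_options at oslo_config_glue.py:48), which asks oslo.config to dump every registered option at DEBUG, masking options marked secret as '****'. A minimal sketch of that mechanism, assuming only oslo.config and the standard logging module; the option names registered here are hypothetical stand-ins, not the agent's real option set:

# Sketch of the option dump seen in this journal; option names are illustrative.
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts([
    cfg.IntOpt('batch_size', default=50),
    cfg.StrOpt('cfg_file', default='polling.yaml'),
    cfg.StrOpt('transport_url', secret=True),  # secret options are logged as '****'
])

CONF([])  # resolve option values (empty command line)

# This call produces the "Full set of CONF" block: one DEBUG line per option,
# bracketed by the '*' and '=' separator lines visible in the log.
CONF.log_opt_values(LOG, logging.DEBUG)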
Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.418 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.475 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.475 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.475 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.475 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.476 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.476 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.476 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.476 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.476 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.476 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.476 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.476 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.476 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.476 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 
'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.477 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 
Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.478 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 
23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.479 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.480 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.481 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG 
cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.482 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.483 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.484 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost 
ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.485 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.486 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.487 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.488 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.489 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.490 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.491 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.492 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.493 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.493 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.493 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.494 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.499 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.502 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.502 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.502 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.502 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.502 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.503 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.503 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.503 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.503 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.503 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.503 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.503 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.504 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.504 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no 
resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.504 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.504 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.504 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.504 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.504 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.505 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.505 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.505 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.505 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.505 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:07 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:07.505 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:08 localhost python3.9[237055]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:31:09 localhost systemd[1]: Stopping ceilometer_agent_compute container... 
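Annotation: the run of "Skip pollster ..., no resources found this cycle" DEBUG lines above is what the polling manager emits when a pollster's discovery step returns nothing to sample (here, no instances are running on this compute node yet). The sketch below is not ceilometer's code; it is a minimal Python illustration of how the meter patterns from the logged "Config file:" line ('disk.*', 'network.*', ...) can expand to concrete pollster names and how an empty discovery result leads to a skip. The pollster list and the discovery stub are assumptions made only for this example.

    import fnmatch

    # Source definition quoted from the "Config file:" DEBUG line above.
    source = {'name': 'pollsters', 'interval': 120,
              'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}

    # Hypothetical subset of pollster names available in the compute namespace.
    available = ['cpu', 'memory.usage', 'power.state',
                 'disk.device.read.bytes', 'disk.device.write.requests',
                 'network.incoming.bytes', 'network.outgoing.packets']

    def discover_resources():
        # Stand-in for instance discovery; returns [] when no VMs are found,
        # which is the situation the "no resources found this cycle" lines describe.
        return []

    matched = [name for name in available
               if any(fnmatch.fnmatch(name, pattern) for pattern in source['meters'])]

    for pollster in matched:
        resources = discover_resources()
        if not resources:
            print(f"Skip pollster {pollster}, no resources found this cycle")
            continue
        # A real agent would poll each discovered resource and publish samples here.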
Nov 23 04:31:09 localhost systemd[1]: tmp-crun.XmFCvW.mount: Deactivated successfully. Nov 23 04:31:09 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:09.527 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process Nov 23 04:31:09 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:09.628 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304 Nov 23 04:31:09 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:09.628 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308 Nov 23 04:31:09 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:09.629 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12] Nov 23 04:31:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:31:09.712 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:31:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:31:09.714 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:31:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:31:09.714 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:31:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65286 DF PROTO=TCP SPT=50004 DPT=9105 SEQ=3584298735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B55D00000000001030307) Nov 23 04:31:11 localhost ceilometer_agent_compute[236909]: 2025-11-23 09:31:11.075 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320 Nov 23 04:31:11 localhost journal[229251]: End of file while reading data: Input/output error Nov 23 04:31:11 localhost journal[229251]: End of file while reading data: Input/output error Nov 23 04:31:11 localhost systemd[1]: libpod-2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.scope: Deactivated successfully. Nov 23 04:31:11 localhost systemd[1]: libpod-2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.scope: Consumed 1.154s CPU time. 
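Annotation: the SIGTERM sequence above (master process "Caught SIGTERM", "Killing services with signal SIGTERM", "Waiting services to terminate", then "Shutdown finish") is the cotyledon service manager shutting its polling worker down gracefully before podman stops the container. The snippet below is only a stdlib sketch of that parent/child pattern, not cotyledon's implementation: a parent process forwards SIGTERM to one worker and waits for it to exit.

    import multiprocessing
    import signal
    import sys
    import time

    def worker():
        # Child: exit cleanly when SIGTERM arrives, mirroring
        # "Caught SIGTERM signal, graceful exiting of service AgentManager(0)".
        signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))
        while True:
            time.sleep(1)

    if __name__ == '__main__':
        child = multiprocessing.Process(target=worker)
        child.start()

        def shutdown(signum, frame):
            # Parent: "Killing services with signal SIGTERM" ...
            child.terminate()            # sends SIGTERM to the child on Unix
            # ... then "Waiting services to terminate"
            child.join(timeout=60)       # cf. graceful_shutdown_timeout = 60 in the config dump
            print("Shutdown finish")
            sys.exit(0)

        signal.signal(signal.SIGTERM, shutdown)
        signal.pause()                   # block until systemd/podman delivers SIGTERM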
Nov 23 04:31:11 localhost podman[237059]: 2025-11-23 09:31:11.202635369 +0000 UTC m=+1.744051271 container died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:31:11 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.timer: Deactivated successfully. Nov 23 04:31:11 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:31:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05-userdata-shm.mount: Deactivated successfully. Nov 23 04:31:11 localhost systemd[1]: var-lib-containers-storage-overlay-f8edeb11e2b69e7424af3fef5da1d1fde6bc3545b3ad1c998bac171dcd2318af-merged.mount: Deactivated successfully. 
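Annotation: the "container died" and "container cleanup" events above and below carry the edpm_ansible config_data dict for ceilometer_agent_compute (image, user, 'net': 'host', restart policy, security_opt, environment, bind mounts, healthcheck). The Python sketch below only illustrates how such a dict could map onto podman run flags; it is not how edpm_ansible actually builds the container, the volume list is deliberately truncated, and the healthcheck key is left out because EDPM drives it through the "/usr/bin/podman healthcheck run ..." systemd units seen in this log.

    # Minimal sketch: turn a subset of the logged config_data into podman run arguments.
    config_data = {
        'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified',
        'user': 'ceilometer',
        'restart': 'always',
        'command': 'kolla_start',
        'security_opt': 'label:type:ceilometer_polling_t',
        'net': 'host',
        'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'},
        'volumes': ['/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro'],  # subset only
    }

    args = ['podman', 'run', '--name', 'ceilometer_agent_compute', '--detach',
            '--user', config_data['user'],
            '--net', config_data['net'],
            '--restart', config_data['restart'],
            '--security-opt', config_data['security_opt']]
    for key, value in config_data['environment'].items():
        args += ['--env', f'{key}={value}']
    for volume in config_data['volumes']:
        args += ['--volume', volume]
    args += [config_data['image'], config_data['command']]

    print(' '.join(args))   # inspect the resulting command line; nothing is executed here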
Nov 23 04:31:11 localhost podman[237059]: 2025-11-23 09:31:11.256976851 +0000 UTC m=+1.798392703 container cleanup 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:31:11 localhost podman[237059]: ceilometer_agent_compute Nov 23 04:31:11 localhost podman[237086]: 2025-11-23 09:31:11.34202619 +0000 UTC m=+0.057696588 container cleanup 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Nov 23 04:31:11 localhost podman[237086]: ceilometer_agent_compute Nov 23 04:31:11 localhost systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully. Nov 23 04:31:11 localhost systemd[1]: Stopped ceilometer_agent_compute container. Nov 23 04:31:11 localhost systemd[1]: Starting ceilometer_agent_compute container... Nov 23 04:31:11 localhost systemd[1]: Started libcrun container. Nov 23 04:31:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8edeb11e2b69e7424af3fef5da1d1fde6bc3545b3ad1c998bac171dcd2318af/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Nov 23 04:31:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8edeb11e2b69e7424af3fef5da1d1fde6bc3545b3ad1c998bac171dcd2318af/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Nov 23 04:31:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:31:11 localhost podman[237097]: 2025-11-23 09:31:11.500726331 +0000 UTC m=+0.125954790 container init 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: + sudo -E kolla_set_configs Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: sudo: unable to send audit message: Operation not permitted Nov 23 04:31:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. 
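Annotation: the "Started /usr/bin/podman healthcheck run 2103ec2b..." units above are the transient systemd service/timer pair that periodically executes the container's configured healthcheck ('/openstack/healthcheck compute' in config_data). Further down the same check reports "unhealthy" and the transient unit fails with status=1/FAILURE while the agent is still starting up. The snippet below is a hedged sketch of running the same check by hand through subprocess; podman healthcheck run exits 0 when the check passes and non-zero otherwise.

    import subprocess

    # Run the container's configured healthcheck once, the same way the
    # transient "podman healthcheck run <id>" systemd unit does.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ceilometer_agent_compute'],
        capture_output=True, text=True)

    if result.returncode == 0:
        print('healthy')
    else:
        # A non-zero exit is what systemd later records as "status=1/FAILURE".
        print('unhealthy:', result.returncode, result.stdout.strip(), result.stderr.strip())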
Nov 23 04:31:11 localhost podman[237097]: 2025-11-23 09:31:11.532174151 +0000 UTC m=+0.157402570 container start 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute) Nov 23 04:31:11 localhost podman[237097]: ceilometer_agent_compute Nov 23 04:31:11 localhost systemd[1]: Started ceilometer_agent_compute container. 
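Annotation: the kolla_set_configs output that follows ("Loading config file at /var/lib/kolla/config_files/config.json", the Deleting/Copying/Setting permission triplets, and "Writing out command to execute") is driven by the ceilometer-agent-compute.json file bind-mounted per config_data. Below is a minimal, assumed sketch of what such a config.json could look like, written as a Python dict and serialized with json: the command string and the source/dest pairs are taken from the log lines that follow, while the owner and perm fields are illustrative placeholders not confirmed by this log.

    import json

    # Assumed kolla-style config.json; owner/perm values are placeholders.
    kolla_config = {
        'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout',
        'config_files': [
            {'source': '/var/lib/openstack/config/ceilometer.conf',
             'dest': '/etc/ceilometer/ceilometer.conf',
             'owner': 'ceilometer', 'perm': '0600'},
            {'source': '/var/lib/openstack/config/polling.yaml',
             'dest': '/etc/ceilometer/polling.yaml',
             'owner': 'ceilometer', 'perm': '0600'},
            {'source': '/var/lib/openstack/config/custom.conf',
             'dest': '/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf',
             'owner': 'ceilometer', 'perm': '0600'},
            {'source': '/var/lib/openstack/config/ceilometer-host-specific.conf',
             'dest': '/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf',
             'owner': 'ceilometer', 'perm': '0600'},
        ],
    }

    print(json.dumps(kolla_config, indent=2))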
Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Validating config file Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Copying service configuration files Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: INFO:__main__:Writing out command to execute Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: ++ cat /run_command Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: + ARGS= Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: + sudo kolla_copy_cacerts Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: sudo: unable to send audit message: Operation not permitted Nov 23 04:31:11 localhost podman[237120]: 2025-11-23 09:31:11.625300335 +0000 UTC m=+0.086148564 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: + [[ ! -n '' ]] Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: + . kolla_extend_start Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: + umask 0022 Nov 23 04:31:11 localhost ceilometer_agent_compute[237112]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Nov 23 04:31:11 localhost podman[237120]: 2025-11-23 09:31:11.659359919 +0000 UTC m=+0.120208158 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm) Nov 23 04:31:11 localhost podman[237120]: unhealthy Nov 23 04:31:11 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:31:11 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Failed with result 'exit-code'. Nov 23 04:31:12 localhost python3.9[237252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.371 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.371 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.371 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.371 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.371 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.371 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.371 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.371 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.371 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.371 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.372 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost 
ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.373 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost 
ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.374 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.375 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG 
cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.376 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.377 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.378 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 
DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.379 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.380 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.381 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.381 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.381 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.381 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.381 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.381 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.381 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.381 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.381 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.381 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location 
= None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.382 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 
04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.383 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 
localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.384 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.401 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.403 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.404 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.420 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.541 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.541 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 
04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.542 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost 
ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.543 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] 
logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.544 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.545 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.546 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] 
event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.547 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 
2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.549 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.550 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.551 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost 
ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.552 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 
12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.553 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.554 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.554 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.554 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.554 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.554 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.554 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.555 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.555 12 DEBUG 
cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.555 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.555 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.555 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.555 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.555 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.555 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.555 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.555 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 
localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.556 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.557 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.558 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.559 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.560 12 DEBUG 
cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.560 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.563 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.570 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 
09:31:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 
localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:31:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:31:12 localhost python3.9[237343]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890271.7402365-1544-95232209209489/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:31:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65287 DF PROTO=TCP SPT=50004 DPT=9105 SEQ=3584298735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B658F0000000001030307) Nov 23 04:31:14 localhost python3.9[237456]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False Nov 23 04:31:14 localhost python3.9[237566]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:31:16 localhost python3[237676]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:31:16 localhost podman[237715]: Nov 23 04:31:16 localhost podman[237715]: 2025-11-23 09:31:16.877645677 +0000 UTC m=+0.084847356 container create 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors , config_id=edpm) Nov 23 04:31:16 localhost podman[237715]: 2025-11-23 09:31:16.839774616 +0000 UTC m=+0.046976295 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Nov 23 04:31:16 localhost python3[237676]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl Nov 23 04:31:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13978 DF PROTO=TCP SPT=33094 DPT=9882 SEQ=2994283448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B72330000000001030307) Nov 23 04:31:18 localhost python3.9[237862]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:31:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=8132 DF PROTO=TCP SPT=56120 DPT=9882 SEQ=815301546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B7A0F0000000001030307) Nov 23 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:31:19 localhost podman[237976]: 2025-11-23 09:31:19.488489451 +0000 UTC m=+0.090983462 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:31:19 localhost podman[237975]: 2025-11-23 09:31:19.562658004 +0000 UTC m=+0.166022812 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:31:19 localhost podman[237976]: 2025-11-23 09:31:19.568279684 +0000 UTC m=+0.170773755 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 04:31:19 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
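The ceilometer_agent_compute records above dump every effective [oslo_messaging_rabbit] option (heartbeat_timeout_threshold = 60, rabbit_quorum_queue = False, ssl = False, and so on) via cotyledon.oslo_config_glue's log_opt_values. A minimal sketch for pulling those key/value pairs out of a journal text export; the input file name is hypothetical:

```python
import re
from pathlib import Path

# Matches the cotyledon.oslo_config_glue records above, e.g.
#   ... DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values ...
OPT_RE = re.compile(r"oslo_messaging_rabbit\.(\S+) = (.*?)\s*log_opt_values")

def rabbit_options(journal_text: str) -> dict:
    """Collect the dumped [oslo_messaging_rabbit] options into a dict of strings."""
    return dict(OPT_RE.findall(journal_text))

if __name__ == "__main__":
    # "ceilometer.log" is a hypothetical journalctl export, not a file named in this log.
    text = Path("ceilometer.log").read_text()
    opts = rabbit_options(text)
    print(opts.get("heartbeat_timeout_threshold"), opts.get("rabbit_quorum_queue"))
```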
Nov 23 04:31:19 localhost python3.9[237974]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:31:19 localhost podman[237975]: 2025-11-23 09:31:19.596258149 +0000 UTC m=+0.199622867 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:31:19 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:31:20 localhost python3.9[238125]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890279.6672752-1703-270748962599822/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:31:20 localhost python3.9[238180]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:31:20 localhost systemd[1]: Reloading. Nov 23 04:31:20 localhost systemd-sysv-generator[238205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 23 04:31:20 localhost systemd-rc-local-generator[238201]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:21 localhost python3.9[238271]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:31:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65288 DF PROTO=TCP SPT=50004 DPT=9105 SEQ=3584298735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B860F0000000001030307) Nov 23 04:31:22 localhost systemd[1]: Reloading. Nov 23 04:31:22 localhost systemd-rc-local-generator[238296]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:31:22 localhost systemd-sysv-generator[238301]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:23 localhost systemd[1]: Starting node_exporter container... Nov 23 04:31:23 localhost systemd[1]: Started libcrun container. Nov 23 04:31:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:31:23 localhost podman[238311]: 2025-11-23 09:31:23.37124309 +0000 UTC m=+0.178306155 container init 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.392Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.392Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.392Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." 
Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.393Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.393Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.393Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.393Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=arp Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=bcache Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=bonding Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=btrfs Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=conntrack Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=cpu Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=cpufreq Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=diskstats Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=edac Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=fibrechannel Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=filefd Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=filesystem Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=infiniband Nov 23 04:31:23 localhost 
node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=ipvs Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=loadavg Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=mdadm Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=meminfo Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=netclass Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=netdev Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=netstat Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=nfs Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=nfsd Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=nvme Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=schedstat Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=sockstat Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=softnet Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=systemd Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=tapestats Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=udp_queues Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=vmstat Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=xfs Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.394Z caller=node_exporter.go:117 level=info collector=zfs Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.395Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Nov 23 04:31:23 localhost node_exporter[238325]: ts=2025-11-23T09:31:23.395Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Nov 23 04:31:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
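node_exporter reports "Listening on" address=[::]:9100 with TLS disabled, and the container runs with host networking, so a plain HTTP GET from the node itself should reach it. A quick reachability check, a sketch only; the test just looks for any node_* sample in the scrape output:

```python
from urllib.request import urlopen

def node_exporter_up(host: str = "localhost", port: int = 9100, timeout: float = 3.0) -> bool:
    """Return True if the exporter answers on /metrics with at least one node_* sample."""
    try:
        with urlopen(f"http://{host}:{port}/metrics", timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except OSError:
        return False
    return any(line.startswith("node_") for line in body.splitlines())

if __name__ == "__main__":
    print("node_exporter reachable:", node_exporter_up())
```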
Nov 23 04:31:23 localhost podman[238311]: 2025-11-23 09:31:23.418181362 +0000 UTC m=+0.225244387 container start 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:31:23 localhost podman[238311]: node_exporter Nov 23 04:31:23 localhost systemd[1]: Started node_exporter container. Nov 23 04:31:23 localhost podman[238334]: 2025-11-23 09:31:23.525110653 +0000 UTC m=+0.101201539 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:31:23 localhost podman[238334]: 2025-11-23 09:31:23.534708111 +0000 UTC m=+0.110798997 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:31:23 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:31:24 localhost python3.9[238466]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:31:24 localhost systemd[1]: tmp-crun.VReGyp.mount: Deactivated successfully. Nov 23 04:31:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24358 DF PROTO=TCP SPT=49290 DPT=9101 SEQ=3340724581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7B900F0000000001030307) Nov 23 04:31:25 localhost systemd[1]: Stopping node_exporter container... Nov 23 04:31:25 localhost systemd[1]: libpod-8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.scope: Deactivated successfully. 
Nov 23 04:31:25 localhost podman[238470]: 2025-11-23 09:31:25.342394889 +0000 UTC m=+0.077861623 container died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:31:25 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.timer: Deactivated successfully. Nov 23 04:31:25 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:31:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58-userdata-shm.mount: Deactivated successfully. 
Nov 23 04:31:25 localhost podman[238470]: 2025-11-23 09:31:25.387941926 +0000 UTC m=+0.123408640 container cleanup 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:31:25 localhost podman[238470]: node_exporter Nov 23 04:31:25 localhost systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Nov 23 04:31:25 localhost podman[238497]: 2025-11-23 09:31:25.489409262 +0000 UTC m=+0.066712995 container cleanup 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:31:25 localhost podman[238497]: node_exporter Nov 23 04:31:25 localhost systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'. Nov 23 04:31:25 localhost systemd[1]: Stopped node_exporter container. Nov 23 04:31:25 localhost systemd[1]: Starting node_exporter container... Nov 23 04:31:25 localhost systemd[1]: Started libcrun container. 
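The restart requested by ansible-ansible.builtin.systemd above briefly leaves edpm_node_exporter.service failed (Main process exited, code=exited, status=2/INVALIDARGUMENT; result 'exit-code') before systemd starts it again. A small triage sketch for that window, assuming it runs on the compute node with systemctl and journalctl available:

```python
import subprocess

UNIT = "edpm_node_exporter.service"

def show(unit: str, *props: str) -> dict:
    """Read selected systemd properties for a unit via `systemctl show -p ...`."""
    out = subprocess.run(
        ["systemctl", "show", unit, "-p", ",".join(props)],
        capture_output=True, text=True, check=True,
    ).stdout
    return dict(line.split("=", 1) for line in out.splitlines() if "=" in line)

if __name__ == "__main__":
    state = show(UNIT, "ActiveState", "Result", "ExecMainStatus")
    print(state)
    if state.get("ActiveState") != "active":
        # Last few journal records for the unit, the same ones systemd/podman logged above.
        print(subprocess.run(
            ["journalctl", "-u", UNIT, "-n", "20", "--no-pager"],
            capture_output=True, text=True,
        ).stdout)
```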
Nov 23 04:31:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:31:25 localhost podman[238509]: 2025-11-23 09:31:25.686821769 +0000 UTC m=+0.123877755 container init 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.697Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.697Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.697Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." 
Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.697Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.697Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.697Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=arp Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=bcache Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=bonding Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=btrfs Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=conntrack Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=cpu Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=cpufreq Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=diskstats Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=edac Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=fibrechannel Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=filefd Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=filesystem Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=infiniband Nov 23 04:31:25 localhost 
node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=ipvs Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=loadavg Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=mdadm Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=meminfo Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=netclass Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=netdev Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=netstat Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=nfs Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=nfsd Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=nvme Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=schedstat Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=sockstat Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=softnet Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=systemd Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=tapestats Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=udp_queues Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=vmstat Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=xfs Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.698Z caller=node_exporter.go:117 level=info collector=zfs Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.699Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Nov 23 04:31:25 localhost node_exporter[238523]: ts=2025-11-23T09:31:25.699Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Nov 23 04:31:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
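The systemd collector is restricted by --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service plus the default unit-exclude shown in the "Parsed flag" records above. A quick check of which unit names those patterns cover; sshd.service is only a counter-example, and node_exporter's exact matching semantics are not taken from this log:

```python
import re

# Patterns copied verbatim from the "Parsed flag" records above.
INCLUDE = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
EXCLUDE = re.compile(r".+\.(automount|device|mount|scope|slice)")

UNITS = [
    "edpm_node_exporter.service",   # from this log
    "openvswitch.service",          # from this log (container depends_on)
    "virtqemud.service",            # from this log
    "sshd.service",                 # counter-example, not in the include list
]

for unit in UNITS:
    collected = bool(INCLUDE.fullmatch(unit)) and not EXCLUDE.fullmatch(unit)
    print(f"{unit}: {'collected' if collected else 'ignored'}")
```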
Nov 23 04:31:25 localhost podman[238509]: 2025-11-23 09:31:25.712966485 +0000 UTC m=+0.150022471 container start 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:31:25 localhost podman[238509]: node_exporter Nov 23 04:31:25 localhost systemd[1]: Started node_exporter container. Nov 23 04:31:25 localhost podman[238532]: 2025-11-23 09:31:25.79970036 +0000 UTC m=+0.081755737 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:31:25 localhost podman[238532]: 2025-11-23 09:31:25.833387728 +0000 UTC m=+0.115443105 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:31:25 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:31:26 localhost python3.9[238663]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:31:27 localhost python3.9[238751]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890285.8986816-1799-152073280812015/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:31:28 localhost python3.9[238861]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False Nov 23 04:31:29 localhost python3.9[238971]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:31:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32018 DF PROTO=TCP SPT=41638 DPT=9100 SEQ=2149339463 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7BA55D0000000001030307) Nov 23 04:31:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. 
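The kernel DROPPING entries here and further down record inbound SYNs from 192.168.122.10 to the exporter ports (9100 first, later 9105, 9882 and 9101) being dropped on br-ex, so those scrape attempts never reach the exporters. A small sketch for summarizing such entries offline; the key=value fields parsed (SRC=, DPT=, PROTO=) are taken from the log format itself, and the sample strings are illustrative stand-ins for real journal lines.

```python
# Count dropped SYNs per (source, destination port) from kernel "DROPPING:" lines.
# The fields parsed here (SRC=, DPT=) match the entries in this log.
import re
from collections import Counter

sample_lines = [
    "DROPPING: IN=br-ex OUT= SRC=192.168.122.10 DST=192.168.122.106 PROTO=TCP SPT=41638 DPT=9100 SYN",
    "DROPPING: IN=br-ex OUT= SRC=192.168.122.10 DST=192.168.122.106 PROTO=TCP SPT=57038 DPT=9105 SYN",
]

drops = Counter()
for line in sample_lines:
    m = re.search(r"SRC=(\S+).*\bDPT=(\d+)", line)
    if m:
        drops[(m.group(1), m.group(2))] += 1

for (src, dport), n in sorted(drops.items()):
    print(f"{n} dropped SYN(s) from {src} to port {dport}")
```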
Nov 23 04:31:30 localhost podman[239082]: 2025-11-23 09:31:30.351093674 +0000 UTC m=+0.089850066 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.vendor=CentOS) Nov 23 04:31:30 localhost podman[239082]: 2025-11-23 09:31:30.362826239 +0000 UTC m=+0.101582571 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS 
Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true) Nov 23 04:31:30 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:31:30 localhost python3[239081]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:31:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32019 DF PROTO=TCP SPT=41638 DPT=9100 SEQ=2149339463 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7BA9500000000001030307) Nov 23 04:31:32 localhost podman[239115]: 2025-11-23 09:31:30.683568712 +0000 UTC m=+0.048273247 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Nov 23 04:31:32 localhost podman[239186]: Nov 23 04:31:32 localhost podman[239186]: 2025-11-23 09:31:32.643390456 +0000 UTC m=+0.077996326 container create a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi , config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible) Nov 23 04:31:32 localhost podman[239186]: 2025-11-23 09:31:32.604494712 +0000 UTC m=+0.039100612 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Nov 23 04:31:32 localhost python3[239081]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume 
/run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Nov 23 04:31:33 localhost python3.9[239332]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:31:34 localhost python3.9[239444]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:31:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12894 DF PROTO=TCP SPT=57822 DPT=9100 SEQ=3983838359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7BB60F0000000001030307) Nov 23 04:31:34 localhost python3.9[239553]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890294.349556-1958-150383185570285/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:31:35 localhost python3.9[239608]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:31:35 localhost systemd[1]: Reloading. Nov 23 04:31:35 localhost systemd-rc-local-generator[239627]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:31:35 localhost systemd-sysv-generator[239633]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
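The Reloading pass above logs two recurring warnings: the libvirt units use Type=notify-reload, which this host's systemd does not recognize, and insights-client-boot.service still sets the deprecated MemoryLimit=. A sketch of the usual local migration via a drop-in, assuming root access; the 1G value is a placeholder rather than anything taken from the log, and the parse-time warning itself only goes away once the packaged unit file stops using MemoryLimit=.

```python
# Write a drop-in that adds the replacement MemoryMax= setting for the unit
# flagged above. Paths follow the standard systemd drop-in layout; the limit
# value is a placeholder, not from the log.
from pathlib import Path

dropin_dir = Path("/etc/systemd/system/insights-client-boot.service.d")
dropin_dir.mkdir(parents=True, exist_ok=True)
(dropin_dir / "10-memorymax.conf").write_text(
    "[Service]\n"
    "MemoryMax=1G\n"   # placeholder value
)
print("drop-in written; run `systemctl daemon-reload` to pick it up")
```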
Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:36 localhost python3.9[239698]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:31:36 localhost systemd[1]: Reloading. Nov 23 04:31:36 localhost systemd-rc-local-generator[239723]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:31:36 localhost systemd-sysv-generator[239726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:31:36 localhost systemd[1]: Starting podman_exporter container... Nov 23 04:31:37 localhost systemd[1]: Started libcrun container. Nov 23 04:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:31:37 localhost podman[239738]: 2025-11-23 09:31:37.108117106 +0000 UTC m=+0.153323587 container init a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:31:37 localhost podman_exporter[239753]: ts=2025-11-23T09:31:37.134Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)" Nov 23 04:31:37 localhost podman_exporter[239753]: ts=2025-11-23T09:31:37.134Z caller=exporter.go:69 level=info msg=metrics enhanced=false Nov 23 04:31:37 localhost podman_exporter[239753]: ts=2025-11-23T09:31:37.134Z caller=handler.go:94 level=info msg="enabled collectors" Nov 23 04:31:37 localhost podman_exporter[239753]: ts=2025-11-23T09:31:37.134Z caller=handler.go:105 level=info collector=container Nov 23 04:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:31:37 localhost systemd[1]: Starting Podman API Service... Nov 23 04:31:37 localhost systemd[1]: Started Podman API Service. Nov 23 04:31:37 localhost podman[239738]: 2025-11-23 09:31:37.155868514 +0000 UTC m=+0.201074955 container start a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:31:37 localhost podman[239738]: podman_exporter Nov 23 04:31:37 localhost systemd[1]: Started podman_exporter container. 
Nov 23 04:31:37 localhost podman[239764]: time="2025-11-23T09:31:37Z" level=info msg="/usr/bin/podman filtering at log level info" Nov 23 04:31:37 localhost podman[239763]: 2025-11-23 09:31:37.235735789 +0000 UTC m=+0.079637229 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:31:37 localhost podman[239763]: 2025-11-23 09:31:37.243062183 +0000 UTC m=+0.086963603 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:31:37 localhost podman[239763]: unhealthy Nov 23 04:31:37 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:31:37 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. 
Nov 23 04:31:37 localhost podman[239764]: time="2025-11-23T09:31:37Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Nov 23 04:31:37 localhost podman[239764]: time="2025-11-23T09:31:37Z" level=info msg="Setting parallel job count to 25" Nov 23 04:31:37 localhost podman[239764]: time="2025-11-23T09:31:37Z" level=info msg="Using systemd socket activation to determine API endpoint" Nov 23 04:31:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32021 DF PROTO=TCP SPT=41638 DPT=9100 SEQ=2149339463 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7BC10F0000000001030307) Nov 23 04:31:37 localhost podman[239764]: time="2025-11-23T09:31:37Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\"" Nov 23 04:31:37 localhost podman[239764]: @ - - [23/Nov/2025:09:31:37 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1" Nov 23 04:31:37 localhost podman[239764]: time="2025-11-23T09:31:37Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:31:38 localhost systemd[1]: tmp-crun.kiM1Mf.mount: Deactivated successfully. Nov 23 04:31:38 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 23 04:31:38 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:31:38 localhost python3.9[239912]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:31:38 localhost systemd[1]: Stopping podman_exporter container... Nov 23 04:31:38 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:31:38 localhost podman[239764]: @ - - [23/Nov/2025:09:31:37 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 0 "" "Go-http-client/1.1" Nov 23 04:31:38 localhost systemd[1]: libpod-a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.scope: Deactivated successfully. 
Nov 23 04:31:38 localhost podman[239916]: 2025-11-23 09:31:38.373948026 +0000 UTC m=+0.082361286 container died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:31:38 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.timer: Deactivated successfully. Nov 23 04:31:38 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:31:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8-userdata-shm.mount: Deactivated successfully. Nov 23 04:31:39 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:31:39 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 23 04:31:39 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. 
Nov 23 04:31:39 localhost podman[239916]: 2025-11-23 09:31:39.307439803 +0000 UTC m=+1.015853053 container cleanup a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:31:39 localhost podman[239916]: podman_exporter Nov 23 04:31:39 localhost podman[239930]: 2025-11-23 09:31:39.327787844 +0000 UTC m=+0.949913853 container cleanup a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:31:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25532 DF PROTO=TCP SPT=57038 DPT=9105 SEQ=3208692187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7BCB0F0000000001030307) Nov 23 04:31:40 localhost systemd[1]: var-lib-containers-storage-overlay-de8598a48c704b4b44c0ad9a36f7df69a6320f02407a79838dd1715886b16432-merged.mount: Deactivated successfully. 
Nov 23 04:31:40 localhost systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Nov 23 04:31:40 localhost podman[239942]: 2025-11-23 09:31:40.117946485 +0000 UTC m=+0.077303753 container cleanup a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:31:40 localhost podman[239942]: podman_exporter Nov 23 04:31:40 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:31:40 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:31:40 localhost systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'. Nov 23 04:31:40 localhost systemd[1]: Stopped podman_exporter container. Nov 23 04:31:40 localhost systemd[1]: Starting podman_exporter container... Nov 23 04:31:41 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:31:41 localhost systemd[1]: var-lib-containers-storage-overlay-8642444f4e7def654fabd6c894b985d447e27b38f6a08db221eca03ebf9926cd-merged.mount: Deactivated successfully. Nov 23 04:31:41 localhost systemd[1]: Started libcrun container. Nov 23 04:31:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:31:41 localhost podman[239955]: 2025-11-23 09:31:41.339162438 +0000 UTC m=+0.901305338 container init a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:31:41 localhost podman_exporter[239970]: ts=2025-11-23T09:31:41.360Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)" Nov 23 04:31:41 localhost podman_exporter[239970]: ts=2025-11-23T09:31:41.360Z caller=exporter.go:69 level=info msg=metrics enhanced=false Nov 23 04:31:41 localhost podman[239764]: @ - - [23/Nov/2025:09:31:41 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1" Nov 23 04:31:41 localhost podman[239764]: time="2025-11-23T09:31:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:31:41 localhost podman_exporter[239970]: ts=2025-11-23T09:31:41.361Z caller=handler.go:94 level=info msg="enabled collectors" Nov 23 04:31:41 localhost podman_exporter[239970]: ts=2025-11-23T09:31:41.361Z caller=handler.go:105 level=info collector=container Nov 23 04:31:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:31:41 localhost podman[239955]: 2025-11-23 09:31:41.427905848 +0000 UTC m=+0.990048698 container start a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:31:41 localhost podman[239955]: podman_exporter Nov 23 04:31:41 localhost podman[239980]: 2025-11-23 09:31:41.463227448 +0000 UTC m=+0.082076157 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:31:41 localhost podman[239980]: 2025-11-23 09:31:41.499977994 +0000 UTC m=+0.118826683 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:31:41 localhost podman[239980]: unhealthy Nov 23 04:31:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. 
Nov 23 04:31:42 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 23 04:31:42 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:31:42 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:31:43 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:31:43 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:31:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25533 DF PROTO=TCP SPT=57038 DPT=9105 SEQ=3208692187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7BDACF0000000001030307) Nov 23 04:31:43 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:31:43 localhost systemd[1]: Started podman_exporter container. Nov 23 04:31:43 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:31:43 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. 
Nov 23 04:31:43 localhost podman[240003]: 2025-11-23 09:31:43.997368588 +0000 UTC m=+2.186796837 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 04:31:44 localhost podman[240003]: 2025-11-23 09:31:44.030388825 +0000 UTC m=+2.219817014 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:31:44 localhost podman[240003]: unhealthy Nov 23 04:31:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:31:44 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 23 04:31:45 localhost python3.9[240130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:31:45 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 23 04:31:45 localhost python3.9[240218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890304.4896758-2054-233469850132067/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 23 04:31:45 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:31:45 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:31:46 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:31:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:31:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:31:46 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:31:46 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Failed with result 'exit-code'. 
Nov 23 04:31:46 localhost python3.9[240328]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False Nov 23 04:31:46 localhost podman[239764]: time="2025-11-23T09:31:46Z" level=error msg="Getting root fs size for \"0bb7f020b2feb1da86ce96d77a14ce6c3e55e8cc132cea441239bdaa9b52c262\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy" Nov 23 04:31:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:31:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:31:46 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:31:46 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:31:47 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:31:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65038 DF PROTO=TCP SPT=51628 DPT=9882 SEQ=4217141120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7BE7620000000001030307) Nov 23 04:31:47 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:31:47 localhost python3.9[240442]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:31:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:31:47 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:31:47 localhost systemd[1]: var-lib-containers-storage-overlay-8642444f4e7def654fabd6c894b985d447e27b38f6a08db221eca03ebf9926cd-merged.mount: Deactivated successfully. Nov 23 04:31:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:31:48 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
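Alongside the repeated "lowerdir is in-use as upperdir/workdir of another mount" kernel warnings, the Podman API process logs a "device or resource busy" error while unmounting layer efd486ab... during a root-fs-size query. A sketch for listing which overlay layer mounts are still active at that moment, assuming podman's default storage root /var/lib/containers/storage; reading /proc/mounts needs no special privileges.

```python
# List overlay mounts under podman's storage root to see which layer mounts are
# still active when the unmount above fails with "device or resource busy".
STORAGE_ROOT = "/var/lib/containers/storage/overlay/"

with open("/proc/mounts") as mounts:
    for entry in mounts:
        device, mountpoint, fstype = entry.split()[:3]
        if fstype == "overlay" and mountpoint.startswith(STORAGE_ROOT):
            print(mountpoint)
```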
Nov 23 04:31:48 localhost python3[240603]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:31:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13983 DF PROTO=TCP SPT=33094 DPT=9882 SEQ=2994283448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7BEE0F0000000001030307) Nov 23 04:31:48 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:31:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:31:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:31:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:31:50 localhost systemd[1]: var-lib-containers-storage-overlay-1cf48a48241a85605ccaaed1dc6be1d7729c38cf732d2c362c3403cf7e38c508-merged.mount: Deactivated successfully. Nov 23 04:31:50 localhost systemd[1]: var-lib-containers-storage-overlay-1cf48a48241a85605ccaaed1dc6be1d7729c38cf732d2c362c3403cf7e38c508-merged.mount: Deactivated successfully. Nov 23 04:31:50 localhost podman[240630]: 2025-11-23 09:31:50.751802057 +0000 UTC m=+0.937417883 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:31:50 
localhost podman[240631]: 2025-11-23 09:31:50.758082737 +0000 UTC m=+0.943355724 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:31:50 localhost podman[240630]: 2025-11-23 09:31:50.787390856 +0000 UTC m=+0.973006642 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:31:50 localhost podman[240631]: 2025-11-23 09:31:50.814825304 +0000 UTC m=+1.000098200 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Nov 23 04:31:52 localhost systemd[1]: var-lib-containers-storage-overlay-0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96-merged.mount: Deactivated successfully. Nov 23 04:31:52 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully. Nov 23 04:31:52 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully. Nov 23 04:31:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60266 DF PROTO=TCP SPT=53572 DPT=9101 SEQ=2070780540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7BFC0F0000000001030307) Nov 23 04:31:53 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:31:53 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:31:53 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:31:53 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:31:53 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:31:55 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:31:55 localhost systemd[1]: var-lib-containers-storage-overlay-0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96-merged.mount: Deactivated successfully. Nov 23 04:31:55 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
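By this point ovn_metadata_agent and ovn_controller both report health_status=healthy, while podman_exporter and ceilometer_agent_compute were still reporting unhealthy above. A sketch that summarizes health for every container on the node in one pass, assuming podman is on PATH and that its JSON listing exposes the usual Names and Status fields (field names can shift slightly between podman releases).

```python
# Print one "name: status" line per container; the health state ("healthy",
# "unhealthy", "starting") is embedded in the Status string podman reports.
import json
import subprocess

listing = subprocess.run(
    ["podman", "ps", "--all", "--format", "json"],
    capture_output=True, text=True, check=True,
)

for ctr in json.loads(listing.stdout):
    names = ",".join(ctr.get("Names", []))
    print(f"{names}: {ctr.get('Status', 'unknown')}")
```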
Nov 23 04:31:55 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:31:55 localhost systemd[1]: var-lib-containers-storage-overlay-0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96-merged.mount: Deactivated successfully. Nov 23 04:31:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:31:56 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:31:56 localhost podman[240703]: 2025-11-23 09:31:56.420977363 +0000 UTC m=+0.103030027 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:31:56 localhost podman[240703]: 2025-11-23 09:31:56.429913669 +0000 UTC m=+0.111966343 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:31:56 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:31:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32945 DF PROTO=TCP SPT=57334 DPT=9102 SEQ=1210249994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C0C100000000001030307) Nov 23 04:31:56 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:31:56 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:31:57 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:31:57 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:31:58 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:31:58 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:31:58 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:31:58 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:31:59 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:31:59 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:31:59 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:32:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12917 DF PROTO=TCP SPT=46134 DPT=9100 SEQ=48729753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C1A8E0000000001030307) Nov 23 04:32:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:32:00 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully. 
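The dropped SYNs target TCP ports that line up with the 'ports' entries published in the container config_data recorded in this journal: 9100 for node_exporter just above, 9105 for openstack_network_exporter and 9882 for podman_exporter further down. Ports 9101 and 9102 are not tied to any config_data block in this excerpt, so the lookup sketch below, built only from what this log shows, leaves them unknown:

# Port-to-exporter map assembled from the 'ports' lists visible in this journal;
# 9101/9102 appear only in the DROPPING lines and stay unmapped here.
KNOWN_EXPORTER_PORTS = {
    9100: "node_exporter",
    9105: "openstack_network_exporter",
    9882: "podman_exporter",
}

def classify(dport: int) -> str:
    return KNOWN_EXPORTER_PORTS.get(dport, "unknown in this excerpt")

for port in (9100, 9101, 9102, 9105, 9882):
    print(port, "->", classify(port))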
Nov 23 04:32:00 localhost podman[240749]: 2025-11-23 09:32:00.913293025 +0000 UTC m=+0.097955664 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 23 04:32:00 localhost podman[240749]: 2025-11-23 09:32:00.951321253 +0000 UTC m=+0.135983852 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, 
container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:32:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12918 DF PROTO=TCP SPT=46134 DPT=9100 SEQ=48729753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C1E8F0000000001030307) Nov 23 04:32:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:32:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5370 writes, 735 syncs, 7.31 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.192 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.212 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.212 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.212 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.220 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.220 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.220 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.221 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.221 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.238 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.238 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.238 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.239 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.239 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:32:02 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. 
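The oslo_concurrency.lockutils DEBUG lines above record how long each "compute_resources" acquisition waited and how long the lock was held ("waited 0.000s", "held 0.000s"). A sketch that extracts those timings from journal text, assuming the message format shown here:

import re

# Matches the lockutils messages above, e.g.
#   Lock "compute_resources" acquired by "...clean_compute_node_cache" :: waited 0.000s
#   Lock "compute_resources" "released" by "..._update_available_resource" :: held 0.602s
PATTERN = re.compile(r'Lock "(?P<lock>[^"]+)" .*?:: (?P<kind>waited|held) (?P<secs>[0-9.]+)s')

def lock_times(journal_text: str):
    for m in PATTERN.finditer(journal_text):
        yield m.group("lock"), m.group("kind"), float(m.group("secs"))

SAMPLE = ('Lock "compute_resources" acquired by "ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner\n'
          'Lock "compute_resources" "released" by "ResourceTracker.clean_compute_node_cache" :: held 0.000s inner')

for lock, kind, secs in lock_times(SAMPLE):
    print(f"{lock}: {kind} {secs:.3f}s")

Run over the full journal, this would also pick up the 0.602 s hold during the _update_available_resource pass a second later.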
Nov 23 04:32:02 localhost systemd[1]: var-lib-containers-storage-overlay-1cf48a48241a85605ccaaed1dc6be1d7729c38cf732d2c362c3403cf7e38c508-merged.mount: Deactivated successfully. Nov 23 04:32:02 localhost systemd[1]: var-lib-containers-storage-overlay-1cf48a48241a85605ccaaed1dc6be1d7729c38cf732d2c362c3403cf7e38c508-merged.mount: Deactivated successfully. Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.697 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:32:02 localhost systemd[1]: var-lib-containers-storage-overlay-7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc-merged.mount: Deactivated successfully. Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.895 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.897 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=13053MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 
2025-11-23 09:32:02.897 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.898 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.961 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.962 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:32:02 localhost nova_compute[229707]: 2025-11-23 09:32:02.977 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:32:03 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:32:03 localhost systemd[1]: var-lib-containers-storage-overlay-94bc28862446c9d52bea5cb761ece58f4b7ce0b6f4d30585ec973efe7d5007f7-merged.mount: Deactivated successfully. 
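The resource audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" through oslo_concurrency.processutils, and each call returns in roughly half a second. A stand-alone sketch of the same probe; the command is taken from the log, while the "stats"/"total_bytes"/"total_avail_bytes" keys are the usual ceph df JSON fields and should be treated as an assumption:

import json
import subprocess

CMD = ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

def ceph_capacity():
    """Run the same command nova-compute runs above and report cluster capacity in GiB."""
    # 'stats', 'total_bytes' and 'total_avail_bytes' are assumed field names,
    # not something this journal shows.
    out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out).get("stats", {})
    gib = 1024 ** 3
    return stats.get("total_bytes", 0) / gib, stats.get("total_avail_bytes", 0) / gib

if __name__ == "__main__":
    total, avail = ceph_capacity()
    print(f"ceph cluster: {avail:.1f} GiB free of {total:.1f} GiB")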
Nov 23 04:32:03 localhost nova_compute[229707]: 2025-11-23 09:32:03.478 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:32:03 localhost nova_compute[229707]: 2025-11-23 09:32:03.484 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:32:03 localhost nova_compute[229707]: 2025-11-23 09:32:03.497 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:32:03 localhost nova_compute[229707]: 2025-11-23 09:32:03.500 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:32:03 localhost nova_compute[229707]: 2025-11-23 09:32:03.500 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:32:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21319 DF PROTO=TCP SPT=34290 DPT=9100 SEQ=708702585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C2A100000000001030307) Nov 23 04:32:04 localhost nova_compute[229707]: 2025-11-23 09:32:04.225 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:32:04 localhost nova_compute[229707]: 2025-11-23 09:32:04.226 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:32:04 localhost nova_compute[229707]: 2025-11-23 09:32:04.226 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:32:04 localhost nova_compute[229707]: 
2025-11-23 09:32:04.227 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:32:04 localhost nova_compute[229707]: 2025-11-23 09:32:04.228 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:32:04 localhost systemd[1]: var-lib-containers-storage-overlay-0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96-merged.mount: Deactivated successfully. Nov 23 04:32:04 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully. Nov 23 04:32:04 localhost sshd[240821]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:32:04 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully. Nov 23 04:32:05 localhost systemd[1]: var-lib-containers-storage-overlay-cb3e7a7b413bc69102c7d8435b32176125a43d794f5f87a0ef3f45710221e344-merged.mount: Deactivated successfully. Nov 23 04:32:05 localhost systemd[1]: var-lib-containers-storage-overlay-7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc-merged.mount: Deactivated successfully. Nov 23 04:32:05 localhost systemd[1]: var-lib-containers-storage-overlay-7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc-merged.mount: Deactivated successfully. Nov 23 04:32:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:32:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5205 writes, 665 syncs, 7.83 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 04:32:06 localhost systemd[1]: var-lib-containers-storage-overlay-0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96-merged.mount: Deactivated successfully. Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:32:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12920 DF PROTO=TCP SPT=46134 DPT=9100 SEQ=48729753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C364F0000000001030307) Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. 
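The inventory nova reports to placement above carries total, reserved and allocation_ratio per resource class; the capacity placement schedules against is (total - reserved) * allocation_ratio. A quick check against the exact values logged for provider c90c5769-42ab-40e9-92fc-3d82b4e96052:

# Inventory copied from the nova.scheduler.client.report line above.
INVENTORY = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
}

def schedulable(inv: dict) -> dict:
    """Capacity placement can allocate: (total - reserved) * allocation_ratio."""
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

print(schedulable(INVENTORY))
# -> {'VCPU': 128.0, 'MEMORY_MB': 15226.0, 'DISK_GB': 41.0}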
Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-cb3e7a7b413bc69102c7d8435b32176125a43d794f5f87a0ef3f45710221e344-merged.mount: Deactivated successfully. Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:32:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:08 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:32:08 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:32:08 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:32:09 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:32:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:32:09.713 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:32:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:32:09.714 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:32:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:32:09.714 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:32:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46744 DF PROTO=TCP SPT=51808 DPT=9105 SEQ=752977625 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C40100000000001030307) Nov 23 04:32:10 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully. Nov 23 04:32:10 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:32:10 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. 
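The ceph-osd rocksdb "DUMPING STATS" entries above (and the colour codes on the nova lines) carry control characters escaped as octal: #012 for newline and #033 for ESC. A small decode sketch under that assumption, which restores the multi-line DB Stats block:

import re

# rsyslog-style escaping: '#' followed by three octal digits stands for one byte,
# e.g. #012 -> '\n', #033 -> ESC.
OCTAL = re.compile(r"#([0-7]{3})")

def unescape(msg: str) -> str:
    return OCTAL.sub(lambda m: chr(int(m.group(1), 8)), msg)

SAMPLE = ("#012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval"
          "#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent")
print(unescape(SAMPLE))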
Nov 23 04:32:10 localhost podman[240654]: 2025-11-23 09:31:50.741317171 +0000 UTC m=+0.062918204 image pull quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 Nov 23 04:32:12 localhost systemd[1]: var-lib-containers-storage-overlay-94bc28862446c9d52bea5cb761ece58f4b7ce0b6f4d30585ec973efe7d5007f7-merged.mount: Deactivated successfully. Nov 23 04:32:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46745 DF PROTO=TCP SPT=51808 DPT=9105 SEQ=752977625 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C4FD00000000001030307) Nov 23 04:32:14 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:32:14 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:32:14 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:32:14 localhost systemd[1]: var-lib-containers-storage-overlay-7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc-merged.mount: Deactivated successfully. Nov 23 04:32:15 localhost podman[240860]: 2025-11-23 09:32:15.111620684 +0000 UTC m=+0.385033801 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:32:15 localhost podman[240847]: 2025-11-23 09:32:13.115213328 +0000 UTC m=+0.040014461 image pull quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 Nov 23 04:32:15 localhost podman[240860]: 2025-11-23 09:32:15.120968082 +0000 UTC m=+0.394381209 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:32:15 localhost podman[240860]: unhealthy Nov 23 04:32:15 localhost systemd[1]: var-lib-containers-storage-overlay-94bc28862446c9d52bea5cb761ece58f4b7ce0b6f4d30585ec973efe7d5007f7-merged.mount: Deactivated successfully. Nov 23 04:32:16 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:32:16 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:32:16 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:32:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56068 DF PROTO=TCP SPT=34088 DPT=9882 SEQ=2153358886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C5C920000000001030307) Nov 23 04:32:17 localhost systemd[1]: var-lib-containers-storage-overlay-cb3e7a7b413bc69102c7d8435b32176125a43d794f5f87a0ef3f45710221e344-merged.mount: Deactivated successfully. Nov 23 04:32:17 localhost systemd[1]: var-lib-containers-storage-overlay-7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc-merged.mount: Deactivated successfully. Nov 23 04:32:17 localhost systemd[1]: var-lib-containers-storage-overlay-7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc-merged.mount: Deactivated successfully. Nov 23 04:32:17 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:32:17 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. 
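Just above, podman_exporter's healthcheck is logged as health_status=starting, then the bare "unhealthy" verdict from the healthcheck run, and the transient unit for that run exits with status=1/FAILURE; ceilometer_agent_compute repeats the same pattern a couple of seconds later. A sketch that tallies health_status values per container from journal text, assuming the "name=<container>, health_status=<state>" ordering seen in these entries:

import re
from collections import Counter

# 'container health_status <id> (image=..., name=<container>, health_status=<state>, ...)'
HEALTH = re.compile(r"container health_status .*?name=(?P<name>[\w-]+), health_status=(?P<state>\w+)")

def health_summary(journal_text: str) -> Counter:
    counts = Counter()
    for m in HEALTH.finditer(journal_text):
        counts[(m.group("name"), m.group("state"))] += 1
    return counts

SAMPLE = (
    "container health_status a77ab8 (image=quay.io/navidys/prometheus-podman-exporter, name=podman_exporter, health_status=starting, config_data=...)\n"
    "container health_status 900d43 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller, name=ovn_controller, health_status=healthy, config_data=...)\n"
)
print(health_summary(SAMPLE))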
Nov 23 04:32:17 localhost podman[240881]: 2025-11-23 09:32:17.432750728 +0000 UTC m=+0.921833035 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Nov 23 04:32:17 localhost podman[240881]: 2025-11-23 09:32:17.463378158 +0000 UTC m=+0.952460495 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=edpm) Nov 23 04:32:17 localhost podman[240881]: unhealthy Nov 23 04:32:18 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:18 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:18 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65043 DF PROTO=TCP SPT=51628 DPT=9882 SEQ=4217141120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C640F0000000001030307) Nov 23 04:32:19 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:32:19 localhost systemd[1]: var-lib-containers-storage-overlay-cb3e7a7b413bc69102c7d8435b32176125a43d794f5f87a0ef3f45710221e344-merged.mount: Deactivated successfully. Nov 23 04:32:19 localhost podman[240847]: Nov 23 04:32:19 localhost podman[240847]: 2025-11-23 09:32:19.313339486 +0000 UTC m=+6.238140569 container create 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., vcs-type=git, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 23 04:32:19 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:32:19 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Failed with result 'exit-code'. Nov 23 04:32:19 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. 
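The recurring "var-lib-containers-storage-overlay-<layer>-merged.mount: Deactivated successfully." entries are systemd mount units for podman overlay layers being unmounted; systemd encodes the mount path in the unit name with "/" turned into "-". A sketch that recovers the path, which only works here because these hex layer IDs contain no literal dashes (systemd-escape --unescape --path handles the general case):

def mount_unit_to_path(unit: str) -> str:
    """Turn 'var-lib-containers-storage-overlay-<id>-merged.mount' back into its mount path."""
    # Assumes no escaped characters (e.g. \x2d) in the unit name, which holds for
    # the hex layer IDs seen in this journal.
    if unit.endswith(".mount"):
        unit = unit[: -len(".mount")]
    return "/" + unit.replace("-", "/")

print(mount_unit_to_path(
    "var-lib-containers-storage-overlay-0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96-merged.mount"))
# -> /var/lib/containers/storage/overlay/0ef5cc.../merged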
Nov 23 04:32:20 localhost python3[240603]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 Nov 23 04:32:21 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:32:21 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:32:22 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:32:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46746 DF PROTO=TCP SPT=51808 DPT=9105 SEQ=752977625 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C700F0000000001030307) Nov 23 04:32:22 localhost systemd[1]: var-lib-containers-storage-overlay-3a88026e8435ee6e4a9cdaa4ab5e7c8d8b76dc6fc1517ed344c4771e775bf72d-merged.mount: Deactivated successfully. Nov 23 04:32:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:32:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:32:23 localhost systemd[1]: var-lib-containers-storage-overlay-94bc28862446c9d52bea5cb761ece58f4b7ce0b6f4d30585ec973efe7d5007f7-merged.mount: Deactivated successfully. 
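The ansible-edpm_container_manage debug entry above shows the podman create command that edpm_ansible derives from the container's config_data: environment becomes --env, ports --publish, volumes --volume, net --network, healthcheck.test --healthcheck-command, and config_id/container_name/managed_by/config_data are attached as labels. A simplified reconstruction of that mapping, based only on the flags visible in that one line rather than on the module's actual code (labels, pidfile and logging flags omitted; image digest truncated):

# Subset of the config_data logged above for openstack_network_exporter.
CONFIG = {
    "image": "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e...",  # digest truncated
    "privileged": True,
    "ports": ["9105:9105"],
    "net": "host",
    "environment": {"OS_ENDPOINT_TYPE": "internal"},
    "healthcheck": {"test": "/openstack/healthcheck openstack-netwo"},
    "volumes": ["/proc:/host/proc:ro"],
}

def podman_create_args(name: str, cfg: dict) -> list[str]:
    """Rebuild the flag layout seen in the PODMAN-CONTAINER-DEBUG line (simplified)."""
    args = ["podman", "create", "--name", name]
    for key, value in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    if "healthcheck" in cfg:
        args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
    args += ["--network", cfg.get("net", "bridge")]
    if cfg.get("privileged"):
        args += ["--privileged=True"]
    for port in cfg.get("ports", []):
        args += ["--publish", port]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    args.append(cfg["image"])
    return args

print(" ".join(podman_create_args("openstack_network_exporter", CONFIG)))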
Nov 23 04:32:24 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:32:24 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Nov 23 04:32:24 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Nov 23 04:32:24 localhost podman[240923]: 2025-11-23 09:32:24.662896229 +0000 UTC m=+0.837821009 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 23 04:32:24 localhost podman[240923]: 2025-11-23 09:32:24.702352307 +0000 UTC m=+0.877277157 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 23 04:32:24 localhost podman[240922]: 2025-11-23 09:32:24.7052807 +0000 UTC 
m=+0.883856285 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 23 04:32:24 localhost podman[240922]: 2025-11-23 09:32:24.785812247 +0000 UTC m=+0.964387862 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Nov 23 04:32:25 localhost python3.9[241071]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:32:26 localhost python3.9[241183]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:32:26 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:32:26 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:32:26 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:32:26 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2051 DF PROTO=TCP SPT=43830 DPT=9102 SEQ=3911886751 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C814F0000000001030307) Nov 23 04:32:26 localhost python3.9[241292]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890346.066849-2213-232240732634285/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:32:26 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:32:26 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:32:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:27 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:32:27 localhost python3.9[241347]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:32:27 localhost systemd[1]: Reloading. 
Nov 23 04:32:27 localhost systemd-rc-local-generator[241374]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:32:27 localhost systemd-sysv-generator[241377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:28 localhost python3.9[241437]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:32:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:32:28 localhost systemd[1]: Reloading. Nov 23 04:32:28 localhost systemd-rc-local-generator[241475]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:32:28 localhost systemd-sysv-generator[241478]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
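Note on the records above: the two "Reloading." passes are driven by the ansible-systemd daemon_reload and service-restart tasks logged just before them, and each pass repeats the same generator and unit-file warnings. The "Failed to parse service type, ignoring: notify-reload" messages mean the libvirt unit files declare a service type this systemd build does not recognize, so it falls back to the default type. A minimal Python sketch for surveying which installed unit files declare Type=notify-reload; the search paths and matching logic are illustrative assumptions, only the warning text and unit names come from the log itself:

#!/usr/bin/env python3
"""List unit files that declare Type=notify-reload.

Illustrative sketch: the search paths and substring check are assumptions;
only the warning text and unit names come from the log above.
"""
from pathlib import Path

UNIT_DIRS = [Path("/usr/lib/systemd/system"), Path("/etc/systemd/system")]

def units_with_notify_reload():
    for unit_dir in UNIT_DIRS:
        if not unit_dir.is_dir():
            continue
        for unit in sorted(unit_dir.glob("*.service")):
            try:
                text = unit.read_text(errors="replace")
            except OSError:
                continue
            # A plain substring check is enough for a quick survey.
            if "Type=notify-reload" in text:
                yield unit

if __name__ == "__main__":
    for unit in units_with_notify_reload():
        print(unit)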
Nov 23 04:32:28 localhost podman[241439]: 2025-11-23 09:32:28.190032875 +0000 UTC m=+0.119994567 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:32:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:28 localhost podman[241439]: 2025-11-23 09:32:28.220737086 +0000 UTC m=+0.150698758 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, 
maintainer=The Prometheus Authors ) Nov 23 04:32:28 localhost podman[239764]: time="2025-11-23T09:32:28Z" level=error msg="Getting root fs size for \"2b36fc47679dff5b8c0ba69736d06cab59efafdcb7422ee55e210887be303a2a\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy" Nov 23 04:32:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:32:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:32:28 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:28 localhost systemd[1]: Starting openstack_network_exporter container... Nov 23 04:32:28 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:28 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:32:28 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:32:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7950 DF PROTO=TCP SPT=43684 DPT=9100 SEQ=268061537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C8FBD0000000001030307) Nov 23 04:32:30 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Nov 23 04:32:30 localhost systemd[1]: var-lib-containers-storage-overlay-b82f11e702c10ae17f894fa5e812d3704c88386aaaee36f2042dd2d177fa374d-merged.mount: Deactivated successfully. Nov 23 04:32:30 localhost systemd[1]: var-lib-containers-storage-overlay-b82f11e702c10ae17f894fa5e812d3704c88386aaaee36f2042dd2d177fa374d-merged.mount: Deactivated successfully. Nov 23 04:32:30 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
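Note on the recurring kernel "DROPPING:" entries interleaved with the container records: they are host-firewall log messages for TCP SYNs from 192.168.122.10 toward the exporter ports seen elsewhere in this log (9100 for node_exporter, 9105 for openstack_network_exporter, plus 9102); the firewall rule that emits the DROPPING prefix is not itself shown here. A small Python sketch that pulls the key=value fields out of such lines; the field names (IN, SRC, DST, SPT, DPT, PROTO, ...) are taken directly from the entries, everything else is an assumption:

#!/usr/bin/env python3
"""Parse kernel 'DROPPING:' firewall entries like the ones above."""
import re
import sys

# key=value tokens (a value may be empty, e.g. "OUT=")
KV = re.compile(r"([A-Z]+)=(\S*)")

def parse_drop(line: str) -> dict:
    """Return the key=value fields of one DROPPING entry, or {} if absent."""
    if "DROPPING:" not in line:
        return {}
    fields = dict(KV.findall(line.split("DROPPING:", 1)[1]))
    # Flag-style tokens such as SYN or DF carry no '=' and are skipped here.
    return fields

if __name__ == "__main__":
    for line in sys.stdin:
        f = parse_drop(line)
        if f and f.get("PROTO") == "TCP":
            print(f"{f.get('SRC')} -> {f.get('DST')}:{f.get('DPT')} dropped on {f.get('IN')}")

Fed the entries above, it would report drops such as "192.168.122.10 -> 192.168.122.106:9100 dropped on br-ex", which appear to be blocked scrape attempts against the exporters.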
Nov 23 04:32:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7951 DF PROTO=TCP SPT=43684 DPT=9100 SEQ=268061537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7C93D00000000001030307) Nov 23 04:32:31 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:32:31 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:32:31 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:32 localhost systemd[1]: Started libcrun container. Nov 23 04:32:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd522977c39c68223bc447dc73302c42f966a5d86d4bc46978664b5cc234800/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Nov 23 04:32:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd522977c39c68223bc447dc73302c42f966a5d86d4bc46978664b5cc234800/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff) Nov 23 04:32:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:32:32 localhost podman[241500]: 2025-11-23 09:32:32.047215069 +0000 UTC m=+3.642013740 container init 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, config_id=edpm, com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible) Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *bridge.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *coverage.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *datapath.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *iface.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *memory.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *ovnnorthd.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *ovn.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *ovsdbserver.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *pmd_perf.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *pmd_rxq.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: INFO 09:32:32 main.go:48: registering *vswitch.Collector Nov 23 04:32:32 localhost openstack_network_exporter[241515]: NOTICE 09:32:32 main.go:82: listening on http://:9105/metrics Nov 23 04:32:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
Nov 23 04:32:32 localhost podman[241500]: 2025-11-23 09:32:32.082737053 +0000 UTC m=+3.677535724 container start 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container) Nov 23 04:32:32 localhost podman[241500]: openstack_network_exporter Nov 23 04:32:32 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:32 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:32:32 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:32 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:32 localhost systemd[1]: Started openstack_network_exporter container. 
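Note: once "Started openstack_network_exporter container." appears and the exporter logs "listening on http://:9105/metrics", the endpoint should be reachable on the host network (the container runs with 'net': 'host' per its config_data). A hedged check, assuming the host can reach its own port 9105 over plain HTTP; the port and /metrics path come from the log, the scheme and loopback address are assumptions:

#!/usr/bin/env python3
"""Fetch the first few lines from the exporter's metrics endpoint."""
import urllib.request

URL = "http://127.0.0.1:9105/metrics"

def head_of_metrics(url: str = URL, lines: int = 10) -> list[str]:
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return body.splitlines()[:lines]

if __name__ == "__main__":
    for line in head_of_metrics():
        print(line)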
Nov 23 04:32:32 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:32 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:32 localhost podman[241526]: 2025-11-23 09:32:32.748677965 +0000 UTC m=+0.662475074 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers) Nov 23 04:32:32 localhost podman[241526]: 2025-11-23 09:32:32.776432313 +0000 UTC m=+0.690229412 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers) Nov 23 04:32:33 localhost podman[239764]: time="2025-11-23T09:32:33Z" level=error msg="Getting root fs size for \"2f9868e7d3657746b7fdc2340c611309cd7b6ce6715437f9d30f58b02abe9893\": unmounting layer c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6: replacing mount point \"/var/lib/containers/storage/overlay/c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6/merged\": device or resource busy" Nov 23 04:32:33 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
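Note: the podman "replacing mount point ... device or resource busy" errors and the stream of "var-lib-containers-storage-overlay-...-merged.mount: Deactivated successfully." records both concern overlay layer mounts under /var/lib/containers/storage/overlay. A minimal sketch for listing which overlay "merged" mount points are still active at a given moment; it reads the standard /proc/self/mounts table and filters on the same path prefix that appears in the errors above:

#!/usr/bin/env python3
"""List active overlay mounts under containers/storage."""
STORAGE_PREFIX = "/var/lib/containers/storage/overlay/"

def active_overlay_mounts(mounts_file: str = "/proc/self/mounts") -> list[str]:
    active = []
    with open(mounts_file) as fh:
        for entry in fh:
            fields = entry.split()
            if len(fields) < 3:
                continue
            mountpoint, fstype = fields[1], fields[2]
            if fstype == "overlay" and mountpoint.startswith(STORAGE_PREFIX):
                active.append(mountpoint)
    return active

if __name__ == "__main__":
    for mountpoint in active_overlay_mounts():
        print(mountpoint)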
Nov 23 04:32:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:32:33 localhost podman[241565]: 2025-11-23 09:32:33.396013269 +0000 UTC m=+0.085154115 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:32:33 localhost podman[241565]: 2025-11-23 09:32:33.411295872 +0000 UTC m=+0.100436728 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:32:33 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:32:33 localhost systemd[1]: var-lib-containers-storage-overlay-ab41077e04905cd2ed47da0e447cf096133dba9a29e9494f8fcc86ce48952daa-merged.mount: Deactivated successfully. Nov 23 04:32:34 localhost python3.9[241676]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:32:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32024 DF PROTO=TCP SPT=41638 DPT=9100 SEQ=2149339463 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7CA00F0000000001030307) Nov 23 04:32:34 localhost systemd[1]: Stopping openstack_network_exporter container... Nov 23 04:32:34 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:32:34 localhost systemd[1]: var-lib-containers-storage-overlay-3a88026e8435ee6e4a9cdaa4ab5e7c8d8b76dc6fc1517ed344c4771e775bf72d-merged.mount: Deactivated successfully. Nov 23 04:32:34 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:32:34 localhost systemd[1]: libpod-0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.scope: Deactivated successfully. Nov 23 04:32:34 localhost podman[241680]: 2025-11-23 09:32:34.743790616 +0000 UTC m=+0.326756846 container died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:32:34 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.timer: Deactivated successfully. Nov 23 04:32:34 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:32:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9-userdata-shm.mount: Deactivated successfully. Nov 23 04:32:36 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:32:36 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Nov 23 04:32:37 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Nov 23 04:32:37 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
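Note: each "Started /usr/bin/podman healthcheck run <id>" record and the matching "<id>.timer: Deactivated successfully." later on come from the transient systemd timer/service pairs podman creates per container to drive its healthcheck; the unit names are simply the 64-character container IDs visible in the log. A hedged way to list them, assuming systemctl is available and that naming convention holds on this host:

#!/usr/bin/env python3
"""List podman healthcheck timer units (named after container IDs)."""
import re
import subprocess

HEX_TIMER = re.compile(r"^[0-9a-f]{64}\.timer$")

def healthcheck_timers() -> list[str]:
    out = subprocess.run(
        ["systemctl", "list-units", "--type=timer", "--all",
         "--no-legend", "--plain"],
        capture_output=True, text=True, check=False,
    ).stdout
    timers = []
    for line in out.splitlines():
        parts = line.split()
        if parts and HEX_TIMER.match(parts[0]):
            timers.append(parts[0])
    return timers

if __name__ == "__main__":
    for unit in healthcheck_timers():
        print(unit)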
Nov 23 04:32:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7953 DF PROTO=TCP SPT=43684 DPT=9100 SEQ=268061537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7CAB8F0000000001030307) Nov 23 04:32:37 localhost podman[241680]: 2025-11-23 09:32:37.571944033 +0000 UTC m=+3.154910243 container cleanup 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public) Nov 23 04:32:37 localhost podman[241680]: openstack_network_exporter Nov 23 04:32:37 localhost podman[241693]: 2025-11-23 09:32:37.592184334 +0000 UTC m=+2.845688514 container cleanup 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, managed_by=edpm_ansible, 
vendor=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9) Nov 23 04:32:37 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:32:37 localhost systemd[1]: var-lib-containers-storage-overlay-7dd522977c39c68223bc447dc73302c42f966a5d86d4bc46978664b5cc234800-merged.mount: Deactivated successfully. Nov 23 04:32:39 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:32:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:39 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:32:39 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 23 04:32:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7908 DF PROTO=TCP SPT=35570 DPT=9105 SEQ=1288260410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7CB54F0000000001030307) Nov 23 04:32:39 localhost systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Nov 23 04:32:39 localhost podman[241705]: 2025-11-23 09:32:39.961011654 +0000 UTC m=+0.072485803 container cleanup 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, distribution-scope=public, name=ubi9-minimal) Nov 23 04:32:39 localhost podman[241705]: openstack_network_exporter Nov 23 04:32:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Nov 23 04:32:41 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:41 localhost systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'. Nov 23 04:32:41 localhost systemd[1]: Stopped openstack_network_exporter container. Nov 23 04:32:41 localhost systemd[1]: Starting openstack_network_exporter container... Nov 23 04:32:41 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:32:41 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:41 localhost systemd[1]: Started libcrun container. Nov 23 04:32:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd522977c39c68223bc447dc73302c42f966a5d86d4bc46978664b5cc234800/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Nov 23 04:32:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd522977c39c68223bc447dc73302c42f966a5d86d4bc46978664b5cc234800/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff) Nov 23 04:32:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:32:42 localhost podman[241718]: 2025-11-23 09:32:42.026499591 +0000 UTC m=+0.820593234 container init 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down 
image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *bridge.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *coverage.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *datapath.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *iface.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *memory.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *ovnnorthd.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *ovn.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *ovsdbserver.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *pmd_perf.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *pmd_rxq.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: INFO 09:32:42 main.go:48: registering *vswitch.Collector Nov 23 04:32:42 localhost openstack_network_exporter[241732]: NOTICE 09:32:42 main.go:82: listening on http://:9105/metrics Nov 23 04:32:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:32:42 localhost podman[241718]: 2025-11-23 09:32:42.06724219 +0000 UTC m=+0.861335833 container start 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 23 04:32:42 localhost podman[241718]: openstack_network_exporter Nov 23 04:32:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:32:42 localhost systemd[1]: Started openstack_network_exporter container. Nov 23 04:32:42 localhost podman[241742]: 2025-11-23 09:32:42.399831389 +0000 UTC m=+0.326866909 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container) Nov 23 04:32:42 localhost podman[241742]: 2025-11-23 09:32:42.408637307 +0000 UTC m=+0.335672787 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:32:43 localhost systemd[1]: tmp-crun.Z6wufi.mount: Deactivated successfully. Nov 23 04:32:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7909 DF PROTO=TCP SPT=35570 DPT=9105 SEQ=1288260410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7CC50F0000000001030307) Nov 23 04:32:44 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Nov 23 04:32:44 localhost systemd[1]: var-lib-containers-storage-overlay-b82f11e702c10ae17f894fa5e812d3704c88386aaaee36f2042dd2d177fa374d-merged.mount: Deactivated successfully. Nov 23 04:32:44 localhost systemd[1]: var-lib-containers-storage-overlay-b82f11e702c10ae17f894fa5e812d3704c88386aaaee36f2042dd2d177fa374d-merged.mount: Deactivated successfully. Nov 23 04:32:44 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:32:45 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:32:45 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:45 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
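The DROPPING: entries in this stretch of the log are netfilter log messages for TCP SYN packets arriving on br-ex for the exporter ports (9105, 9882, 9102, 9100). A small parsing sketch for pulling the key=value fields out of such a line; the helper name and regex are illustrative, not part of any shipped tool:

    import re

    # Illustrative parser for kernel "DROPPING:" netfilter log entries.
    FIELDS = re.compile(r"(\w+)=([^ ]+)")

    def parse_drop(entry: str) -> dict:
        # Keep only the key=value pairs that follow the "DROPPING:" marker.
        _, _, rest = entry.partition("DROPPING:")
        return dict(FIELDS.findall(rest))

    sample = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 SRC=192.168.122.10 "
              "DST=192.168.122.106 PROTO=TCP SPT=35570 DPT=9105 SYN")
    fields = parse_drop(sample)
    print(fields["SRC"], "->", fields["DST"], "port", fields["DPT"])  # 192.168.122.10 -> 192.168.122.106 port 9105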
Nov 23 04:32:45 localhost podman[239764]: time="2025-11-23T09:32:45Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966/merged: invalid argument" Nov 23 04:32:45 localhost podman[239764]: time="2025-11-23T09:32:45Z" level=error msg="Getting root fs size for \"36efc4ea1ac1592299c7269c33478b2e4d325ab185e852795f3eaa0ef1cc2de5\": getting diffsize of layer \"a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": creating overlay mount to /var/lib/containers/storage/overlay/a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/MKSV6N37Q3TVCBD4QIGHCALG6Y:/var/lib/containers/storage/overlay/l/UWKSHRGV4S6O6XDE2QFRM5ZKX7,upperdir=/var/lib/containers/storage/overlay/a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966/diff,workdir=/var/lib/containers/storage/overlay/a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966/work,nodev,metacopy=on\": no such file or directory" Nov 23 04:32:45 localhost systemd[1]: var-lib-containers-storage-overlay-5772d18a828f26c879d25d0b8106c8b9bfd3703359b5039935e3e876ecde7403-merged.mount: Deactivated successfully. Nov 23 04:32:46 localhost systemd[1]: var-lib-containers-storage-overlay-b513f0c97c82d1e5153446ab8d6ba6a01710ca2380b841dfea84b567972b770b-merged.mount: Deactivated successfully. Nov 23 04:32:46 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:46 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:32:47 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:32:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33723 DF PROTO=TCP SPT=52064 DPT=9882 SEQ=3032484333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7CD1C30000000001030307) Nov 23 04:32:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
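The podman error above includes the full overlayfs mount_data string it failed to mount ("no such file or directory"). A stdlib sketch for splitting such a string into its lowerdir/upperdir/workdir parts so each path can be checked on disk; purely illustrative:

    import os

    def split_mount_data(mount_data: str) -> dict:
        # overlayfs options are comma separated; lowerdir itself is a colon-separated list.
        opts = {}
        for item in mount_data.split(","):
            key, _, value = item.partition("=")
            opts[key] = value.split(":") if key == "lowerdir" else value
        return opts

    data = ("lowerdir=/var/lib/containers/storage/overlay/l/MKSV6N37Q3TVCBD4QIGHCALG6Y:"
            "/var/lib/containers/storage/overlay/l/UWKSHRGV4S6O6XDE2QFRM5ZKX7,"
            "upperdir=/var/lib/containers/storage/overlay/a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966/diff,"
            "workdir=/var/lib/containers/storage/overlay/a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966/work,"
            "nodev,metacopy=on")
    opts = split_mount_data(data)
    for path in opts["lowerdir"] + [opts["upperdir"], opts["workdir"]]:
        print(path, "exists" if os.path.exists(path) else "MISSING")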
Nov 23 04:32:47 localhost podman[241778]: 2025-11-23 09:32:47.65853721 +0000 UTC m=+0.089799461 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:32:47 localhost podman[241778]: 2025-11-23 09:32:47.668437422 +0000 UTC m=+0.099699673 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:32:47 localhost podman[241778]: unhealthy Nov 23 04:32:48 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:32:48 localhost systemd[1]: var-lib-containers-storage-overlay-ab41077e04905cd2ed47da0e447cf096133dba9a29e9494f8fcc86ce48952daa-merged.mount: Deactivated successfully. Nov 23 04:32:48 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:32:49 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:32:49 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:32:49 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:32:49 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. Nov 23 04:32:49 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
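The podman_exporter health check above reports "unhealthy", so the transient a77ab826...service unit exits with status=1 and systemd records the 'exit-code' failure. The same check can be rerun by hand; a subprocess sketch, assuming the container name taken from the log:

    import subprocess

    # "podman healthcheck run" exits 0 when the container's health check passes
    # and non-zero otherwise, which is what the transient systemd unit propagates.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "podman_exporter"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")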
Nov 23 04:32:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10434 DF PROTO=TCP SPT=52134 DPT=9102 SEQ=2600517548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7CDAA30000000001030307) Nov 23 04:32:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:32:49 localhost systemd[1]: tmp-crun.AZVj4x.mount: Deactivated successfully. Nov 23 04:32:49 localhost podman[241799]: 2025-11-23 09:32:49.681856693 +0000 UTC m=+0.113883354 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:32:49 localhost podman[241799]: 2025-11-23 09:32:49.713478822 +0000 UTC m=+0.145505443 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute) Nov 23 04:32:49 localhost podman[241799]: unhealthy Nov 23 04:32:51 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:51 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:32:51 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:32:52 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:32:52 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:52 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:32:52 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Failed with result 'exit-code'. Nov 23 04:32:52 localhost python3.9[241962]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 23 04:32:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7910 DF PROTO=TCP SPT=35570 DPT=9105 SEQ=1288260410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7CE60F0000000001030307) Nov 23 04:32:53 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:53 localhost python3.9[242072]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman Nov 23 04:32:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:53 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
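The ansible.builtin.find call above lists only the immediate sub-directories of /var/lib/openstack/healthchecks/ (file_type=directory, recurse=False). A rough stdlib equivalent of what that task enumerates, for illustration:

    from pathlib import Path

    # One entry per health-check mount directory (ovn_controller, node_exporter, ...).
    base = Path("/var/lib/openstack/healthchecks")
    for entry in sorted(p for p in base.iterdir() if p.is_dir()):
        print(entry)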
Nov 23 04:32:54 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:32:54 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:55 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:32:55 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:55 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:55 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:32:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:32:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10437 DF PROTO=TCP SPT=52134 DPT=9102 SEQ=2600517548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7CF6500000000001030307) Nov 23 04:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
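The repeated overlayfs warnings ("lowerdir is in-use as upperdir/workdir of another mount") mean two overlay mounts are sharing a directory. A sketch that scans /proc/mounts for overlay entries and groups them by upperdir/workdir so shared directories stand out; reading /proc/mounts is a generic approach, not something taken from podman here:

    from collections import defaultdict

    usage = defaultdict(list)
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype, options = line.split()[:4]
            if fstype != "overlay":
                continue
            for opt in options.split(","):
                key, _, value = opt.partition("=")
                if key in ("upperdir", "workdir"):
                    usage[value].append(mountpoint)

    for directory, mountpoints in usage.items():
        if len(mountpoints) > 1:
            print(f"{directory} is used by {len(mountpoints)} mounts: {mountpoints}")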
Nov 23 04:32:57 localhost podman[242231]: 2025-11-23 09:32:57.684185518 +0000 UTC m=+0.095517132 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118) Nov 23 04:32:57 localhost podman[242231]: 2025-11-23 09:32:57.690931811 +0000 UTC m=+0.102263395 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 23 04:32:57 localhost python3.9[242232]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:32:58 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:32:58 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-5772d18a828f26c879d25d0b8106c8b9bfd3703359b5039935e3e876ecde7403-merged.mount: Deactivated successfully. Nov 23 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-6c7788a714f783bcd626d46acd079248804e4957a18f46d9597110fed4f235e9-merged.mount: Deactivated successfully. Nov 23 04:32:59 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:32:59 localhost podman[242233]: 2025-11-23 09:32:59.382632026 +0000 UTC m=+1.789147988 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:32:59 localhost systemd[1]: Started libpod-conmon-900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.scope. 
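The podman_container_exec task above runs id -u inside ovn_controller to learn which UID the container uses. The equivalent by hand, sketched with subprocess:

    import subprocess

    # Run "id -u" inside the ovn_controller container and read the numeric UID back.
    result = subprocess.run(
        ["podman", "exec", "ovn_controller", "id", "-u"],
        capture_output=True, text=True, check=True,
    )
    print("ovn_controller runs as UID", int(result.stdout.strip()))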
Nov 23 04:32:59 localhost podman[242260]: 2025-11-23 09:32:59.455295385 +0000 UTC m=+1.648926114 container exec 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:32:59 localhost podman[242233]: 2025-11-23 09:32:59.459332862 +0000 UTC m=+1.865848824 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 04:32:59 localhost podman[242260]: 2025-11-23 09:32:59.489383853 +0000 UTC m=+1.683014542 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-5772d18a828f26c879d25d0b8106c8b9bfd3703359b5039935e3e876ecde7403-merged.mount: Deactivated successfully. Nov 23 04:33:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19267 DF PROTO=TCP SPT=60830 DPT=9100 SEQ=2161665102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D04EE0000000001030307) Nov 23 04:33:00 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:00 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:33:00 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
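The config_data label on the ovn_controller events records the dictionary edpm_ansible used when creating the container. The obvious fields map onto podman run flags roughly as below; this is an illustrative sketch only, not the exact command edpm_ansible generates (healthcheck, depends_on and the remaining keys are ignored here):

    # Trimmed config_data based on the label above.
    config_data = {
        "image": "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
        "net": "host",
        "privileged": True,
        "restart": "always",
        "user": "root",
        "volumes": ["/lib/modules:/lib/modules:ro", "/run:/run"],
    }

    cmd = ["podman", "run", "--detach", "--name", "ovn_controller",
           "--network", config_data["net"],
           "--restart", config_data["restart"],
           "--user", config_data["user"]]
    if config_data.get("privileged"):
        cmd.append("--privileged")
    for volume in config_data["volumes"]:
        cmd += ["--volume", volume]
    cmd.append(config_data["image"])
    print(" ".join(cmd))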
Nov 23 04:33:00 localhost nova_compute[229707]: 2025-11-23 09:33:00.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:33:00 localhost nova_compute[229707]: 2025-11-23 09:33:00.947 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:33:00 localhost nova_compute[229707]: 2025-11-23 09:33:00.947 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:33:00 localhost nova_compute[229707]: 2025-11-23 09:33:00.969 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:33:00 localhost nova_compute[229707]: 2025-11-23 09:33:00.969 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:33:00 localhost nova_compute[229707]: 2025-11-23 09:33:00.970 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:33:00 localhost nova_compute[229707]: 2025-11-23 09:33:00.970 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:33:00 localhost nova_compute[229707]: 2025-11-23 09:33:00.970 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:33:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19268 DF PROTO=TCP SPT=60830 DPT=9100 SEQ=2161665102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D090F0000000001030307) Nov 23 04:33:01 localhost nova_compute[229707]: 2025-11-23 09:33:01.433 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:33:01 localhost 
nova_compute[229707]: 2025-11-23 09:33:01.634 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:33:01 localhost nova_compute[229707]: 2025-11-23 09:33:01.637 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=13051MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:33:01 localhost nova_compute[229707]: 2025-11-23 09:33:01.637 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:33:01 localhost nova_compute[229707]: 2025-11-23 09:33:01.638 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:33:01 localhost nova_compute[229707]: 2025-11-23 09:33:01.710 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated 
vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:33:01 localhost nova_compute[229707]: 2025-11-23 09:33:01.710 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:33:01 localhost nova_compute[229707]: 2025-11-23 09:33:01.726 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:33:01 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:01 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:02 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:02 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:33:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
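The resource tracker shells out to ceph df (the oslo_concurrency.processutils entries above), apparently to size its Ceph-backed disk capacity. A sketch of reading that JSON back; the field names ("stats", "total_bytes", "total_avail_bytes") are assumptions about the ceph CLI output, not taken from this log:

    import json
    import subprocess

    # Same invocation the resource tracker logs, including the cephx id and conf path.
    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(raw).get("stats", {})
    total_gib = stats.get("total_bytes", 0) / 2**30
    avail_gib = stats.get("total_avail_bytes", 0) / 2**30
    print(f"cluster: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB")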
Nov 23 04:33:02 localhost podman[242301]: 2025-11-23 09:33:02.211191147 +0000 UTC m=+1.394933899 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:33:02 localhost nova_compute[229707]: 2025-11-23 09:33:02.233 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:33:02 localhost nova_compute[229707]: 2025-11-23 09:33:02.239 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:33:02 localhost podman[242301]: 2025-11-23 09:33:02.246324178 +0000 UTC m=+1.430066910 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:33:02 localhost nova_compute[229707]: 2025-11-23 09:33:02.251 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:33:02 localhost nova_compute[229707]: 2025-11-23 09:33:02.253 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:33:02 localhost nova_compute[229707]: 2025-11-23 09:33:02.254 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:33:02 localhost python3.9[242477]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:33:02 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:02 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:02 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
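The inventory dict logged for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 is enough to reproduce the capacities placement schedules against. A short worked sketch; the capacity formula below, (total - reserved) * allocation_ratio, is the standard placement one, restated here rather than quoted from this log:

    # Inventory values copied from the report client entry above.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for resource_class, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(resource_class, capacity)
    # VCPU 128, MEMORY_MB 15226, DISK_GB 41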
Nov 23 04:33:03 localhost nova_compute[229707]: 2025-11-23 09:33:03.254 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:33:03 localhost nova_compute[229707]: 2025-11-23 09:33:03.255 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:33:03 localhost nova_compute[229707]: 2025-11-23 09:33:03.255 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:33:03 localhost nova_compute[229707]: 2025-11-23 09:33:03.267 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:33:03 localhost nova_compute[229707]: 2025-11-23 09:33:03.267 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:33:03 localhost nova_compute[229707]: 2025-11-23 09:33:03.267 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:33:03 localhost nova_compute[229707]: 2025-11-23 09:33:03.268 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:33:03 localhost nova_compute[229707]: 2025-11-23 09:33:03.268 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:33:03 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:03 localhost nova_compute[229707]: 2025-11-23 09:33:03.947 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:33:03 localhost nova_compute[229707]: 2025-11-23 09:33:03.948 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:33:03 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
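Every run_periodic_tasks entry above comes from oslo.service's periodic task machinery. A minimal, self-contained sketch of that pattern, assuming oslo.service and oslo.config are installed; the task name, spacing and body are invented, only the decorator and base class come from the library:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        # Minimal stand-in for the kind of manager nova-compute runs.
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_something(self, context):
            # Placeholder body; nova's real tasks audit resources, heal caches, etc.
            print("periodic task ran")

    Manager().run_periodic_tasks(context=None)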
Nov 23 04:33:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12923 DF PROTO=TCP SPT=46134 DPT=9100 SEQ=48729753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D140F0000000001030307) Nov 23 04:33:04 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:04 localhost systemd[1]: libpod-conmon-900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.scope: Deactivated successfully. Nov 23 04:33:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:04 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:33:04 localhost systemd[1]: Started libpod-conmon-900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.scope. Nov 23 04:33:04 localhost podman[242478]: 2025-11-23 09:33:04.422129024 +0000 UTC m=+1.567950612 container exec 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller) Nov 23 04:33:04 localhost podman[242478]: 2025-11-23 09:33:04.431967745 +0000 UTC m=+1.577789313 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:33:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:33:04 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:05 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:05 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:05 localhost podman[242507]: 2025-11-23 09:33:05.694063643 +0000 UTC m=+0.878124335 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:33:05 localhost podman[242507]: 2025-11-23 09:33:05.71040084 +0000 UTC m=+0.894461532 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e 
(image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=multipathd) Nov 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:06 localhost python3.9[242636]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-b513f0c97c82d1e5153446ab8d6ba6a01710ca2380b841dfea84b567972b770b-merged.mount: Deactivated successfully. Nov 23 04:33:06 localhost systemd[1]: libpod-conmon-900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.scope: Deactivated successfully. Nov 23 04:33:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:06 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 04:33:07 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:07 localhost python3.9[242746]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman Nov 23 04:33:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19270 DF PROTO=TCP SPT=60830 DPT=9100 SEQ=2161665102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D20D00000000001030307) Nov 23 04:33:09 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:09 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:33:09.714 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:33:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:33:09.714 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:33:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:33:09.715 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:33:09 localhost systemd[1]: var-lib-containers-storage-overlay-bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f-merged.mount: Deactivated successfully. Nov 23 04:33:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46237 DF PROTO=TCP SPT=35948 DPT=9105 SEQ=1040213924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D2A8F0000000001030307) Nov 23 04:33:11 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:33:11 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:33:11 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:11 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 23 04:33:12 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.571 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost 
ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:33:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:33:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Nov 23 04:33:12 localhost python3.9[242869]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:33:13 localhost systemd[1]: Started libpod-conmon-219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.scope. Nov 23 04:33:13 localhost podman[242870]: 2025-11-23 09:33:13.117132728 +0000 UTC m=+0.136228540 container exec 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:33:13 localhost podman[242870]: 2025-11-23 09:33:13.149406008 +0000 UTC m=+0.168501850 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Nov 23 04:33:13 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:13 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:13 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:13 localhost podman[239764]: time="2025-11-23T09:33:13Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged: invalid argument" Nov 23 04:33:13 localhost podman[239764]: time="2025-11-23T09:33:13Z" level=error msg="Getting root fs size for \"4a4a0ffd3bb03912fba6f6708c66612cdb602e73e5bdedf4fccb64b352501442\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": creating overlay mount to /var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/UWKSHRGV4S6O6XDE2QFRM5ZKX7,upperdir=/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/diff,workdir=/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/work,nodev,metacopy=on\": no such file or directory" Nov 23 04:33:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46238 DF PROTO=TCP SPT=35948 DPT=9105 SEQ=1040213924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D3A4F0000000001030307) Nov 23 04:33:14 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:14 localhost python3.9[243008]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:33:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 23 04:33:14 localhost systemd[1]: libpod-conmon-219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.scope: Deactivated successfully. Nov 23 04:33:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:14 localhost systemd[1]: Started libpod-conmon-219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.scope. Nov 23 04:33:14 localhost podman[243009]: 2025-11-23 09:33:14.801106588 +0000 UTC m=+0.607036081 container exec 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Nov 23 04:33:14 localhost podman[243009]: 2025-11-23 09:33:14.832378147 +0000 UTC m=+0.638307600 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Nov 23 04:33:15 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:33:15 localhost systemd[1]: var-lib-containers-storage-overlay-01a78c4dbe79ec91dd17b016c5deea447203862889bb5c5908b4f002506791c6-merged.mount: Deactivated successfully. Nov 23 04:33:15 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:33:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19271 DF PROTO=TCP SPT=60830 DPT=9100 SEQ=2161665102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D42100000000001030307) Nov 23 04:33:16 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:33:16 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:33:16 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:33:16 localhost podman[243038]: 2025-11-23 09:33:16.534943315 +0000 UTC m=+0.717419231 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc.) Nov 23 04:33:16 localhost podman[243038]: 2025-11-23 09:33:16.544326642 +0000 UTC m=+0.726802538 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.expose-services=, config_id=edpm, version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9) Nov 23 04:33:17 localhost python3.9[243167]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:33:17 localhost python3.9[243277]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman Nov 23 04:33:18 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:18 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:33:18 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:33:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33728 DF PROTO=TCP SPT=52064 DPT=9882 SEQ=3032484333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D4E0F0000000001030307) Nov 23 04:33:18 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:19 localhost systemd[1]: var-lib-containers-storage-overlay-6c7788a714f783bcd626d46acd079248804e4957a18f46d9597110fed4f235e9-merged.mount: Deactivated successfully. Nov 23 04:33:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:19 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:33:19 localhost systemd[1]: libpod-conmon-219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.scope: Deactivated successfully. Nov 23 04:33:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:33:20 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:20 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:20 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:21 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:21 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46239 DF PROTO=TCP SPT=35948 DPT=9105 SEQ=1040213924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D5A0F0000000001030307) Nov 23 04:33:22 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:22 localhost podman[243291]: 2025-11-23 09:33:22.118249602 +0000 UTC m=+2.296313758 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:33:22 localhost podman[243291]: 2025-11-23 09:33:22.156312986 +0000 UTC m=+2.334377142 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:33:22 localhost podman[243291]: unhealthy Nov 23 04:33:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:33:22 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:24 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:24 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:24 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:24 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:33:24 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. Nov 23 04:33:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:33:24 localhost podman[243315]: 2025-11-23 09:33:24.63812465 +0000 UTC m=+2.065424866 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:33:24 localhost podman[243315]: 2025-11-23 09:33:24.674050019 +0000 UTC m=+2.101350245 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 23 04:33:24 localhost podman[243315]: unhealthy Nov 23 04:33:25 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:25 localhost python3.9[243442]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:33:25 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:25 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:25 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:25 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:33:25 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Failed with result 'exit-code'. Nov 23 04:33:25 localhost systemd[1]: Started libpod-conmon-7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.scope. Nov 23 04:33:25 localhost podman[243443]: 2025-11-23 09:33:25.836692872 +0000 UTC m=+0.531523577 container exec 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd) Nov 23 04:33:25 localhost podman[243443]: 2025-11-23 09:33:25.841474403 +0000 UTC m=+0.536305148 container exec_died 
7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Nov 23 04:33:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37602 DF PROTO=TCP SPT=36460 DPT=9102 SEQ=1813137675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D6B900000000001030307) Nov 23 04:33:26 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:26 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:26 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:27 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:33:27 localhost systemd[1]: var-lib-containers-storage-overlay-3a17eac87cb600b8517e7565ff433f093958b0f7f7912bbcd2309c83781f31a6-merged.mount: Deactivated successfully. Nov 23 04:33:27 localhost systemd[1]: var-lib-containers-storage-overlay-3a17eac87cb600b8517e7565ff433f093958b0f7f7912bbcd2309c83781f31a6-merged.mount: Deactivated successfully. Nov 23 04:33:28 localhost python3.9[243579]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:33:28 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 23 04:33:28 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 23 04:33:28 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:33:28 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:33:29 localhost systemd[1]: libpod-conmon-7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.scope: Deactivated successfully. Nov 23 04:33:29 localhost systemd[1]: Started libpod-conmon-7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.scope. Nov 23 04:33:29 localhost podman[243580]: 2025-11-23 09:33:29.124204423 +0000 UTC m=+0.811229790 container exec 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:33:29 localhost podman[243580]: 2025-11-23 09:33:29.15847442 +0000 UTC m=+0.845499787 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible) Nov 23 04:33:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:33:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63271 DF PROTO=TCP SPT=40578 DPT=9100 SEQ=2004364646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D7A1E0000000001030307) Nov 23 04:33:30 localhost sshd[243618]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:33:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63272 DF PROTO=TCP SPT=40578 DPT=9100 SEQ=2004364646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D7E0F0000000001030307) Nov 23 04:33:31 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:31 localhost systemd[1]: var-lib-containers-storage-overlay-bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f-merged.mount: Deactivated successfully. Nov 23 04:33:31 localhost systemd[1]: var-lib-containers-storage-overlay-bd7c294b0b164ca1fe098d3dd445b687fad9bf3828ad00cdfc5c993d3ad1f67f-merged.mount: Deactivated successfully. 
Nov 23 04:33:31 localhost podman[243606]: 2025-11-23 09:33:31.922267938 +0000 UTC m=+2.456622424 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Nov 23 04:33:31 localhost podman[243606]: 2025-11-23 09:33:31.953327685 +0000 UTC m=+2.487682141 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:33:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:33:32 localhost python3.9[243739]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:33:33 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:33:33 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:33 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:33:33 localhost systemd[1]: libpod-conmon-7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.scope: Deactivated successfully. Nov 23 04:33:33 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:33 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:33 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. 
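The `ansible-ansible.builtin.file` invocation above asks for /var/lib/openstack/healthchecks/multipathd to exist as a root-owned directory with mode 0700, applied recursively. A rough Python equivalent of that request (a sketch of the intent, not the module's actual implementation):

```python
import os

# Sketch of the logged ansible.builtin.file call:
# path=/var/lib/openstack/healthchecks/multipathd, state=directory,
# owner=0, group=0, mode=0700, recurse=True. Needs root to run.
def ensure_tree(path: str, uid: int, gid: int, mode: int) -> None:
    os.makedirs(path, mode=mode, exist_ok=True)
    for dirpath, _dirnames, filenames in os.walk(path):
        os.chown(dirpath, uid, gid)
        os.chmod(dirpath, mode)
        for name in filenames:
            full = os.path.join(dirpath, name)
            os.chown(full, uid, gid)
            os.chmod(full, mode)

if __name__ == "__main__":
    ensure_tree("/var/lib/openstack/healthchecks/multipathd", 0, 0, 0o700)
```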
Nov 23 04:33:33 localhost podman[243738]: 2025-11-23 09:33:33.400313005 +0000 UTC m=+1.077365603 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 04:33:33 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:33:33 localhost podman[243738]: 2025-11-23 09:33:33.501841285 +0000 UTC m=+1.178893823 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible) Nov 23 04:33:33 localhost python3.9[243873]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman Nov 23 04:33:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7956 DF PROTO=TCP SPT=43684 DPT=9100 SEQ=268061537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 
OPT (020405500402080A6B7D8A0F0000000001030307) Nov 23 04:33:34 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:33:34 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:34 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:36 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:36 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:36 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:36 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:33:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:36 localhost podman[243882]: 2025-11-23 09:33:36.636111579 +0000 UTC m=+2.073231655 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:33:36 localhost podman[243882]: 2025-11-23 09:33:36.654351243 +0000 UTC m=+2.091471369 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:33:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63274 DF PROTO=TCP SPT=40578 DPT=9100 SEQ=2004364646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D95CF0000000001030307) Nov 23 04:33:37 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:37 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:37 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:38 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:38 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:39 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:39 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
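For context on the node_exporter events above, the `config_data` that podman records maps fairly directly onto podman CLI flags. The sketch below is a loose, abbreviated reconstruction of that mapping (the real invocation is rendered by edpm_ansible and is not shown in this journal):

```python
# Abbreviated subset of the node_exporter config_data captured above,
# translated into a hypothetical "podman run" argument list.
config = {
    "image": "quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c",
    "restart": "always",
    "user": "root",
    "privileged": True,
    "ports": ["9100:9100"],
    "net": "host",
    "command": ["--web.disable-exporter-metrics", "--collector.systemd"],
    "volumes": ["/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw"],
}

def to_podman_args(name: str, cfg: dict) -> list[str]:
    args = ["podman", "run", "--name", name, "--detach"]
    if cfg.get("privileged"):
        args.append("--privileged")
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    if cfg.get("net") == "host":
        # published ports are irrelevant when the container shares host networking
        args += ["--network", "host"]
    else:
        for port in cfg.get("ports", []):
            args += ["--publish", port]
    args += ["--restart", cfg.get("restart", "no")]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    args.append(cfg["image"])
    args += cfg.get("command", [])
    return args

print(" ".join(to_podman_args("node_exporter", config)))
```

Only a subset of the logged command flags and volumes is carried over here to keep the example short.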
Nov 23 04:33:39 localhost podman[243905]: 2025-11-23 09:33:39.153971548 +0000 UTC m=+2.470939275 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:33:39 localhost podman[243905]: 2025-11-23 09:33:39.203384992 +0000 UTC m=+2.520352769 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 04:33:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:39 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:33:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:39 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3379 DF PROTO=TCP SPT=47242 DPT=9105 SEQ=2911576324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7D9FCF0000000001030307) Nov 23 04:33:40 localhost python3.9[244031]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:33:40 localhost systemd[1]: Started libpod-conmon-2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.scope. Nov 23 04:33:40 localhost podman[244032]: 2025-11-23 09:33:40.385073573 +0000 UTC m=+0.093970504 container exec 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0) Nov 23 04:33:40 localhost podman[244032]: 2025-11-23 09:33:40.41839902 +0000 UTC m=+0.127295941 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 04:33:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:40 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:33:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:41 localhost systemd[1]: var-lib-containers-storage-overlay-01a78c4dbe79ec91dd17b016c5deea447203862889bb5c5908b4f002506791c6-merged.mount: Deactivated successfully. Nov 23 04:33:41 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:41 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
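The `podman_container_exec` task above runs `id -u` inside ceilometer_agent_compute; the uid it returns is what the later ansible.builtin.file call uses as the directory owner. Stripped of the Ansible wrapping, the operation is roughly:

```python
import subprocess

# Equivalent of the logged podman_container_exec task
# (command="id -u", name=ceilometer_agent_compute), minus the Ansible
# result structure and error handling.
def container_uid(container: str) -> int:
    out = subprocess.run(
        ["podman", "exec", container, "id", "-u"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    return int(out)

print(container_uid("ceilometer_agent_compute"))
```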
Nov 23 04:33:41 localhost python3.9[244171]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:33:41 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:42 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:33:42 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:33:42 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:33:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:42 localhost systemd[1]: libpod-conmon-2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.scope: Deactivated successfully. Nov 23 04:33:42 localhost systemd[1]: Started libpod-conmon-2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.scope. Nov 23 04:33:42 localhost podman[244172]: 2025-11-23 09:33:42.919175482 +0000 UTC m=+1.107365966 container exec 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 04:33:42 localhost podman[244172]: 2025-11-23 09:33:42.953524212 +0000 UTC m=+1.141714746 container exec_died 
2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:33:43 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:33:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3380 DF PROTO=TCP SPT=47242 DPT=9105 SEQ=2911576324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7DAF900000000001030307) Nov 23 04:33:44 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:44 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:33:44 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:33:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:33:45 localhost python3.9[244309]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:33:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:45 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:45 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:45 localhost python3.9[244419]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman Nov 23 04:33:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40455 DF PROTO=TCP SPT=41766 DPT=9882 SEQ=646698431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7DBC230000000001030307) Nov 23 04:33:47 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:47 localhost systemd[1]: var-lib-containers-storage-overlay-3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb-merged.mount: Deactivated successfully. Nov 23 04:33:47 localhost systemd[1]: var-lib-containers-storage-overlay-3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb-merged.mount: Deactivated successfully. Nov 23 04:33:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:47 localhost systemd[1]: libpod-conmon-2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.scope: Deactivated successfully. Nov 23 04:33:48 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:48 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 23 04:33:48 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 23 04:33:48 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
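The `podman_container_info` task for node_exporter gathers the same data `podman container inspect` returns; a minimal sketch assuming the container name from the log:

```python
import json
import subprocess

# Sketch of what the logged podman_container_info call collects:
# "podman container inspect" emits a JSON array with one object per container.
def container_info(name: str) -> dict:
    raw = subprocess.run(
        ["podman", "container", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(raw)[0]

info = container_info("node_exporter")
print(info["Id"][:12], info["State"]["Status"])
```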
Nov 23 04:33:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62303 DF PROTO=TCP SPT=60488 DPT=9882 SEQ=157823087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7DC40F0000000001030307) Nov 23 04:33:49 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:49 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:33:49 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:33:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:33:49 localhost podman[244433]: 2025-11-23 09:33:49.415479158 +0000 UTC m=+0.115778980 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 04:33:49 localhost podman[244433]: 2025-11-23 09:33:49.43525354 +0000 UTC m=+0.135553342 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, release=1755695350, version=9.6, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 23 04:33:50 localhost python3.9[244561]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:33:50 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
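The dropped SYNs seen on br-ex target ports 9100, 9105 and 9882, which match the exporter ports published by the container configs in this journal (9882 belongs to the podman_exporter configured a little further down). A hypothetical local probe of those metrics endpoints, run on the host itself so it does not cross the br-ex firewall path that is dropping the remote scrapes:

```python
import urllib.request

# Assumed mapping of dropped destination ports to the exporters whose
# config_data appears in this journal; all three serve Prometheus-style
# /metrics endpoints on the host network.
PORTS = {
    9100: "node_exporter",
    9105: "openstack_network_exporter",
    9882: "podman_exporter",
}

for port, name in PORTS.items():
    url = f"http://127.0.0.1:{port}/metrics"
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            print(f"{name}: HTTP {resp.status}")
    except OSError as exc:
        print(f"{name}: {exc}")
```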
Nov 23 04:33:50 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 23 04:33:51 localhost systemd[1]: var-lib-containers-storage-overlay-3a17eac87cb600b8517e7565ff433f093958b0f7f7912bbcd2309c83781f31a6-merged.mount: Deactivated successfully. Nov 23 04:33:51 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:33:51 localhost systemd[1]: Started libpod-conmon-8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.scope. Nov 23 04:33:51 localhost podman[244562]: 2025-11-23 09:33:51.227701288 +0000 UTC m=+1.194008919 container exec 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:33:51 localhost podman[244562]: 2025-11-23 09:33:51.261043176 +0000 UTC m=+1.227350817 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, 
config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:33:52 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 23 04:33:52 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:33:52 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 23 04:33:52 localhost systemd[1]: var-lib-containers-storage-overlay-c5f6f274beb3204479b30adda5eae6174b870ceb64a77b37313304db216ec22a-merged.mount: Deactivated successfully. Nov 23 04:33:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3296 DF PROTO=TCP SPT=53694 DPT=9101 SEQ=558867950 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7DD00F0000000001030307) Nov 23 04:33:52 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:33:53 localhost python3.9[244703]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:33:54 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:33:54 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Nov 23 04:33:54 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Nov 23 04:33:54 localhost systemd[1]: libpod-conmon-8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.scope: Deactivated successfully. Nov 23 04:33:54 localhost systemd[1]: Started libpod-conmon-8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.scope. 
Nov 23 04:33:54 localhost podman[244704]: 2025-11-23 09:33:54.412087457 +0000 UTC m=+1.153538728 container exec 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:33:54 localhost podman[244704]: 2025-11-23 09:33:54.441220872 +0000 UTC m=+1.182672053 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:33:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15469 DF PROTO=TCP SPT=36650 DPT=9101 SEQ=1103647165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7DDA0F0000000001030307) Nov 23 04:33:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:33:55 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:33:55 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:33:55 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:33:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:33:56 localhost podman[244732]: 2025-11-23 09:33:56.053931182 +0000 UTC m=+1.230452406 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:33:56 localhost podman[244732]: 2025-11-23 09:33:56.095932072 +0000 UTC m=+1.272453286 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:33:56 localhost podman[244732]: unhealthy Nov 23 04:33:56 localhost podman[244792]: 2025-11-23 09:33:56.116394545 +0000 UTC m=+0.250377710 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3) Nov 23 04:33:56 localhost podman[244792]: 2025-11-23 09:33:56.136376253 +0000 UTC m=+0.270359448 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 23 04:33:56 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. 
Nov 23 04:33:56 localhost python3.9[244931]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:33:57 localhost python3.9[245041]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman Nov 23 04:33:58 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:33:58 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:58 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:33:58 localhost systemd[1]: libpod-conmon-8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.scope: Deactivated successfully. Nov 23 04:33:58 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:33:58 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. Nov 23 04:33:58 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:33:58 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:00 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31580 DF PROTO=TCP SPT=34708 DPT=9100 SEQ=742154051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7DEF4E0000000001030307) Nov 23 04:34:00 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:00 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 23 04:34:00 localhost nova_compute[229707]: 2025-11-23 09:34:00.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:34:00 localhost nova_compute[229707]: 2025-11-23 09:34:00.947 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:34:00 localhost nova_compute[229707]: 2025-11-23 09:34:00.948 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:34:00 localhost nova_compute[229707]: 2025-11-23 09:34:00.974 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:34:00 localhost nova_compute[229707]: 2025-11-23 09:34:00.974 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:34:00 localhost nova_compute[229707]: 2025-11-23 09:34:00.974 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:34:00 localhost nova_compute[229707]: 2025-11-23 09:34:00.975 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:34:00 localhost nova_compute[229707]: 2025-11-23 09:34:00.975 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:34:01 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:34:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31581 DF PROTO=TCP SPT=34708 DPT=9100 SEQ=742154051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7DF34F0000000001030307) Nov 23 04:34:01 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:34:01 localhost nova_compute[229707]: 2025-11-23 09:34:01.457 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:34:01 localhost nova_compute[229707]: 2025-11-23 09:34:01.672 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:34:01 localhost nova_compute[229707]: 2025-11-23 09:34:01.673 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=13157MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:34:01 localhost nova_compute[229707]: 2025-11-23 09:34:01.673 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:34:01 localhost nova_compute[229707]: 2025-11-23 09:34:01.673 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:34:01 localhost nova_compute[229707]: 2025-11-23 09:34:01.723 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:34:01 localhost nova_compute[229707]: 2025-11-23 09:34:01.724 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:34:01 localhost nova_compute[229707]: 2025-11-23 09:34:01.739 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:34:02 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:02 localhost nova_compute[229707]: 2025-11-23 09:34:02.209 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:34:02 localhost nova_compute[229707]: 2025-11-23 09:34:02.217 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:34:02 localhost nova_compute[229707]: 2025-11-23 09:34:02.230 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:34:02 localhost nova_compute[229707]: 2025-11-23 09:34:02.233 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:34:02 localhost nova_compute[229707]: 2025-11-23 09:34:02.233 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: 
held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:34:02 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:02 localhost podman[239764]: time="2025-11-23T09:34:02Z" level=error msg="Getting root fs size for \"6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb\": unmounting layer c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6: replacing mount point \"/var/lib/containers/storage/overlay/c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6/merged\": device or resource busy" Nov 23 04:34:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:03 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:03 localhost python3.9[245248]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:34:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:34:03 localhost systemd[1]: Started libpod-conmon-a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.scope. 
Nov 23 04:34:03 localhost podman[245258]: 2025-11-23 09:34:03.672730448 +0000 UTC m=+0.096392151 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118) Nov 23 04:34:03 localhost podman[245258]: 2025-11-23 09:34:03.67884338 +0000 UTC m=+0.102505063 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 04:34:03 localhost podman[245249]: 2025-11-23 09:34:03.687250234 +0000 UTC m=+0.149076107 container exec a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:34:03 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:03 localhost podman[245249]: 2025-11-23 09:34:03.728347746 +0000 UTC m=+0.190173559 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:34:04 localhost nova_compute[229707]: 2025-11-23 09:34:04.232 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:34:04 localhost nova_compute[229707]: 2025-11-23 09:34:04.250 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:34:04 localhost nova_compute[229707]: 2025-11-23 09:34:04.250 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 
04:34:04 localhost nova_compute[229707]: 2025-11-23 09:34:04.250 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:34:04 localhost nova_compute[229707]: 2025-11-23 09:34:04.264 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:34:04 localhost nova_compute[229707]: 2025-11-23 09:34:04.264 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:34:04 localhost nova_compute[229707]: 2025-11-23 09:34:04.264 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:34:04 localhost nova_compute[229707]: 2025-11-23 09:34:04.265 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:34:04 localhost nova_compute[229707]: 2025-11-23 09:34:04.265 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:34:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19273 DF PROTO=TCP SPT=60830 DPT=9100 SEQ=2161665102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E000F0000000001030307) Nov 23 04:34:04 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Nov 23 04:34:04 localhost systemd[1]: var-lib-containers-storage-overlay-610a23939626ba33bbc4ff5fdde23dbb8b9d397a6884923c98e37f79be82869c-merged.mount: Deactivated successfully. Nov 23 04:34:04 localhost systemd[1]: var-lib-containers-storage-overlay-610a23939626ba33bbc4ff5fdde23dbb8b9d397a6884923c98e37f79be82869c-merged.mount: Deactivated successfully. 
Nov 23 04:34:04 localhost nova_compute[229707]: 2025-11-23 09:34:04.974 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:34:05 localhost nova_compute[229707]: 2025-11-23 09:34:05.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:34:06 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:34:06 localhost systemd[1]: var-lib-containers-storage-overlay-3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb-merged.mount: Deactivated successfully. Nov 23 04:34:06 localhost systemd[1]: var-lib-containers-storage-overlay-3ddb8c8a4bfd16d5d2b9440ebf9f9bbd0f4b443343fba1edb54846ce97cdacfb-merged.mount: Deactivated successfully. Nov 23 04:34:06 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:34:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:34:07 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:34:07 localhost systemd[1]: var-lib-containers-storage-overlay-4f2d68ffc7b0d2ad7154f194ce01f6add8f68d1c87ebccb7dfe58b78cf788c91-merged.mount: Deactivated successfully. Nov 23 04:34:07 localhost systemd[1]: var-lib-containers-storage-overlay-4f2d68ffc7b0d2ad7154f194ce01f6add8f68d1c87ebccb7dfe58b78cf788c91-merged.mount: Deactivated successfully. Nov 23 04:34:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31583 DF PROTO=TCP SPT=34708 DPT=9100 SEQ=742154051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E0B100000000001030307) Nov 23 04:34:07 localhost python3.9[245416]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:34:07 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:07 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 23 04:34:07 localhost systemd[1]: libpod-conmon-a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.scope: Deactivated successfully. Nov 23 04:34:07 localhost systemd[1]: Started libpod-conmon-a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.scope. 
Nov 23 04:34:07 localhost podman[245417]: 2025-11-23 09:34:07.552976858 +0000 UTC m=+0.243292188 container exec a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:34:07 localhost podman[245351]: 2025-11-23 09:34:07.558315135 +0000 UTC m=+0.731915725 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Nov 23 04:34:07 localhost podman[245417]: 2025-11-23 09:34:07.585459349 +0000 UTC m=+0.275774689 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, 
config_id=edpm) Nov 23 04:34:07 localhost podman[245351]: 2025-11-23 09:34:07.605839879 +0000 UTC m=+0.779440389 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:34:08 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:08 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:34:08 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:34:08 localhost systemd[1]: libpod-conmon-a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.scope: Deactivated successfully. Nov 23 04:34:09 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:34:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:34:09 localhost systemd[1]: var-lib-containers-storage-overlay-c5f6f274beb3204479b30adda5eae6174b870ceb64a77b37313304db216ec22a-merged.mount: Deactivated successfully. 
Nov 23 04:34:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:34:09.714 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:34:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:34:09.715 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:34:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:34:09.716 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:34:09 localhost systemd[1]: var-lib-containers-storage-overlay-c5f6f274beb3204479b30adda5eae6174b870ceb64a77b37313304db216ec22a-merged.mount: Deactivated successfully. Nov 23 04:34:09 localhost podman[245570]: 2025-11-23 09:34:09.774252884 +0000 UTC m=+0.104860146 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:34:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53235 DF PROTO=TCP SPT=44708 DPT=9105 SEQ=647415214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E14CF0000000001030307) Nov 23 04:34:09 localhost podman[245570]: 2025-11-23 09:34:09.81293465 +0000 UTC m=+0.143541862 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e 
(image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 23 04:34:09 localhost podman[245571]: 2025-11-23 09:34:09.815857172 +0000 UTC m=+0.140303270 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:34:09 localhost python3.9[245569]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:34:09 localhost podman[245571]: 2025-11-23 09:34:09.898524231 +0000 UTC m=+0.222970319 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:34:10 localhost python3.9[245716]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman Nov 23 04:34:10 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 23 04:34:10 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:34:10 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:34:11 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:34:11 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Nov 23 04:34:11 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:34:11 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:34:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:13 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 23 04:34:13 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. 
Nov 23 04:34:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53236 DF PROTO=TCP SPT=44708 DPT=9105 SEQ=647415214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E248F0000000001030307) Nov 23 04:34:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:14 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:14 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:34:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:15 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:15 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:15 localhost podman[239764]: time="2025-11-23T09:34:15Z" level=error msg="Getting root fs size for \"6d4e406f786a71969aa6dba08d95244d156890d689e60a731aae0cb86832aadb\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy" Nov 23 04:34:15 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:15 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 23 04:34:15 localhost python3.9[245841]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:34:15 localhost systemd[1]: Started libpod-conmon-0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.scope. 
Nov 23 04:34:15 localhost podman[245842]: 2025-11-23 09:34:15.684379516 +0000 UTC m=+0.107285253 container exec 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.expose-services=, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container) Nov 23 04:34:15 localhost podman[245842]: 2025-11-23 09:34:15.714445691 +0000 UTC m=+0.137351418 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc.) Nov 23 04:34:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31584 DF PROTO=TCP SPT=34708 DPT=9100 SEQ=742154051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E2C0F0000000001030307) Nov 23 04:34:16 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:16 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:16 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:17 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Nov 23 04:34:17 localhost systemd[1]: var-lib-containers-storage-overlay-610a23939626ba33bbc4ff5fdde23dbb8b9d397a6884923c98e37f79be82869c-merged.mount: Deactivated successfully. Nov 23 04:34:17 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:17 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:17 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. 
Nov 23 04:34:17 localhost systemd[1]: libpod-conmon-0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.scope: Deactivated successfully. Nov 23 04:34:17 localhost python3.9[245982]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Nov 23 04:34:17 localhost systemd[1]: Started libpod-conmon-0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.scope. Nov 23 04:34:17 localhost podman[245983]: 2025-11-23 09:34:17.799118645 +0000 UTC m=+0.109482962 container exec 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Nov 23 04:34:17 localhost podman[245983]: 2025-11-23 09:34:17.832652169 +0000 UTC m=+0.143016476 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc.) Nov 23 04:34:18 localhost systemd[1]: var-lib-containers-storage-overlay-4f2d68ffc7b0d2ad7154f194ce01f6add8f68d1c87ebccb7dfe58b78cf788c91-merged.mount: Deactivated successfully. Nov 23 04:34:18 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 23 04:34:18 localhost systemd[1]: var-lib-containers-storage-overlay-8562d6b9d1c21090ac473dd301ec4770b1bd3ad140de381119a1325c7a15d2fa-merged.mount: Deactivated successfully. Nov 23 04:34:18 localhost systemd[1]: var-lib-containers-storage-overlay-8562d6b9d1c21090ac473dd301ec4770b1bd3ad140de381119a1325c7a15d2fa-merged.mount: Deactivated successfully. 
Nov 23 04:34:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40460 DF PROTO=TCP SPT=41766 DPT=9882 SEQ=646698431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E38100000000001030307) Nov 23 04:34:19 localhost python3.9[246122]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:34:19 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:19 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:34:19 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:34:19 localhost systemd[1]: libpod-conmon-0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.scope: Deactivated successfully. Nov 23 04:34:20 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:20 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:21 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:34:21 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 23 04:34:21 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 23 04:34:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
Nov 23 04:34:21 localhost podman[246140]: 2025-11-23 09:34:21.342818677 +0000 UTC m=+0.132212177 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64) Nov 23 04:34:21 localhost podman[246140]: 2025-11-23 09:34:21.364425206 +0000 UTC m=+0.153818776 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal) Nov 23 04:34:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53237 DF PROTO=TCP SPT=44708 DPT=9105 SEQ=647415214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E440F0000000001030307) Nov 23 04:34:22 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:34:22 localhost systemd[1]: var-lib-containers-storage-overlay-89da295cc978178bcac81850ec3c33a266a0d96aba327524b68608394214d41a-merged.mount: Deactivated successfully. Nov 23 04:34:22 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:34:22 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:34:22 localhost systemd[1]: var-lib-containers-storage-overlay-0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8-merged.mount: Deactivated successfully. Nov 23 04:34:22 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully. Nov 23 04:34:23 localhost systemd[1]: var-lib-containers-storage-overlay-0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad-merged.mount: Deactivated successfully. 
Nov 23 04:34:23 localhost systemd[1]: var-lib-containers-storage-overlay-d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92-merged.mount: Deactivated successfully. Nov 23 04:34:24 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 23 04:34:24 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:24 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:25 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:25 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:34:25 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:34:25 localhost systemd[1]: var-lib-containers-storage-overlay-d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92-merged.mount: Deactivated successfully. Nov 23 04:34:25 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 23 04:34:26 localhost systemd[1]: var-lib-containers-storage-overlay-8562d6b9d1c21090ac473dd301ec4770b1bd3ad140de381119a1325c7a15d2fa-merged.mount: Deactivated successfully. Nov 23 04:34:26 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:26 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:34:26 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:34:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51019 DF PROTO=TCP SPT=44696 DPT=9102 SEQ=1933411279 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E560F0000000001030307) Nov 23 04:34:27 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:27 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:28 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Nov 23 04:34:28 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 23 04:34:28 localhost systemd[1]: var-lib-containers-storage-overlay-89da295cc978178bcac81850ec3c33a266a0d96aba327524b68608394214d41a-merged.mount: Deactivated successfully. 
Nov 23 04:34:28 localhost systemd[1]: var-lib-containers-storage-overlay-89da295cc978178bcac81850ec3c33a266a0d96aba327524b68608394214d41a-merged.mount: Deactivated successfully. Nov 23 04:34:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:34:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:34:29 localhost systemd[1]: tmp-crun.zbpeDk.mount: Deactivated successfully. Nov 23 04:34:29 localhost systemd[1]: var-lib-containers-storage-overlay-0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad-merged.mount: Deactivated successfully. Nov 23 04:34:29 localhost podman[246158]: 2025-11-23 09:34:29.168677654 +0000 UTC m=+0.113540672 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 04:34:29 localhost podman[246158]: 2025-11-23 09:34:29.176746254 +0000 UTC m=+0.121609232 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true) Nov 23 04:34:29 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:34:29 localhost podman[246159]: 2025-11-23 09:34:29.934606872 +0000 UTC m=+0.872475424 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:34:29 localhost podman[246159]: 2025-11-23 09:34:29.946293253 +0000 UTC m=+0.884161785 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:34:29 localhost podman[246159]: unhealthy Nov 23 04:34:30 localhost systemd[1]: var-lib-containers-storage-overlay-d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92-merged.mount: Deactivated successfully. Nov 23 04:34:30 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. 
Nov 23 04:34:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41252 DF PROTO=TCP SPT=46366 DPT=9100 SEQ=1576499255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E647D0000000001030307) Nov 23 04:34:30 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:34:30 localhost systemd[1]: var-lib-containers-storage-overlay-0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad-merged.mount: Deactivated successfully. Nov 23 04:34:30 localhost systemd[1]: var-lib-containers-storage-overlay-0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad-merged.mount: Deactivated successfully. Nov 23 04:34:31 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:31 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:31 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:34:31 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. Nov 23 04:34:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41253 DF PROTO=TCP SPT=46366 DPT=9100 SEQ=1576499255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E688F0000000001030307) Nov 23 04:34:31 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:34:32 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:32 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:34:32 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:32 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:34:33 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:34:33 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 23 04:34:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63277 DF PROTO=TCP SPT=40578 DPT=9100 SEQ=2004364646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E740F0000000001030307) Nov 23 04:34:34 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Nov 23 04:34:34 localhost systemd[1]: var-lib-containers-storage-overlay-30826345d318533efcd6d35f8914ab0003e05b6751a7a5bc7b1bbeb3898fc84c-merged.mount: Deactivated successfully. Nov 23 04:34:34 localhost systemd[1]: var-lib-containers-storage-overlay-30826345d318533efcd6d35f8914ab0003e05b6751a7a5bc7b1bbeb3898fc84c-merged.mount: Deactivated successfully. Nov 23 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:34:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:34:36 localhost podman[246200]: 2025-11-23 09:34:36.937148585 +0000 UTC m=+0.122465199 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 04:34:36 localhost podman[246200]: 2025-11-23 09:34:36.972371844 +0000 UTC 
m=+0.157688428 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:34:37 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:37 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:34:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41255 DF PROTO=TCP SPT=46366 DPT=9100 SEQ=1576499255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E804F0000000001030307) Nov 23 04:34:37 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:37 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:37 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:37 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:34:38 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Nov 23 04:34:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 23 04:34:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:34:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:38 localhost podman[246216]: 2025-11-23 09:34:38.405091753 +0000 UTC m=+0.092992008 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:34:38 localhost podman[246216]: 2025-11-23 09:34:38.482374132 +0000 UTC m=+0.170274417 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:34:39 localhost systemd[1]: 
var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:39 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:34:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2936 DF PROTO=TCP SPT=52514 DPT=9105 SEQ=3425578564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E8A0F0000000001030307) Nov 23 04:34:39 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:34:39 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:34:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:41 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:34:41 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:41 localhost systemd[1]: var-lib-containers-storage-overlay-575692a885e4e5d5a8b1e76315957cc96af13a896db846450cad3752e5067ba2-merged.mount: Deactivated successfully. Nov 23 04:34:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:34:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:34:41 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 23 04:34:41 localhost podman[246241]: 2025-11-23 09:34:41.887383717 +0000 UTC m=+0.071339148 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 04:34:41 localhost podman[246241]: 2025-11-23 09:34:41.985336686 +0000 UTC m=+0.169292127 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:34:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:42 localhost podman[246242]: 2025-11-23 09:34:42.072928725 +0000 UTC m=+0.248403444 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:34:42 localhost podman[246242]: 2025-11-23 09:34:42.082127939 +0000 UTC m=+0.257602698 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:34:42 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. 
Nov 23 04:34:42 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:42 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:34:42 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2937 DF PROTO=TCP SPT=52514 DPT=9105 SEQ=3425578564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7E99CF0000000001030307) Nov 23 04:34:44 localhost systemd[1]: var-lib-containers-storage-overlay-91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580-merged.mount: Deactivated successfully. Nov 23 04:34:45 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Nov 23 04:34:45 localhost systemd[1]: var-lib-containers-storage-overlay-30826345d318533efcd6d35f8914ab0003e05b6751a7a5bc7b1bbeb3898fc84c-merged.mount: Deactivated successfully. Nov 23 04:34:46 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:46 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:34:46 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:34:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:34:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40597 DF PROTO=TCP SPT=33720 DPT=9882 SEQ=1176648548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7EA6820000000001030307) Nov 23 04:34:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:48 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:48 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:48 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:48 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:48 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36177 DF PROTO=TCP SPT=58884 DPT=9882 SEQ=1809897589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7EAE0F0000000001030307) Nov 23 04:34:49 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:49 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:34:49 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 23 04:34:50 localhost systemd[1]: var-lib-containers-storage-overlay-575692a885e4e5d5a8b1e76315957cc96af13a896db846450cad3752e5067ba2-merged.mount: Deactivated successfully. Nov 23 04:34:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:50 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:50 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:51 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. 
Nov 23 04:34:51 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:51 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:51 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:51 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8539 DF PROTO=TCP SPT=34420 DPT=9101 SEQ=680820325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7EBA0F0000000001030307) Nov 23 04:34:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:34:52 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:34:52 localhost podman[246280]: 2025-11-23 09:34:52.395850503 +0000 UTC m=+0.082967557 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible) Nov 23 04:34:52 localhost podman[246280]: 2025-11-23 09:34:52.407323568 +0000 UTC m=+0.094440602 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9) Nov 23 04:34:52 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. Nov 23 04:34:52 localhost systemd[1]: var-lib-containers-storage-overlay-91cba6566408e8e816c2e398c8ba937732ecb3c252ba046499b3183a845ca580-merged.mount: Deactivated successfully. Nov 23 04:34:53 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:34:53 localhost systemd[1]: var-lib-containers-storage-overlay-622ea5b2f6fbe5d9b292df85d50e445712f85ed6230930160a21086a3d12c064-merged.mount: Deactivated successfully. Nov 23 04:34:54 localhost systemd[1]: var-lib-containers-storage-overlay-622ea5b2f6fbe5d9b292df85d50e445712f85ed6230930160a21086a3d12c064-merged.mount: Deactivated successfully. Nov 23 04:34:54 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:34:56 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:34:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23452 DF PROTO=TCP SPT=47506 DPT=9102 SEQ=1441159096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7ECB0F0000000001030307) Nov 23 04:34:56 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. 
Nov 23 04:34:56 localhost podman[239764]: time="2025-11-23T09:34:56Z" level=error msg="Getting root fs size for \"c084c2862fdcbc6ac7e04cf6ae3b8a2b21de26f7fa8236ac40013692933e6e30\": getting diffsize of layer \"e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28\" and its parent \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\": unmounting layer f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958: replacing mount point \"/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged\": device or resource busy" Nov 23 04:34:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:58 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:58 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:34:58 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:34:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-11e303e2a487b3de65e20e02c06253184ba4537ed64f53b2bdbdf3a08756ea60-merged.mount: Deactivated successfully. Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:34:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. 
Nov 23 04:35:00 localhost podman[246335]: 2025-11-23 09:35:00.045686962 +0000 UTC m=+0.078743866 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:35:00 localhost podman[246335]: 2025-11-23 09:35:00.076483805 +0000 UTC m=+0.109540719 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm) Nov 23 04:35:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60754 DF PROTO=TCP SPT=52562 DPT=9100 SEQ=971688124 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7ED9AE0000000001030307) Nov 23 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:00 localhost nova_compute[229707]: 2025-11-23 09:35:00.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:00 localhost nova_compute[229707]: 2025-11-23 09:35:00.947 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:00 localhost nova_compute[229707]: 2025-11-23 09:35:00.947 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 23 04:35:00 localhost nova_compute[229707]: 2025-11-23 09:35:00.965 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 23 04:35:00 localhost nova_compute[229707]: 2025-11-23 09:35:00.966 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:00 localhost nova_compute[229707]: 2025-11-23 09:35:00.966 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 23 04:35:00 localhost nova_compute[229707]: 2025-11-23 09:35:00.979 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60755 DF PROTO=TCP SPT=52562 DPT=9100 SEQ=971688124 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A6B7EDDCF0000000001030307) Nov 23 04:35:01 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:35:01 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:01 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:01 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:35:01 localhost podman[246368]: 2025-11-23 09:35:01.695832925 +0000 UTC m=+0.320204813 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:35:01 localhost podman[246368]: 2025-11-23 09:35:01.728399413 +0000 UTC m=+0.352771271 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:35:01 localhost podman[246368]: unhealthy Nov 23 04:35:01 localhost nova_compute[229707]: 2025-11-23 09:35:01.995 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:35:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:02 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:35:02 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. Nov 23 04:35:02 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:02 localhost nova_compute[229707]: 2025-11-23 09:35:02.945 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:02 localhost nova_compute[229707]: 2025-11-23 09:35:02.958 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:35:02 localhost nova_compute[229707]: 2025-11-23 09:35:02.958 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:35:02 localhost nova_compute[229707]: 2025-11-23 09:35:02.958 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:35:02 localhost nova_compute[229707]: 2025-11-23 09:35:02.958 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:35:02 localhost nova_compute[229707]: 2025-11-23 09:35:02.959 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.383 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.535 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.537 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=13073MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.538 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.538 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.653 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.654 229711 DEBUG 
nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.705 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.750 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.751 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.764 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 04:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.787 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 23 04:35:03 localhost nova_compute[229707]: 2025-11-23 09:35:03.801 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 23 04:35:04 localhost nova_compute[229707]: 2025-11-23 09:35:04.265 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:35:04 localhost nova_compute[229707]: 2025-11-23 09:35:04.271 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:35:04 localhost nova_compute[229707]: 2025-11-23 09:35:04.289 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:35:04 localhost nova_compute[229707]: 2025-11-23 09:35:04.291 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:35:04 localhost nova_compute[229707]: 2025-11-23 09:35:04.292 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:35:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31586 DF PROTO=TCP SPT=34708 DPT=9100 SEQ=742154051 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7EEA0F0000000001030307) Nov 23 04:35:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-622ea5b2f6fbe5d9b292df85d50e445712f85ed6230930160a21086a3d12c064-merged.mount: Deactivated successfully. Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed-merged.mount: Deactivated successfully. Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. 
Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. Nov 23 04:35:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:05 localhost nova_compute[229707]: 2025-11-23 09:35:05.293 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:05 localhost nova_compute[229707]: 2025-11-23 09:35:05.294 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:05 localhost nova_compute[229707]: 2025-11-23 09:35:05.294 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:05 localhost nova_compute[229707]: 2025-11-23 09:35:05.294 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:05 localhost nova_compute[229707]: 2025-11-23 09:35:05.295 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 23 04:35:05 localhost nova_compute[229707]: 2025-11-23 09:35:05.947 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:05 localhost nova_compute[229707]: 2025-11-23 09:35:05.948 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:35:05 localhost nova_compute[229707]: 2025-11-23 09:35:05.948 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:35:05 localhost nova_compute[229707]: 2025-11-23 09:35:05.960 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:35:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60757 DF PROTO=TCP SPT=52562 DPT=9100 SEQ=971688124 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7EF58F0000000001030307) Nov 23 04:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:35:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 04:35:07 localhost podman[246471]: 2025-11-23 09:35:07.906086785 +0000 UTC m=+0.083348338 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 04:35:07 localhost nova_compute[229707]: 2025-11-23 09:35:07.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:35:07 localhost podman[246471]: 2025-11-23 09:35:07.948667402 +0000 UTC m=+0.125928945 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Nov 23 04:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-ff58612c818b92d281d7c503b4ab2e3ed913b3ad617558ac5d656b0c174fca90-merged.mount: Deactivated successfully. Nov 23 04:35:08 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:09 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:09 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:09 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:35:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:35:09.715 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:35:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:35:09.716 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:35:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:35:09.716 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:35:09 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46804 DF PROTO=TCP SPT=55412 DPT=9105 SEQ=2860254580 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7EFF4F0000000001030307) Nov 23 04:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 23 04:35:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 04:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 23 04:35:10 localhost podman[246488]: 2025-11-23 09:35:10.409935809 +0000 UTC m=+0.083913796 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Nov 23 04:35:10 localhost podman[246488]: 2025-11-23 09:35:10.43518231 +0000 UTC m=+0.109160317 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 23 04:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:11 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:11 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:35:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 23 04:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-1c555bb6d05f3d1ef69b807da8d7b417226dccb2e4af3d5892e31108d455684e-merged.mount: Deactivated successfully. Nov 23 04:35:12 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:35:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.571 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:35:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:35:12 localhost podman[246512]: 2025-11-23 09:35:12.65819035 +0000 UTC m=+0.093529174 container health_status 
7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd) Nov 23 04:35:12 localhost podman[246512]: 2025-11-23 09:35:12.668221619 +0000 UTC m=+0.103560413 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:35:12 
localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 23 04:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:35:13 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:35:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46805 DF PROTO=TCP SPT=55412 DPT=9105 SEQ=2860254580 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F0F0F0000000001030307) Nov 23 04:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-11e303e2a487b3de65e20e02c06253184ba4537ed64f53b2bdbdf3a08756ea60-merged.mount: Deactivated successfully. Nov 23 04:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-11e303e2a487b3de65e20e02c06253184ba4537ed64f53b2bdbdf3a08756ea60-merged.mount: Deactivated successfully. Nov 23 04:35:14 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:35:14 localhost podman[246513]: 2025-11-23 09:35:14.828619052 +0000 UTC m=+2.258842058 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:35:14 localhost podman[246513]: 2025-11-23 09:35:14.839632253 +0000 UTC m=+2.269855289 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 23 04:35:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42540 DF PROTO=TCP SPT=49812 DPT=9882 SEQ=2500475186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F1BB30000000001030307) Nov 23 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:17 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:35:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:19 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20393 DF PROTO=TCP SPT=44340 DPT=9102 SEQ=1154911260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F24940000000001030307) Nov 23 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 23 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65126 DF PROTO=TCP SPT=52050 DPT=9101 SEQ=448007700 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F300F0000000001030307) Nov 23 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-ff58612c818b92d281d7c503b4ab2e3ed913b3ad617558ac5d656b0c174fca90-merged.mount: Deactivated successfully. 
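The recurring "kernel: DROPPING:" entries above record dropped TCP SYNs from 192.168.122.10 toward local exporter ports (9100, 9101, 9102, 9105, 9882); the prefix and key=value layout look like the output of a firewall LOG rule on this host, though the ruleset itself does not appear in the log. A minimal parsing sketch follows, assuming only the field names visible in these entries; the function name and the shortened sample line are illustrative, not taken from any tool.

```python
import re

# Key=value fields as they appear in the "DROPPING:" kernel entries (SRC, DST, PROTO, SPT, DPT, ...).
KV_RE = re.compile(r"(\w+)=(\S+)")

def parse_dropping(entry: str) -> dict:
    """Return the key=value fields from a single 'DROPPING:' kernel log entry."""
    payload = entry.split("DROPPING:", 1)[1]
    return dict(KV_RE.findall(payload))

# Shortened, illustrative sample based on the entries above (not a verbatim log line).
sample = ("Nov 23 04:35:22 localhost kernel: DROPPING: IN=br-ex OUT= "
          "MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 "
          "SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TTL=62 "
          "PROTO=TCP SPT=52050 DPT=9101 SYN URGP=0")

fields = parse_dropping(sample)
print(f"{fields['SRC']} -> {fields['DST']} proto {fields['PROTO']} dport {fields['DPT']}")
```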
Nov 23 04:35:24 localhost podman[246550]: 2025-11-23 09:35:24.173169423 +0000 UTC m=+0.082324216 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible) Nov 23 04:35:24 localhost podman[246550]: 2025-11-23 09:35:24.188727245 +0000 UTC m=+0.097882078 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350) Nov 23 04:35:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:24 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 23 04:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:35:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 23 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20396 DF PROTO=TCP SPT=44340 DPT=9102 SEQ=1154911260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F404F0000000001030307) Nov 23 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 23 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-1c555bb6d05f3d1ef69b807da8d7b417226dccb2e4af3d5892e31108d455684e-merged.mount: Deactivated successfully. Nov 23 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-bc7298092e843edd1007408afda0c49e162b58c616e74027b863211eb586108a-merged.mount: Deactivated successfully. Nov 23 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 23 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 23 04:35:28 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 23 04:35:28 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 23 04:35:29 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. 
Nov 23 04:35:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37441 DF PROTO=TCP SPT=53618 DPT=9100 SEQ=2904292758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F4EDE0000000001030307) Nov 23 04:35:30 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. Nov 23 04:35:30 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:35:30 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:35:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37442 DF PROTO=TCP SPT=53618 DPT=9100 SEQ=2904292758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F52CF0000000001030307) Nov 23 04:35:31 localhost sshd[246569]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:35:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:35:31 localhost podman[246571]: 2025-11-23 09:35:31.913045268 +0000 UTC m=+0.102844662 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Nov 23 04:35:31 localhost podman[246571]: 2025-11-23 09:35:31.954399748 +0000 UTC m=+0.144199122 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Nov 23 04:35:31 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:35:32 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. Nov 23 04:35:32 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:35:32 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
Nov 23 04:35:32 localhost podman[246589]: 2025-11-23 09:35:32.389598506 +0000 UTC m=+0.255367198 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:35:32 localhost podman[246589]: 2025-11-23 09:35:32.394653793 +0000 UTC m=+0.260422435 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:35:32 localhost podman[246589]: unhealthy Nov 23 04:35:32 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. Nov 23 04:35:32 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:35:33 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:35:33 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:35:33 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. 
Nov 23 04:35:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41258 DF PROTO=TCP SPT=46366 DPT=9100 SEQ=1576499255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F5E0F0000000001030307) Nov 23 04:35:34 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:34 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:34 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:34 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:35:34 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. Nov 23 04:35:35 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:35:35 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:35:35 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:35:36 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:36 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:36 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:36 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37444 DF PROTO=TCP SPT=53618 DPT=9100 SEQ=2904292758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F6A8F0000000001030307) Nov 23 04:35:37 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:35:37 localhost systemd[1]: var-lib-containers-storage-overlay-67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd-merged.mount: Deactivated successfully. Nov 23 04:35:37 localhost systemd[1]: var-lib-containers-storage-overlay-67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd-merged.mount: Deactivated successfully. Nov 23 04:35:39 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 23 04:35:39 localhost systemd[1]: session-55.scope: Deactivated successfully. 
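Just above, the healthcheck for the podman_exporter container comes back health_status=unhealthy, podman prints "unhealthy", and the corresponding transient .service unit fails with status=1/FAILURE. Below is a small sketch of re-running that same check and reading the result from the exit status; it assumes the container name podman_exporter from the log and the usual convention that a zero exit from "podman healthcheck run" means the check passed, while a non-zero exit (as with the failure above) means it did not.

```python
import subprocess

def healthcheck_passes(container: str) -> bool:
    """Re-run the container's configured healthcheck, like the systemd-triggered
    'podman healthcheck run' entries above; treat a non-zero exit as a failed check."""
    result = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # Container name taken from the log entries above.
    name = "podman_exporter"
    status = "healthy" if healthcheck_passes(name) else "unhealthy"
    print(f"{name}: {status}")
```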
Nov 23 04:35:39 localhost systemd[1]: session-55.scope: Consumed 1min 15.083s CPU time. Nov 23 04:35:39 localhost systemd-logind[760]: Session 55 logged out. Waiting for processes to exit. Nov 23 04:35:39 localhost systemd-logind[760]: Removed session 55. Nov 23 04:35:39 localhost systemd[1]: var-lib-containers-storage-overlay-bc7298092e843edd1007408afda0c49e162b58c616e74027b863211eb586108a-merged.mount: Deactivated successfully. Nov 23 04:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:35:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35972 DF PROTO=TCP SPT=60194 DPT=9105 SEQ=4165317657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F748F0000000001030307) Nov 23 04:35:39 localhost podman[246610]: 2025-11-23 09:35:39.896017233 +0000 UTC m=+0.082441207 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:35:39 localhost podman[246610]: 2025-11-23 09:35:39.901027187 +0000 UTC m=+0.087451101 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118) Nov 23 04:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. Nov 23 04:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:35:41 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. 
Nov 23 04:35:41 localhost podman[246626]: 2025-11-23 09:35:41.761952671 +0000 UTC m=+0.277281664 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 04:35:41 localhost podman[246626]: 2025-11-23 09:35:41.847900014 +0000 UTC m=+0.363229077 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Nov 23 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. Nov 23 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. 
Nov 23 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:43 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:35:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35973 DF PROTO=TCP SPT=60194 DPT=9105 SEQ=4165317657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F844F0000000001030307) Nov 23 04:35:44 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:35:44 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:35:44 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:35:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:35:45 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:35:45 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:35:45 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Nov 23 04:35:45 localhost podman[246653]: 2025-11-23 09:35:45.651311889 +0000 UTC m=+0.087304317 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3) Nov 23 04:35:45 localhost podman[246653]: 2025-11-23 09:35:45.658023796 +0000 UTC m=+0.094016234 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd) Nov 23 04:35:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:45 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52617 DF PROTO=TCP SPT=52120 DPT=9882 SEQ=1459882727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F90E30000000001030307) Nov 23 04:35:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:35:47 localhost podman[246672]: 2025-11-23 09:35:47.931338383 +0000 UTC m=+0.113406713 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:35:47 localhost podman[246672]: 2025-11-23 09:35:47.960454072 +0000 UTC m=+0.142522362 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:35:48 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:35:48 localhost systemd[1]: var-lib-containers-storage-overlay-67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd-merged.mount: Deactivated successfully. Nov 23 04:35:48 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:48 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:48 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:35:48 localhost systemd[1]: var-lib-containers-storage-overlay-152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e-merged.mount: Deactivated successfully. Nov 23 04:35:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42545 DF PROTO=TCP SPT=49812 DPT=9882 SEQ=2500475186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7F980F0000000001030307) Nov 23 04:35:51 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:51 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:51 localhost podman[239764]: time="2025-11-23T09:35:51Z" level=error msg="Getting root fs size for \"de86cae9b19410386ddac3de8333d9c9db5dce3c16a24aadaa0caa6077d6d840\": getting diffsize of layer \"3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34\" and its parent \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\": unmounting layer 3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34: replacing mount point \"/var/lib/containers/storage/overlay/3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34/merged\": device or resource busy" Nov 23 04:35:51 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
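The repeated overlayfs warnings and the podman "device or resource busy" error above both point at overlay layers under /var/lib/containers/storage/overlay that are still mounted while another mount reuses them as lowerdir/upperdir/workdir. One way to see which layer mounts are currently active is to read /proc/self/mounts; the sketch below assumes only the standard overlay mount options (lowerdir=, upperdir=, workdir=) and the storage path shown in the log, and is an illustration rather than a diagnostic tool used on this host.

```python
STORAGE = "/var/lib/containers/storage/overlay"  # path seen in the mount units above

def overlay_mounts(mounts_file: str = "/proc/self/mounts"):
    """Yield (mountpoint, options) for overlay mounts under the container storage tree."""
    with open(mounts_file) as mounts:
        for line in mounts:
            source, mountpoint, fstype, options, *_ = line.split()
            if fstype == "overlay" and mountpoint.startswith(STORAGE):
                yield mountpoint, options

if __name__ == "__main__":
    for mountpoint, options in overlay_mounts():
        print(mountpoint)
        for opt in options.split(","):
            if opt.startswith(("lowerdir=", "upperdir=", "workdir=")):
                print("   ", opt)
```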
Nov 23 04:35:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35974 DF PROTO=TCP SPT=60194 DPT=9105 SEQ=4165317657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7FA40F0000000001030307) Nov 23 04:35:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:53 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:53 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:54 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:54 localhost systemd[1]: var-lib-containers-storage-overlay-d9ec08b5fa040dfb6133696a337502d9c613a89061aa958f367079d48924e617-merged.mount: Deactivated successfully. Nov 23 04:35:54 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:54 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:54 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:35:54 localhost podman[246693]: 2025-11-23 09:35:54.655122625 +0000 UTC m=+0.092446375 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=) Nov 23 04:35:54 localhost podman[246693]: 2025-11-23 09:35:54.665769434 +0000 UTC m=+0.103093244 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350) Nov 23 04:35:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:54 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:54 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:56 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49283 DF PROTO=TCP SPT=48646 DPT=9102 SEQ=558947369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7FB5900000000001030307) Nov 23 04:35:56 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:56 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:35:56 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:35:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:57 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 23 04:35:57 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:57 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:35:57 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:57 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:35:58 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:58 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:58 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:35:59 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:35:59 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:35:59 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13206 DF PROTO=TCP SPT=53180 DPT=9100 SEQ=414079835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7FC40E0000000001030307) Nov 23 04:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-152cc360f83de3db72d1c9916a54c9e86bf0639bd8dcb2e8e7396d7b8e52034e-merged.mount: Deactivated successfully. Nov 23 04:36:00 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:00 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:00 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:01 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:36:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13207 DF PROTO=TCP SPT=53180 DPT=9100 SEQ=414079835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7FC80F0000000001030307) Nov 23 04:36:01 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:01 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:01 localhost nova_compute[229707]: 2025-11-23 09:36:01.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:36:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:36:02 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:02 localhost systemd[1]: tmp-crun.0BpVFz.mount: Deactivated successfully. Nov 23 04:36:02 localhost podman[246712]: 2025-11-23 09:36:02.919656293 +0000 UTC m=+0.095511410 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 04:36:02 localhost podman[246712]: 2025-11-23 09:36:02.930380385 +0000 UTC m=+0.106235582 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, 
name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 04:36:02 localhost nova_compute[229707]: 2025-11-23 09:36:02.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:36:02 localhost nova_compute[229707]: 2025-11-23 09:36:02.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:36:02 localhost nova_compute[229707]: 2025-11-23 09:36:02.975 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:36:02 localhost nova_compute[229707]: 2025-11-23 09:36:02.975 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:36:02 localhost nova_compute[229707]: 2025-11-23 09:36:02.975 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:36:02 localhost nova_compute[229707]: 2025-11-23 09:36:02.976 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute 
resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:36:02 localhost nova_compute[229707]: 2025-11-23 09:36:02.976 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:03 localhost nova_compute[229707]: 2025-11-23 09:36:03.434 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:36:03 localhost nova_compute[229707]: 2025-11-23 09:36:03.591 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:36:03 localhost nova_compute[229707]: 2025-11-23 09:36:03.592 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12898MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": 
"type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:36:03 localhost nova_compute[229707]: 2025-11-23 09:36:03.592 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:36:03 localhost nova_compute[229707]: 2025-11-23 09:36:03.593 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:36:03 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:36:03 localhost nova_compute[229707]: 2025-11-23 09:36:03.641 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:36:03 localhost nova_compute[229707]: 2025-11-23 09:36:03.641 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:36:03 localhost nova_compute[229707]: 2025-11-23 09:36:03.659 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-1e6bca165dc068117408960944f8051008cf6e2ead3af3bab76d84f596641295-merged.mount: Deactivated successfully. 
Nov 23 04:36:04 localhost nova_compute[229707]: 2025-11-23 09:36:04.159 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:36:04 localhost nova_compute[229707]: 2025-11-23 09:36:04.164 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:36:04 localhost nova_compute[229707]: 2025-11-23 09:36:04.177 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:36:04 localhost nova_compute[229707]: 2025-11-23 09:36:04.181 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:36:04 localhost nova_compute[229707]: 2025-11-23 09:36:04.181 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:36:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60760 DF PROTO=TCP SPT=52562 DPT=9100 SEQ=971688124 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7FD40F0000000001030307) Nov 23 04:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 23 04:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:36:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 23 04:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:05 localhost podman[246879]: 2025-11-23 09:36:05.622440132 +0000 UTC m=+0.807123014 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:36:05 localhost podman[246879]: 2025-11-23 09:36:05.655931926 +0000 UTC m=+0.840614838 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:36:05 localhost podman[246879]: unhealthy Nov 23 04:36:06 localhost nova_compute[229707]: 2025-11-23 09:36:06.178 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:36:06 localhost nova_compute[229707]: 2025-11-23 09:36:06.178 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:36:06 localhost nova_compute[229707]: 2025-11-23 09:36:06.194 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:36:06 localhost nova_compute[229707]: 2025-11-23 09:36:06.194 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Starting heal instance info cache 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:36:06 localhost nova_compute[229707]: 2025-11-23 09:36:06.194 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:36:06 localhost nova_compute[229707]: 2025-11-23 09:36:06.206 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:36:06 localhost nova_compute[229707]: 2025-11-23 09:36:06.207 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:36:06 localhost nova_compute[229707]: 2025-11-23 09:36:06.207 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:36:06 localhost nova_compute[229707]: 2025-11-23 09:36:06.207 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:36:06 localhost nova_compute[229707]: 2025-11-23 09:36:06.208 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:36:06 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 23 04:36:06 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 23 04:36:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:06 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:36:06 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. Nov 23 04:36:06 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:06 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:36:06 localhost podman[239764]: time="2025-11-23T09:36:06Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged: invalid argument" Nov 23 04:36:06 localhost podman[239764]: time="2025-11-23T09:36:06Z" level=error msg="Getting root fs size for \"e85231ec2e9ac989d4a05967c8ecee50bf2192d67a5312e71a42be3fa10fc73f\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": creating overlay mount to /var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/UWKSHRGV4S6O6XDE2QFRM5ZKX7,upperdir=/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/diff,workdir=/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/work,nodev,metacopy=on\": no such file or directory" Nov 23 04:36:07 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13209 DF PROTO=TCP SPT=53180 DPT=9100 SEQ=414079835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7FDFCF0000000001030307) Nov 23 04:36:07 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:07 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:07 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:07 localhost nova_compute[229707]: 2025-11-23 09:36:07.947 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:36:08 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:36:08 localhost systemd[1]: var-lib-containers-storage-overlay-27f21f97cd586034c01317a093c3b8fe15f0b3ee8df18d3620dc21eab57815fe-merged.mount: Deactivated successfully. 
Nov 23 04:36:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:36:09.717 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:36:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:36:09.717 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:36:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:36:09.717 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:36:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28577 DF PROTO=TCP SPT=44316 DPT=9105 SEQ=923958569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7FE98F0000000001030307) Nov 23 04:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. Nov 23 04:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-d9ec08b5fa040dfb6133696a337502d9c613a89061aa958f367079d48924e617-merged.mount: Deactivated successfully. Nov 23 04:36:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:36:11 localhost systemd[1]: tmp-crun.FPA7kX.mount: Deactivated successfully. 
Nov 23 04:36:11 localhost podman[246938]: 2025-11-23 09:36:11.924203743 +0000 UTC m=+0.107678706 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:36:11 localhost podman[246938]: 2025-11-23 09:36:11.957641125 +0000 UTC m=+0.141116088 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:36:12 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. Nov 23 04:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28578 DF PROTO=TCP SPT=44316 DPT=9105 SEQ=923958569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B7FF9500000000001030307) Nov 23 04:36:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:13 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:36:13 localhost podman[246957]: 2025-11-23 09:36:13.989970082 +0000 UTC m=+0.145338979 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Nov 23 04:36:14 localhost podman[246957]: 2025-11-23 09:36:14.064456902 +0000 UTC m=+0.219825849 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:36:16 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:36:16 localhost podman[246981]: 2025-11-23 09:36:16.112833453 +0000 UTC m=+0.095361915 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Nov 23 04:36:16 localhost podman[246981]: 2025-11-23 09:36:16.123551884 +0000 UTC m=+0.106080386 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:36:16 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:36:16 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:16 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3034 DF PROTO=TCP SPT=50528 DPT=9882 SEQ=1638765155 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8006130000000001030307) Nov 23 04:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:17 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:18 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45579 DF PROTO=TCP SPT=42780 DPT=9102 SEQ=557433583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B800EF40000000001030307) Nov 23 04:36:19 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:36:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
Nov 23 04:36:19 localhost podman[246998]: 2025-11-23 09:36:19.838860718 +0000 UTC m=+0.079043451 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:36:19 localhost podman[246998]: 2025-11-23 09:36:19.846275467 +0000 UTC m=+0.086458150 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. 
Nov 23 04:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 23 04:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-1e6bca165dc068117408960944f8051008cf6e2ead3af3bab76d84f596641295-merged.mount: Deactivated successfully. Nov 23 04:36:21 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:36:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28579 DF PROTO=TCP SPT=44316 DPT=9105 SEQ=923958569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B801A0F0000000001030307) Nov 23 04:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 23 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 23 04:36:23 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:23 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 23 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-8e7bd885aaea36c7d0396504cf30e2cbd3831af3abfb23178a603b6dde37fd5f-merged.mount: Deactivated successfully. Nov 23 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. Nov 23 04:36:23 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:23 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. Nov 23 04:36:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20893 DF PROTO=TCP SPT=35880 DPT=9101 SEQ=1967945574 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B80240F0000000001030307) Nov 23 04:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 23 04:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:25 localhost systemd[1]: var-lib-containers-storage-overlay-27f21f97cd586034c01317a093c3b8fe15f0b3ee8df18d3620dc21eab57815fe-merged.mount: Deactivated successfully. Nov 23 04:36:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:36:26 localhost systemd[1]: tmp-crun.QvENMA.mount: Deactivated successfully. 
Nov 23 04:36:26 localhost podman[247020]: 2025-11-23 09:36:26.899724149 +0000 UTC m=+0.089708461 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Nov 23 04:36:26 localhost podman[247020]: 2025-11-23 09:36:26.950429184 +0000 UTC m=+0.140413456 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.component=ubi9-minimal-container) Nov 23 04:36:27 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. Nov 23 04:36:27 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:36:27 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:36:27 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Nov 23 04:36:28 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Nov 23 04:36:28 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. 
Nov 23 04:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully. Nov 23 04:36:30 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:36:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25960 DF PROTO=TCP SPT=42938 DPT=9100 SEQ=1257928008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B80393E0000000001030307) Nov 23 04:36:30 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:36:30 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully. Nov 23 04:36:30 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:30 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully. Nov 23 04:36:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25961 DF PROTO=TCP SPT=42938 DPT=9100 SEQ=1257928008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B803D4F0000000001030307) Nov 23 04:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully. Nov 23 04:36:32 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:32 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:33 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully. Nov 23 04:36:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. 
Nov 23 04:36:33 localhost podman[247040]: 2025-11-23 09:36:33.904064365 +0000 UTC m=+0.091868168 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 04:36:33 localhost podman[247040]: 2025-11-23 09:36:33.914369242 +0000 UTC m=+0.102173025 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:36:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37447 DF PROTO=TCP SPT=53618 DPT=9100 SEQ=2904292758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B80480F0000000001030307) Nov 23 04:36:34 localhost sshd[247058]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:36:34 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Nov 23 04:36:34 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:34 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 23 04:36:34 localhost systemd[1]: var-lib-containers-storage-overlay-9c4111208bb4df9dc3ebb7e3559f6e6ef9e3a799913516b11d0a24f531e03ec5-merged.mount: Deactivated successfully. Nov 23 04:36:34 localhost systemd-logind[760]: New session 56 of user zuul. Nov 23 04:36:34 localhost systemd[1]: Started Session 56 of User zuul. Nov 23 04:36:34 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:36:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:34 localhost python3.9[247154]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:34 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:35 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 23 04:36:35 localhost python3.9[247264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:36:35 localhost python3.9[247352]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890594.903236-3065-167476534572414/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:35 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 23 04:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-8e7bd885aaea36c7d0396504cf30e2cbd3831af3abfb23178a603b6dde37fd5f-merged.mount: Deactivated successfully. Nov 23 04:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-8e7bd885aaea36c7d0396504cf30e2cbd3831af3abfb23178a603b6dde37fd5f-merged.mount: Deactivated successfully. Nov 23 04:36:36 localhost python3.9[247462]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. 
Nov 23 04:36:37 localhost podman[247487]: 2025-11-23 09:36:37.029074981 +0000 UTC m=+0.100925518 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:36:37 localhost podman[247487]: 2025-11-23 09:36:37.070533171 +0000 UTC m=+0.142383738 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:36:37 localhost podman[247487]: unhealthy Nov 23 04:36:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25963 DF PROTO=TCP SPT=42938 DPT=9100 SEQ=1257928008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B80550F0000000001030307) Nov 23 04:36:37 localhost python3.9[247593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:36:37 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Main process exited, code=exited, status=1/FAILURE Nov 23 04:36:37 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Failed with result 'exit-code'. Nov 23 04:36:37 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:36:37 localhost python3.9[247650]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:37 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:37 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:37 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:38 localhost python3.9[247760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:36:38 localhost systemd[1]: var-lib-containers-storage-overlay-e546b43998c4baeffb211550b485789cc63029e7ea531c7f56d56436d5a4e45a-merged.mount: Deactivated successfully. Nov 23 04:36:38 localhost systemd[1]: var-lib-containers-storage-overlay-e546b43998c4baeffb211550b485789cc63029e7ea531c7f56d56436d5a4e45a-merged.mount: Deactivated successfully. Nov 23 04:36:38 localhost python3.9[247817]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.khffixpf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:39 localhost python3.9[247927]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:36:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25613 DF PROTO=TCP SPT=59468 DPT=9105 SEQ=2568109852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B805ED30000000001030307) Nov 23 04:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 23 04:36:40 localhost python3.9[247984]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:36:41 localhost python3.9[248094]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:36:41 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Nov 23 04:36:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:42 localhost python3[248205]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Nov 23 04:36:43 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:43 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 23 04:36:43 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. 
Nov 23 04:36:43 localhost python3.9[248315]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:36:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25614 DF PROTO=TCP SPT=59468 DPT=9105 SEQ=2568109852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B806E8F0000000001030307) Nov 23 04:36:43 localhost python3.9[248372]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:44 localhost podman[248373]: 2025-11-23 09:36:44.604412076 +0000 UTC m=+0.081527585 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:36:44 localhost podman[248373]: 2025-11-23 09:36:44.635106412 +0000 UTC m=+0.112221921 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 23 04:36:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:44 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-905224559a7d60475da0e4c3f30d7498ad95ab3c6c514578f0762871cf74312f-merged.mount: Deactivated successfully. Nov 23 04:36:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 23 04:36:45 localhost python3.9[248500]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:36:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25964 DF PROTO=TCP SPT=42938 DPT=9100 SEQ=1257928008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B80760F0000000001030307) Nov 23 04:36:46 localhost python3.9[248557]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:36:46 localhost systemd[1]: tmp-crun.Jd3eTj.mount: Deactivated successfully. Nov 23 04:36:46 localhost podman[248667]: 2025-11-23 09:36:46.723818927 +0000 UTC m=+0.094730552 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true) Nov 23 04:36:46 localhost python3.9[248668]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:36:46 localhost podman[248667]: 2025-11-23 09:36:46.79235111 +0000 UTC m=+0.163262685 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, 
org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible) Nov 23 04:36:47 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:47 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:47 localhost python3.9[248749]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:36:47 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:47 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Nov 23 04:36:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:47 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:36:48 localhost python3.9[248870]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:36:48 localhost podman[248767]: 2025-11-23 09:36:48.032936044 +0000 UTC m=+0.730416754 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:36:48 localhost podman[248767]: 2025-11-23 09:36:48.044515311 +0000 UTC m=+0.741996021 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 04:36:48 localhost systemd[1]: var-lib-containers-storage-overlay-9c4111208bb4df9dc3ebb7e3559f6e6ef9e3a799913516b11d0a24f531e03ec5-merged.mount: Deactivated successfully. Nov 23 04:36:48 localhost python3.9[248935]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3039 DF PROTO=TCP SPT=50528 DPT=9882 SEQ=1638765155 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8082100000000001030307) Nov 23 04:36:49 localhost python3.9[249045]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:36:49 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:49 localhost python3.9[249135]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763890608.7924764-3440-261556499202869/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:50 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:36:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 23 04:36:50 localhost python3.9[249245]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:36:51 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:51 localhost systemd[1]: tmp-crun.pif6Ow.mount: Deactivated successfully. Nov 23 04:36:51 localhost podman[249356]: 2025-11-23 09:36:51.413012578 +0000 UTC m=+0.100870401 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:36:51 localhost podman[249356]: 2025-11-23 09:36:51.42183802 +0000 UTC m=+0.109695853 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:36:51 localhost podman[239764]: time="2025-11-23T09:36:51Z" level=error msg="Getting root fs size for \"fa9ec3ae0ab60f1eb51ec455fea675b1eb508ae78a05d4cf71d38662d67d4223\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy" Nov 23 04:36:51 localhost python3.9[249355]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:36:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25615 DF PROTO=TCP SPT=59468 DPT=9105 SEQ=2568109852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B808E100000000001030307) Nov 23 04:36:51 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:51 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:52 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:52 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:36:52 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 23 04:36:52 localhost python3.9[249493]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:53 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
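Annotation: the validation step above concatenates the edpm nftables fragments and pipes them to "nft -c -f -", which only parses the ruleset without applying it. A rough Python equivalent of that check-only run, using the file list taken from the log (whether nft is installed is host-dependent):

    import subprocess

    # Check-only validation: concatenate the edpm fragments and feed them to
    # `nft -c -f -`, which parses the ruleset but does not apply it.
    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def validate_ruleset() -> bool:
        ruleset = "".join(open(path).read() for path in FRAGMENTS)
        result = subprocess.run(["nft", "-c", "-f", "-"], input=ruleset,
                                text=True, capture_output=True)
        return result.returncode == 0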
Nov 23 04:36:53 localhost python3.9[249603]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:36:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:36:54 localhost python3.9[249714]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:36:54 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:36:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:36:54 localhost python3.9[249826]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:36:54 localhost podman[239764]: time="2025-11-23T09:36:54Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\"" Nov 23 04:36:54 localhost podman[239764]: @ - - [23/Nov/2025:09:31:37 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1" Nov 23 04:36:54 localhost systemd[1]: var-lib-containers-storage-overlay-4dbc0697cef0e6bef85d14d9d3d366e75bfed63e0c2589a8a8c9c892c41f35ba-merged.mount: Deactivated successfully. Nov 23 04:36:55 localhost python3.9[249939]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:36:56 localhost systemd[1]: session-56.scope: Deactivated successfully. Nov 23 04:36:56 localhost systemd[1]: session-56.scope: Consumed 12.744s CPU time. Nov 23 04:36:56 localhost systemd-logind[760]: Session 56 logged out. Waiting for processes to exit. Nov 23 04:36:56 localhost systemd-logind[760]: Removed session 56. 
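Annotation: the sequence above follows a marker-file pattern: edpm-rules.nft.changed is touched when the rules template changes, the chains file is loaded with "nft -f", the flush/rules/update-jumps fragments are applied only if the marker exists, and the marker is then removed. A compressed sketch of that flow, for illustration only; the real logic is driven by the Ansible tasks recorded in this journal.

    import subprocess
    from pathlib import Path

    # Apply the rule fragments only when the .changed marker left by the
    # template task is present, then clear the marker (paths from the log).
    MARKER = Path("/etc/nftables/edpm-rules.nft.changed")

    def apply_if_changed() -> None:
        if not MARKER.exists():
            return
        ruleset = "".join(Path(p).read_text() for p in (
            "/etc/nftables/edpm-flushes.nft",
            "/etc/nftables/edpm-rules.nft",
            "/etc/nftables/edpm-update-jumps.nft",
        ))
        subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)
        MARKER.unlink()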
Nov 23 04:36:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8954 DF PROTO=TCP SPT=44454 DPT=9102 SEQ=5692641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B809FCF0000000001030307) Nov 23 04:36:56 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:56 localhost systemd[1]: var-lib-containers-storage-overlay-905224559a7d60475da0e4c3f30d7498ad95ab3c6c514578f0762871cf74312f-merged.mount: Deactivated successfully. Nov 23 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:36:58 localhost podman[249957]: 2025-11-23 09:36:58.916859409 +0000 UTC m=+0.100530721 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, distribution-scope=public) Nov 23 04:36:58 localhost podman[249957]: 2025-11-23 09:36:58.930275793 +0000 UTC m=+0.113947095 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible) Nov 23 04:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:36:59 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:37:01 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:37:01 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:37:01 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 23 04:37:02 localhost sshd[249977]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:37:02 localhost systemd-logind[760]: New session 57 of user zuul. Nov 23 04:37:02 localhost systemd[1]: Started Session 57 of User zuul. Nov 23 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 23 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
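Annotation: the transient "/usr/bin/podman healthcheck run <id>" units that systemd starts above simply execute the container's configured healthcheck test (here '/openstack/healthcheck ...') and report healthy or unhealthy. A small wrapper showing how the same check can be invoked by hand; the container name is taken from the log.

    import subprocess

    # `podman healthcheck run` exits 0 when the container's healthcheck passes.
    def container_healthy(name: str = "openstack_network_exporter") -> bool:
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        return result.returncode == 0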
Nov 23 04:37:02 localhost nova_compute[229707]: 2025-11-23 09:37:02.945 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:37:02 localhost nova_compute[229707]: 2025-11-23 09:37:02.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:37:02 localhost nova_compute[229707]: 2025-11-23 09:37:02.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:37:02 localhost nova_compute[229707]: 2025-11-23 09:37:02.964 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:37:02 localhost nova_compute[229707]: 2025-11-23 09:37:02.964 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:37:02 localhost nova_compute[229707]: 2025-11-23 09:37:02.965 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:37:02 localhost nova_compute[229707]: 2025-11-23 09:37:02.965 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:37:02 localhost nova_compute[229707]: 2025-11-23 09:37:02.965 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:37:03 localhost sshd[250110]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:37:03 localhost python3.9[250111]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:03 localhost nova_compute[229707]: 2025-11-23 09:37:03.425 229711 DEBUG oslo_concurrency.processutils [None 
req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:37:03 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 23 04:37:03 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:37:03 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 23 04:37:03 localhost nova_compute[229707]: 2025-11-23 09:37:03.633 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:37:03 localhost nova_compute[229707]: 2025-11-23 09:37:03.636 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=13076MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:37:03 localhost nova_compute[229707]: 2025-11-23 09:37:03.636 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] 
Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:37:03 localhost nova_compute[229707]: 2025-11-23 09:37:03.637 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:37:03 localhost nova_compute[229707]: 2025-11-23 09:37:03.730 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:37:03 localhost nova_compute[229707]: 2025-11-23 09:37:03.731 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:37:03 localhost nova_compute[229707]: 2025-11-23 09:37:03.752 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:37:03 localhost python3.9[250225]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:04 localhost nova_compute[229707]: 2025-11-23 09:37:04.201 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:37:04 localhost nova_compute[229707]: 2025-11-23 09:37:04.207 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:37:04 localhost nova_compute[229707]: 2025-11-23 09:37:04.226 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:37:04 localhost nova_compute[229707]: 2025-11-23 09:37:04.229 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:37:04 localhost nova_compute[229707]: 2025-11-23 09:37:04.229 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:37:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:37:04 localhost podman[250357]: 2025-11-23 09:37:04.532764475 +0000 UTC m=+0.083604959 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 23 04:37:04 localhost podman[250357]: 2025-11-23 09:37:04.544170437 +0000 UTC m=+0.095010911 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 
'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, config_id=edpm) Nov 23 04:37:04 localhost python3.9[250356]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8955 DF PROTO=TCP SPT=44454 DPT=9102 SEQ=5692641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B80C0100000000001030307) Nov 23 04:37:05 localhost python3.9[250481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 23 04:37:06 localhost python3.9[250567]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890624.8494782-104-147849871776860/.source.yaml follow=False _original_basename=neutron_sriov_agent.yaml.j2 checksum=d3942d8476d006ea81540d2a1d96dd9d67f33f5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-4dbc0697cef0e6bef85d14d9d3d366e75bfed63e0c2589a8a8c9c892c41f35ba-merged.mount: Deactivated successfully. 
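Annotation: during the resource audit a few entries above, nova shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the RBD-backed storage. A sketch of running the same command and reading the cluster-wide free space; the 'stats'/'total_avail_bytes' keys are the usual ceph df JSON fields but should be treated as an assumption here.

    import json
    import subprocess

    # Run the same `ceph df` call as the resource tracker and report free GiB.
    def ceph_free_gib() -> float:
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            capture_output=True, text=True, check=True).stdout
        stats = json.loads(out)["stats"]  # assumed schema, see note above
        return stats["total_avail_bytes"] / 1024 ** 3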
Nov 23 04:37:06 localhost nova_compute[229707]: 2025-11-23 09:37:06.230 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:37:06 localhost nova_compute[229707]: 2025-11-23 09:37:06.230 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:37:06 localhost nova_compute[229707]: 2025-11-23 09:37:06.230 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-4dbc0697cef0e6bef85d14d9d3d366e75bfed63e0c2589a8a8c9c892c41f35ba-merged.mount: Deactivated successfully. Nov 23 04:37:06 localhost podman[239764]: @ - - [23/Nov/2025:09:31:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 140428 "" "Go-http-client/1.1" Nov 23 04:37:06 localhost podman_exporter[239970]: ts=2025-11-23T09:37:06.378Z caller=exporter.go:96 level=info msg="Listening on" address=:9882 Nov 23 04:37:06 localhost podman_exporter[239970]: ts=2025-11-23T09:37:06.379Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882 Nov 23 04:37:06 localhost podman_exporter[239970]: ts=2025-11-23T09:37:06.379Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882 Nov 23 04:37:06 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
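Annotation: the inventory record reported earlier (VCPU total 8 at allocation_ratio 16.0, MEMORY_MB total 15738 with 512 reserved, DISK_GB total 41) implies the following schedulable capacity, using the usual (total - reserved) * allocation_ratio overcommit arithmetic; shown only as a worked example.

    # Effective capacity implied by the inventory reported above.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for name, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{name}: {capacity:g}")  # VCPU: 128, MEMORY_MB: 15226, DISK_GB: 41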
Nov 23 04:37:06 localhost openstack_network_exporter[241732]: ERROR 09:37:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:37:06 localhost openstack_network_exporter[241732]: ERROR 09:37:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:37:06 localhost openstack_network_exporter[241732]: ERROR 09:37:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:37:06 localhost openstack_network_exporter[241732]: ERROR 09:37:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:37:06 localhost openstack_network_exporter[241732]: Nov 23 04:37:06 localhost openstack_network_exporter[241732]: ERROR 09:37:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:37:06 localhost openstack_network_exporter[241732]: Nov 23 04:37:06 localhost python3.9[250675]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:06 localhost nova_compute[229707]: 2025-11-23 09:37:06.942 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:37:07 localhost python3.9[250767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890626.3179688-149-43808792800665/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
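Annotation: the openstack_network_exporter errors above ("no control socket files found") mean the exporter cannot find the appctl control sockets for ovn-northd and ovsdb-server in its mounted run directories, which typically just means those daemons are not running on this node or their sockets are not where the exporter looks. A quick way to list which control sockets actually exist (run directories as mounted per the container config in the log):

    import glob

    # List the *.ctl control sockets the appctl calls above are looking for.
    for rundir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        sockets = glob.glob(f"{rundir}/*.ctl")
        print(rundir, "->", sockets or "no control sockets")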
Nov 23 04:37:07 localhost python3.9[250875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:07 localhost podman[250876]: 2025-11-23 09:37:07.898530838 +0000 UTC m=+0.080149152 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:37:07 localhost podman[250876]: 2025-11-23 09:37:07.912387766 +0000 UTC m=+0.094006100 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:37:07 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
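Annotation: podman_exporter is reported unhealthy here, while an earlier DROPPING entry shows remote SYNs to its port 9882 being discarded on br-ex. Probing the metrics endpoint locally helps separate "the exporter is down" from "the firewall is dropping remote probes"; a minimal sketch:

    from urllib import error, request

    # Local probe of the exporter's metrics endpoint (port taken from the log).
    def metrics_reachable(url: str = "http://127.0.0.1:9882/metrics") -> bool:
        try:
            with request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except (error.URLError, OSError):
            return False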
Nov 23 04:37:07 localhost nova_compute[229707]: 2025-11-23 09:37:07.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:37:07 localhost nova_compute[229707]: 2025-11-23 09:37:07.946 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:37:07 localhost nova_compute[229707]: 2025-11-23 09:37:07.946 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:37:07 localhost nova_compute[229707]: 2025-11-23 09:37:07.962 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:37:07 localhost nova_compute[229707]: 2025-11-23 09:37:07.963 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:37:08 localhost python3.9[250984]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890627.3862808-149-145875609513199/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:08 localhost python3.9[251109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:09 localhost python3.9[251228]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890628.4172676-149-274597568198864/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=cf90d5c75057f9272a77f0b4cd12be7235232c2d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:37:09.718 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:37:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:37:09.719 159415 DEBUG 
oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:37:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:37:09.719 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:37:09 localhost nova_compute[229707]: 2025-11-23 09:37:09.947 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:37:10 localhost python3.9[251370]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:11 localhost python3.9[251456]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890630.2808173-323-248588099368875/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=1067e04911e84d9dc262158a63dd8e464b0e5dfd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:11 localhost python3.9[251564]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.571 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.572 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 
09:37:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 
23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:37:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:37:12 localhost python3.9[251676]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:13 localhost python3.9[251786]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:13 localhost python3.9[251843]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:14 localhost python3.9[251953]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:14 localhost python3.9[252010]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root 
setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:37:14 localhost podman[252028]: 2025-11-23 09:37:14.907658764 +0000 UTC m=+0.082912527 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:37:14 localhost podman[252028]: 2025-11-23 09:37:14.943291423 +0000 UTC m=+0.118545176 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:37:14 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:37:15 localhost python3.9[252136]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:37:16 localhost python3.9[252246]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:17 localhost podman[239764]: time="2025-11-23T09:37:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:37:17 localhost podman[239764]: @ - - [23/Nov/2025:09:37:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142072 "" "Go-http-client/1.1" Nov 23 04:37:17 localhost podman[239764]: @ - - [23/Nov/2025:09:37:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15870 "" "Go-http-client/1.1" Nov 23 04:37:17 localhost python3.9[252303]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:37:17 localhost python3.9[252414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:37:18 localhost systemd[1]: tmp-crun.kL3kic.mount: Deactivated successfully. 
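Annotation: Ansible logs file modes as plain integers, so the mode=420 recorded for /etc/systemd/system-preset above is simply 0644 in octal notation, not an unusual permission set. Quick conversion:

    print(oct(420))        # 0o644
    print(int("0644", 8))  # 420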
Nov 23 04:37:18 localhost podman[252471]: 2025-11-23 09:37:18.20475162 +0000 UTC m=+0.086394754 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Nov 23 04:37:18 localhost podman[252471]: 2025-11-23 09:37:18.268415643 +0000 UTC m=+0.150058757 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Nov 23 04:37:18 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
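The entries above repeat a fixed health-probe pattern per container: systemd starts a transient `/usr/bin/podman healthcheck run <id>` unit, podman logs a `health_status` event followed by `exec_died`, and the transient unit deactivates. A minimal, illustrative Python sketch (not part of the deployment; the regex is written only against the field order visible in these lines, i.e. image=..., name=..., health_status=... right after the opening parenthesis) for pulling the container name and status out of such entries:

```python
import re

# Matches the podman "health_status" journal entries shown above.
HEALTH_RE = re.compile(
    r"container health_status (?P<cid>[0-9a-f]+) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
)

def parse_health_events(journal_lines):
    """Yield (name, status, container id) for each health_status entry."""
    for line in journal_lines:
        match = HEALTH_RE.search(line)
        if match:
            yield match.group("name"), match.group("status"), match.group("cid")

if __name__ == "__main__":
    sample = ("Nov 23 04:37:18 localhost podman[252471]: ... container health_status "
              "900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 "
              "(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, "
              "name=ovn_controller, health_status=healthy, ...)")
    print(list(parse_health_events([sample])))   # [('ovn_controller', 'healthy', '900d...')]
```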
Nov 23 04:37:18 localhost python3.9[252472]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:37:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16156 DF PROTO=TCP SPT=36500 DPT=9102 SEQ=495660819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B80F9540000000001030307) Nov 23 04:37:19 localhost auditd[726]: Audit daemon rotating log files Nov 23 04:37:20 localhost python3.9[252607]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:37:20 localhost systemd[1]: Reloading. Nov 23 04:37:20 localhost systemd-rc-local-generator[252632]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:37:20 localhost systemd-sysv-generator[252637]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:37:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16157 DF PROTO=TCP SPT=36500 DPT=9102 SEQ=495660819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B80FD4F0000000001030307) Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. 
Nov 23 04:37:20 localhost podman[252645]: 2025-11-23 09:37:20.597950995 +0000 UTC m=+0.079007998 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:37:20 localhost podman[252645]: 2025-11-23 09:37:20.639379711 +0000 UTC m=+0.120436674 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:37:20 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:37:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8956 DF PROTO=TCP SPT=44454 DPT=9102 SEQ=5692641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B81000F0000000001030307) Nov 23 04:37:21 localhost python3.9[252773]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:22 localhost python3.9[252830]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:37:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16158 DF PROTO=TCP SPT=36500 DPT=9102 SEQ=495660819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B81054F0000000001030307) Nov 23 04:37:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
Nov 23 04:37:22 localhost podman[252919]: 2025-11-23 09:37:22.886070888 +0000 UTC m=+0.067452611 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:37:22 localhost podman[252919]: 2025-11-23 09:37:22.899353608 +0000 UTC m=+0.080735361 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:37:22 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 04:37:23 localhost python3.9[252964]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:23 localhost python3.9[253021]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:37:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45585 DF PROTO=TCP SPT=42780 DPT=9102 SEQ=557433583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B810A100000000001030307) Nov 23 04:37:24 localhost python3.9[253131]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:37:24 localhost systemd[1]: Reloading. Nov 23 04:37:24 localhost systemd-rc-local-generator[253157]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:37:24 localhost systemd-sysv-generator[253162]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:24 localhost systemd[1]: Starting Create netns directory... Nov 23 04:37:24 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 23 04:37:24 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 23 04:37:24 localhost systemd[1]: Finished Create netns directory. 
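The recurring kernel `DROPPING:` entries (such as the one above, evidently a firewall rule on br-ex logging dropped TCP SYNs from 192.168.122.10 to port 9102 with that prefix) are plain key=value netfilter log records. A small illustrative helper, written only against the field layout visible in these lines, to turn one into a dict:

```python
import re

# Splits the KEY=value fields of a kernel netfilter log line; flag tokens
# without '=' (DF, SYN, OPT ...) are simply ignored by the pattern.
FIELD_RE = re.compile(r"(?P<key>[A-Z]+)=(?P<value>\S*)")

def parse_drop_line(line):
    fields = dict(FIELD_RE.findall(line))
    return {
        "in": fields.get("IN"),
        "src": fields.get("SRC"),
        "dst": fields.get("DST"),
        "proto": fields.get("PROTO"),
        "spt": fields.get("SPT"),
        "dpt": fields.get("DPT"),
    }

print(parse_drop_line(
    "DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e "
    "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 PROTO=TCP SPT=36500 DPT=9102"
))
```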
Nov 23 04:37:25 localhost python3.9[253284]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:37:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16159 DF PROTO=TCP SPT=36500 DPT=9102 SEQ=495660819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B81150F0000000001030307) Nov 23 04:37:26 localhost python3.9[253394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:37:27 localhost python3.9[253482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890646.0402813-734-49371374747217/.source.json _original_basename=.g3mhg5pt follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:37:28 localhost python3.9[253592]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:37:29 localhost systemd[1]: tmp-crun.Oikk5T.mount: Deactivated successfully. Nov 23 04:37:29 localhost podman[253846]: 2025-11-23 09:37:29.903150099 +0000 UTC m=+0.092196945 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, config_id=edpm, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, distribution-scope=public, architecture=x86_64, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Nov 23 04:37:29 localhost podman[253846]: 2025-11-23 09:37:29.913291732 +0000 UTC m=+0.102338608 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers) Nov 23 04:37:29 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:37:30 localhost python3.9[253920]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False Nov 23 04:37:31 localhost python3.9[254030]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:37:33 localhost python3.9[254140]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Nov 23 04:37:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16160 DF PROTO=TCP SPT=36500 DPT=9102 SEQ=495660819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8136100000000001030307) Nov 23 04:37:36 localhost openstack_network_exporter[241732]: ERROR 09:37:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:37:36 localhost openstack_network_exporter[241732]: ERROR 09:37:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:37:36 localhost openstack_network_exporter[241732]: ERROR 09:37:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:37:36 localhost openstack_network_exporter[241732]: ERROR 09:37:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:37:36 localhost openstack_network_exporter[241732]: Nov 23 04:37:36 localhost openstack_network_exporter[241732]: ERROR 09:37:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:37:36 localhost openstack_network_exporter[241732]: Nov 23 04:37:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:37:36 localhost systemd[1]: tmp-crun.b3w6kd.mount: Deactivated successfully. 
Nov 23 04:37:36 localhost podman[254222]: 2025-11-23 09:37:36.905810157 +0000 UTC m=+0.088297984 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 04:37:36 localhost podman[254222]: 2025-11-23 09:37:36.919375755 +0000 UTC m=+0.101863572 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 04:37:36 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:37:37 localhost python3[254296]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:37:37 localhost podman[254336]: Nov 23 04:37:37 localhost podman[254336]: 2025-11-23 09:37:37.750602196 +0000 UTC m=+0.088514381 container create 7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 23 04:37:37 localhost podman[254336]: 2025-11-23 09:37:37.709958943 +0000 UTC m=+0.047871128 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Nov 23 04:37:37 localhost python3[254296]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3 --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Nov 23 04:37:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:37:38 localhost podman[254485]: 2025-11-23 09:37:38.466631344 +0000 UTC m=+0.078270564 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:37:38 localhost podman[254485]: 2025-11-23 09:37:38.479255694 +0000 UTC m=+0.090894914 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:37:38 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
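The PODMAN-CONTAINER-DEBUG entry above shows the `podman create` command that edpm_container_manage generated from the neutron_sriov_agent config_data. Purely as an illustration of that mapping (this is not the module's actual code, and `render_podman_create` is a name invented here), the dict-to-flags translation looks roughly like:

```python
# Illustrative sketch only: reproduces the flag ordering visible in the
# PODMAN-CONTAINER-DEBUG entry above for the neutron_sriov_agent container.
def render_podman_create(name, config_id, config):
    args = ["podman", "create", "--name", name,
            "--conmon-pidfile", f"/run/{name}.pid"]
    for key, value in config.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    args += ["--label", f"config_id={config_id}",
             "--label", f"container_name={name}",
             "--label", "managed_by=edpm_ansible",
             "--label", f"config_data={config}"]          # Python-repr of the dict, as logged
    args += ["--log-driver", "journald", "--log-level", "info"]
    if config.get("net") == "host":
        args += ["--network", "host"]
    if config.get("privileged"):
        args.append("--privileged=True")
    if config.get("user"):
        args += ["--user", config["user"]]
    for volume in config.get("volumes", []):
        args += ["--volume", volume]
    args.append(config["image"])
    return args
```

Called with the config_data shown above, this yields the same flag sequence as the logged command; 'restart' and 'healthcheck' produce no create flags here, since those appear to be handled by the edpm systemd unit and the transient podman healthcheck runs instead.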
Nov 23 04:37:38 localhost python3.9[254484]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:37:39 localhost python3.9[254620]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:37:39 localhost python3.9[254675]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:37:40 localhost python3.9[254784]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890659.725972-998-10950655236880/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:37:40 localhost python3.9[254839]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:37:40 localhost systemd[1]: Reloading. Nov 23 04:37:40 localhost systemd-rc-local-generator[254861]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:37:40 localhost systemd-sysv-generator[254866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:41 localhost python3.9[254930]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:37:42 localhost systemd[1]: Reloading. Nov 23 04:37:43 localhost systemd-sysv-generator[254961]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:37:43 localhost systemd-rc-local-generator[254958]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:37:43 localhost systemd[1]: Starting neutron_sriov_agent container... Nov 23 04:37:43 localhost systemd[1]: tmp-crun.6VZrFl.mount: Deactivated successfully. Nov 23 04:37:43 localhost systemd[1]: Started libcrun container. 
Nov 23 04:37:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b4e338f5a2b4e7e4a1107700a823595d4c734851627d7249d4dea4573b8fb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Nov 23 04:37:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b4e338f5a2b4e7e4a1107700a823595d4c734851627d7249d4dea4573b8fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:37:43 localhost podman[254971]: 2025-11-23 09:37:43.376734368 +0000 UTC m=+0.120367549 container init 7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_sriov_agent, managed_by=edpm_ansible) Nov 23 04:37:43 localhost podman[254971]: 2025-11-23 09:37:43.387010116 +0000 UTC m=+0.130643297 container start 7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible) Nov 23 04:37:43 localhost podman[254971]: neutron_sriov_agent Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: + sudo -E kolla_set_configs Nov 23 04:37:43 localhost systemd[1]: 
Started neutron_sriov_agent container. Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Validating config file Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Copying service configuration files Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Writing out command to execute Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron/external Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: ++ cat /run_command Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: + CMD=/usr/bin/neutron-sriov-nic-agent Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: + ARGS= Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: + sudo kolla_copy_cacerts Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: + [[ ! -n '' ]] Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: + . 
kolla_extend_start Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: Running command: '/usr/bin/neutron-sriov-nic-agent' Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: + umask 0022 Nov 23 04:37:43 localhost neutron_sriov_agent[254985]: + exec /usr/bin/neutron-sriov-nic-agent Nov 23 04:37:44 localhost python3.9[255109]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:37:44 localhost systemd[1]: Stopping neutron_sriov_agent container... Nov 23 04:37:44 localhost systemd[1]: libpod-7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3.scope: Deactivated successfully. Nov 23 04:37:44 localhost systemd[1]: libpod-7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3.scope: Consumed 1.250s CPU time. Nov 23 04:37:44 localhost podman[255113]: 2025-11-23 09:37:44.651356288 +0000 UTC m=+0.067286759 container died 7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118) Nov 23 04:37:44 localhost podman[255113]: 2025-11-23 09:37:44.704046196 +0000 UTC m=+0.119976617 container cleanup 7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251118, config_id=neutron_sriov_agent, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:37:44 localhost podman[255113]: neutron_sriov_agent Nov 23 04:37:44 localhost podman[255125]: 2025-11-23 09:37:44.706835112 +0000 UTC m=+0.051422350 container cleanup 7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:37:44 localhost podman[255140]: 2025-11-23 09:37:44.787262335 +0000 UTC m=+0.054688110 container cleanup 7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_sriov_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 23 04:37:44 localhost podman[255140]: neutron_sriov_agent Nov 23 04:37:44 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully. 
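The kolla_set_configs trace above (logged identically on both container starts) loads /var/lib/kolla/config_files/config.json, copies the listed configuration files under the COPY_ALWAYS strategy, fixes permissions, and writes out the command that kolla_start will exec. A compressed illustrative sketch of that flow, assuming a config.json with 'command' and 'config_files' entries (the schema details are an assumption for the sketch; only the logged steps are taken from the journal):

```python
import json
import shutil
from pathlib import Path

# Illustrative sketch only, not the kolla source: mirrors the INFO steps
# logged by kolla_set_configs above. The 'config_files' entry fields
# (source/dest/perm) are assumed for this example.
def set_configs(config_path="/var/lib/kolla/config_files/config.json"):
    config = json.loads(Path(config_path).read_text())        # "Loading config file at ..."
    for entry in config.get("config_files", []):               # "Copying service configuration files"
        dest = Path(entry["dest"])
        if dest.exists():
            dest.unlink()                                       # "Deleting <dest>"
        shutil.copy(entry["source"], dest)                      # "Copying <source> to <dest>"
        dest.chmod(int(entry.get("perm", "0600"), 8))           # "Setting permission for <dest>"
    Path("/run_command").write_text(config["command"])          # "Writing out command to execute"
```

The entrypoint then reads /run_command (the `++ cat /run_command` line), runs kolla_copy_cacerts, and execs the command, here /usr/bin/neutron-sriov-nic-agent.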
Nov 23 04:37:44 localhost systemd[1]: Stopped neutron_sriov_agent container. Nov 23 04:37:44 localhost systemd[1]: Starting neutron_sriov_agent container... Nov 23 04:37:44 localhost systemd[1]: Started libcrun container. Nov 23 04:37:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b4e338f5a2b4e7e4a1107700a823595d4c734851627d7249d4dea4573b8fb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Nov 23 04:37:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b4e338f5a2b4e7e4a1107700a823595d4c734851627d7249d4dea4573b8fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:37:44 localhost podman[255151]: 2025-11-23 09:37:44.925914578 +0000 UTC m=+0.110152683 container init 7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, managed_by=edpm_ansible, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true) Nov 23 04:37:44 localhost podman[255151]: 2025-11-23 09:37:44.934594886 +0000 UTC m=+0.118833001 container start 7b7cac1346f87cd56965f757c3f18af13c031a8f88e0c7de4ff0fa3283675cb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9f3fb1e0d9d8b72db7f4a31de2bce5e099af2d170b33036cb70ee74cfbd0a1b3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=neutron_sriov_agent, 
org.label-schema.schema-version=1.0) Nov 23 04:37:44 localhost podman[255151]: neutron_sriov_agent Nov 23 04:37:44 localhost neutron_sriov_agent[255165]: + sudo -E kolla_set_configs Nov 23 04:37:44 localhost systemd[1]: Started neutron_sriov_agent container. Nov 23 04:37:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Validating config file Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Copying service configuration files Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Writing out command to execute Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/external Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/c8b37597a29e61f341a0e3f5416437aac1a5cd21cb3a407dd674c7a7a1ff41da Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: ++ cat /run_command Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: + CMD=/usr/bin/neutron-sriov-nic-agent Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: + ARGS= Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: + sudo kolla_copy_cacerts Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: + [[ ! -n '' ]] Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: + . 
kolla_extend_start Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: Running command: '/usr/bin/neutron-sriov-nic-agent' Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: + umask 0022 Nov 23 04:37:45 localhost neutron_sriov_agent[255165]: + exec /usr/bin/neutron-sriov-nic-agent Nov 23 04:37:45 localhost podman[255173]: 2025-11-23 09:37:45.077104948 +0000 UTC m=+0.081657403 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:37:45 localhost podman[255173]: 2025-11-23 09:37:45.10630057 +0000 UTC m=+0.110852965 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:37:45 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:37:45 localhost systemd[1]: tmp-crun.ZEtq3o.mount: Deactivated successfully. Nov 23 04:37:45 localhost systemd[1]: session-57.scope: Deactivated successfully. Nov 23 04:37:45 localhost systemd[1]: session-57.scope: Consumed 22.729s CPU time. Nov 23 04:37:45 localhost systemd-logind[760]: Session 57 logged out. Waiting for processes to exit. Nov 23 04:37:45 localhost systemd-logind[760]: Removed session 57. Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.609 2 INFO neutron.common.config [-] Logging enabled!#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.609 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.609 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.609 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.610 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.610 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.610 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005532584.localdomain'}#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.610 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-bad83464-acbf-40a9-a4b9-4eac33080573 - - - - - -] RPC agent_id: nic-switch-agent.np0005532584.localdomain#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.615 2 INFO neutron.agent.agent_extensions_manager [None req-bad83464-acbf-40a9-a4b9-4eac33080573 - - - - - -] Loaded agent extensions: ['qos']#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.615 2 INFO neutron.agent.agent_extensions_manager [None req-bad83464-acbf-40a9-a4b9-4eac33080573 - - - - - -] Initializing agent extension 'qos'#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.880 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-bad83464-acbf-40a9-a4b9-4eac33080573 - - - - - 
-] Agent initialized successfully, now running... #033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.881 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-bad83464-acbf-40a9-a4b9-4eac33080573 - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Nov 23 04:37:46 localhost neutron_sriov_agent[255165]: 2025-11-23 09:37:46.881 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-bad83464-acbf-40a9-a4b9-4eac33080573 - - - - - -] Agent out of sync with plugin!#033[00m Nov 23 04:37:47 localhost podman[239764]: time="2025-11-23T09:37:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:37:47 localhost podman[239764]: @ - - [23/Nov/2025:09:37:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144029 "" "Go-http-client/1.1" Nov 23 04:37:47 localhost podman[239764]: @ - - [23/Nov/2025:09:37:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16296 "" "Go-http-client/1.1" Nov 23 04:37:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:37:48 localhost podman[255217]: 2025-11-23 09:37:48.900762352 +0000 UTC m=+0.083631084 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:37:48 localhost podman[255217]: 2025-11-23 09:37:48.937126765 +0000 UTC m=+0.119995487 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller) Nov 23 04:37:48 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:37:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57479 DF PROTO=TCP SPT=57874 DPT=9102 SEQ=2248671532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B816E840000000001030307) Nov 23 04:37:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57480 DF PROTO=TCP SPT=57874 DPT=9102 SEQ=2248671532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B81728F0000000001030307) Nov 23 04:37:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:37:50 localhost podman[255243]: 2025-11-23 09:37:50.873178886 +0000 UTC m=+0.062389228 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3) Nov 23 04:37:50 localhost podman[255243]: 2025-11-23 
09:37:50.885122794 +0000 UTC m=+0.074333136 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:37:50 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:37:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16161 DF PROTO=TCP SPT=36500 DPT=9102 SEQ=495660819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B81760F0000000001030307) Nov 23 04:37:51 localhost sshd[255263]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:37:51 localhost systemd-logind[760]: New session 58 of user zuul. Nov 23 04:37:51 localhost systemd[1]: Started Session 58 of User zuul. Nov 23 04:37:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57481 DF PROTO=TCP SPT=57874 DPT=9102 SEQ=2248671532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B817A8F0000000001030307) Nov 23 04:37:52 localhost python3.9[255374]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:37:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8957 DF PROTO=TCP SPT=44454 DPT=9102 SEQ=5692641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B817E100000000001030307) Nov 23 04:37:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:37:53 localhost systemd[1]: tmp-crun.Ujrpoo.mount: Deactivated successfully. 
Nov 23 04:37:53 localhost podman[255489]: 2025-11-23 09:37:53.771788707 +0000 UTC m=+0.104035225 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:37:53 localhost podman[255489]: 2025-11-23 09:37:53.78128659 +0000 UTC m=+0.113533138 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:37:53 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 04:37:53 localhost python3.9[255488]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:37:54 localhost python3.9[255573]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:37:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57482 DF PROTO=TCP SPT=57874 DPT=9102 SEQ=2248671532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B818A4F0000000001030307) Nov 23 04:37:59 localhost python3.9[255685]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 23 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:38:00 localhost podman[255689]: 2025-11-23 09:38:00.325699611 +0000 UTC m=+0.095116549 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, version=9.6, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal) Nov 23 04:38:00 localhost podman[255689]: 2025-11-23 09:38:00.337444094 +0000 UTC m=+0.106861022 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:38:00 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:38:01 localhost python3.9[255818]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:38:02 localhost python3.9[255928]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:38:02 localhost python3.9[256038]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:38:03 localhost python3.9[256148]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:38:03 localhost python3.9[256258]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:38:04 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57483 DF PROTO=TCP SPT=57874 DPT=9102 SEQ=2248671532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B81AA0F0000000001030307) Nov 23 04:38:04 localhost python3.9[256368]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:38:04 localhost nova_compute[229707]: 2025-11-23 09:38:04.942 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:38:04 localhost nova_compute[229707]: 2025-11-23 09:38:04.961 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:38:04 localhost nova_compute[229707]: 2025-11-23 09:38:04.961 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:38:04 localhost nova_compute[229707]: 2025-11-23 09:38:04.962 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:38:04 localhost nova_compute[229707]: 2025-11-23 09:38:04.978 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:38:04 localhost nova_compute[229707]: 2025-11-23 09:38:04.978 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:38:04 localhost nova_compute[229707]: 2025-11-23 09:38:04.979 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:38:04 localhost nova_compute[229707]: 2025-11-23 09:38:04.979 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:38:04 localhost nova_compute[229707]: 2025-11-23 09:38:04.980 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:38:05 localhost nova_compute[229707]: 2025-11-23 09:38:05.443 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:38:05 localhost python3.9[256498]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:38:05 localhost nova_compute[229707]: 2025-11-23 09:38:05.606 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:38:05 localhost nova_compute[229707]: 2025-11-23 09:38:05.608 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12955MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:38:05 localhost nova_compute[229707]: 2025-11-23 09:38:05.608 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:38:05 localhost nova_compute[229707]: 2025-11-23 09:38:05.608 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:38:05 localhost nova_compute[229707]: 2025-11-23 09:38:05.684 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:38:05 localhost nova_compute[229707]: 2025-11-23 09:38:05.684 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:38:05 localhost nova_compute[229707]: 2025-11-23 09:38:05.702 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:38:06 localhost nova_compute[229707]: 2025-11-23 09:38:06.157 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:38:06 localhost nova_compute[229707]: 2025-11-23 09:38:06.163 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:38:06 localhost nova_compute[229707]: 2025-11-23 09:38:06.184 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:38:06 localhost nova_compute[229707]: 2025-11-23 09:38:06.188 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:38:06 localhost nova_compute[229707]: 2025-11-23 09:38:06.188 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:38:06 localhost python3.9[256632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:38:06 localhost openstack_network_exporter[241732]: ERROR 09:38:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:38:06 localhost openstack_network_exporter[241732]: ERROR 09:38:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:38:06 localhost openstack_network_exporter[241732]: ERROR 09:38:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:38:06 localhost openstack_network_exporter[241732]: ERROR 09:38:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:38:06 localhost openstack_network_exporter[241732]: Nov 23 04:38:06 localhost openstack_network_exporter[241732]: ERROR 09:38:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:38:06 localhost openstack_network_exporter[241732]: Nov 23 04:38:07 localhost python3.9[256720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890685.775221-278-39554720560743/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:38:07 localhost nova_compute[229707]: 2025-11-23 09:38:07.173 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:38:07 localhost nova_compute[229707]: 2025-11-23 09:38:07.174 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:38:07 localhost nova_compute[229707]: 2025-11-23 09:38:07.174 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:38:07 localhost python3.9[256828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:38:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:38:07 localhost podman[256845]: 2025-11-23 09:38:07.906807394 +0000 UTC m=+0.089993941 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:38:07 localhost podman[256845]: 2025-11-23 09:38:07.941160615 +0000 UTC m=+0.124347112 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible) Nov 23 04:38:07 localhost nova_compute[229707]: 2025-11-23 09:38:07.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:38:07 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:39:13 localhost nova_compute[229707]: 2025-11-23 09:39:13.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:39:14 localhost rsyslogd[759]: imjournal: 501 messages lost due to rate-limiting (20000 allowed within 600 seconds) Nov 23 04:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:39:16 localhost podman[262563]: 2025-11-23 09:39:16.892031802 +0000 UTC m=+0.076102591 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, 
org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:39:16 localhost podman[262563]: 2025-11-23 09:39:16.898424177 +0000 UTC m=+0.082495056 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:39:16 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. 
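
The healthcheck cycle recorded above (systemd starting a transient unit for /usr/bin/podman healthcheck run <id>, podman reporting health_status=healthy, then exec_died and the unit deactivating) can also be exercised by hand. A minimal Python sketch, assuming the podman CLI is on PATH; the container name is taken from the container_name= label in the entries above, and an exit code of 0 from `podman healthcheck run` means the configured healthcheck passed:

    import subprocess

    def run_healthcheck(container: str) -> bool:
        """Run podman's built-in healthcheck once; returncode 0 means healthy."""
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        # Name from the ovn_metadata_agent entries above.
        name = "ovn_metadata_agent"
        print(f"{name}: {'healthy' if run_healthcheck(name) else 'unhealthy'}")
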
Nov 23 04:39:17 localhost podman[239764]: time="2025-11-23T09:39:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:39:17 localhost podman[239764]: @ - - [23/Nov/2025:09:39:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:39:17 localhost podman[239764]: @ - - [23/Nov/2025:09:39:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16742 "" "Go-http-client/1.1" Nov 23 04:39:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47302 DF PROTO=TCP SPT=51670 DPT=9102 SEQ=1959819014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B82CE150000000001030307) Nov 23 04:39:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47303 DF PROTO=TCP SPT=51670 DPT=9102 SEQ=1959819014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B82D20F0000000001030307) Nov 23 04:39:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:39:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30893 DF PROTO=TCP SPT=50818 DPT=9102 SEQ=3023898509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B82D40F0000000001030307) Nov 23 04:39:20 localhost podman[262580]: 2025-11-23 09:39:20.893914734 +0000 UTC m=+0.082165955 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 23 04:39:20 localhost podman[262580]: 2025-11-23 09:39:20.997536465 +0000 UTC m=+0.185787656 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:39:21 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:39:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:39:21 localhost systemd[1]: tmp-crun.1SLiUA.mount: Deactivated successfully. Nov 23 04:39:21 localhost podman[262605]: 2025-11-23 09:39:21.906152998 +0000 UTC m=+0.089662085 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 04:39:21 localhost podman[262605]: 2025-11-23 09:39:21.915061755 +0000 UTC m=+0.098570862 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e 
(image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:39:21 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:39:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47304 DF PROTO=TCP SPT=51670 DPT=9102 SEQ=1959819014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B82DA0F0000000001030307) Nov 23 04:39:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32341 DF PROTO=TCP SPT=54628 DPT=9102 SEQ=3554774580 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B82DE100000000001030307) Nov 23 04:39:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
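
The recurring kernel "DROPPING:" entries above are netfilter LOG output (typically a LOG rule with a "DROPPING: " prefix paired with a DROP; the actual ruleset on this host is not shown in the journal). Their KEY=VALUE payload is easy to post-process; a small Python sketch that parses one of the payloads captured above (trailing TCP options omitted):

    # One kernel log payload from the entries above, minus the syslog prefix.
    LINE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e "
            "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 "
            "PREC=0x00 TTL=62 ID=47302 DF PROTO=TCP SPT=51670 DPT=9102 SEQ=1959819014 "
            "ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0")

    def parse_nflog(payload: str) -> dict:
        """Split KEY=VALUE tokens of a netfilter LOG entry; bare tokens (DF, SYN) become flags."""
        fields = {}
        for token in payload.split():
            if "=" in token:
                key, value = token.split("=", 1)
                fields[key] = value
            else:
                fields[token] = True
        return fields

    fields = parse_nflog(LINE.removeprefix("DROPPING: "))
    print(f"{fields['SRC']}:{fields['SPT']} -> {fields['DST']}:{fields['DPT']} proto {fields['PROTO']}")
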
Nov 23 04:39:25 localhost podman[262625]: 2025-11-23 09:39:25.893939149 +0000 UTC m=+0.080594454 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:39:25 localhost podman[262625]: 2025-11-23 09:39:25.905380896 +0000 UTC m=+0.092036191 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:39:25 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 04:39:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47305 DF PROTO=TCP SPT=51670 DPT=9102 SEQ=1959819014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B82E9CF0000000001030307) Nov 23 04:39:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:39:32 localhost podman[262649]: 2025-11-23 09:39:32.912626948 +0000 UTC m=+0.095030487 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git) Nov 23 04:39:32 localhost podman[262649]: 2025-11-23 09:39:32.954337925 +0000 UTC m=+0.136741484 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, maintainer=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:39:32 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:39:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47306 DF PROTO=TCP SPT=51670 DPT=9102 SEQ=1959819014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B830A0F0000000001030307) Nov 23 04:39:36 localhost openstack_network_exporter[241732]: ERROR 09:39:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:39:36 localhost openstack_network_exporter[241732]: ERROR 09:39:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:39:36 localhost openstack_network_exporter[241732]: ERROR 09:39:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:39:36 localhost openstack_network_exporter[241732]: ERROR 09:39:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:39:36 localhost openstack_network_exporter[241732]: Nov 23 04:39:36 localhost openstack_network_exporter[241732]: ERROR 09:39:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:39:36 localhost openstack_network_exporter[241732]: Nov 23 04:39:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:39:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
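
The openstack_network_exporter errors above ("no control socket files found for the ovs db server" / "ovn-northd") appear to come from the exporter probing for *.ctl control sockets under the OVS and OVN run directories that are mounted into its container ('/var/run/openvswitch:/run/openvswitch' and '/var/lib/openvswitch/ovn:/run/ovn' in its config_data) and finding none for those daemons. A sketch of the same lookup, assuming those conventional paths; the directory names are taken from the volume mounts above, not verified against the running exporter:

    from pathlib import Path

    # Run directories as mounted in the exporter's config_data above.
    RUN_DIRS = [Path("/run/openvswitch"), Path("/run/ovn")]

    def find_control_sockets(dirs):
        """Return every *.ctl unix control socket visible under the given run directories."""
        found = []
        for directory in dirs:
            if directory.is_dir():
                found.extend(sorted(directory.glob("*.ctl")))
        return found

    sockets = find_control_sockets(RUN_DIRS)
    if sockets:
        for sock in sockets:
            print("control socket:", sock)
    else:
        print("no control socket files found")  # same condition the exporter reports above
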
Nov 23 04:39:40 localhost podman[262670]: 2025-11-23 09:39:40.900363008 +0000 UTC m=+0.082638750 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:39:40 localhost podman[262671]: 2025-11-23 09:39:40.946402044 +0000 UTC m=+0.124888524 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:39:40 localhost podman[262671]: 2025-11-23 09:39:40.958360097 +0000 UTC m=+0.136846587 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 
'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:39:40 localhost podman[262670]: 2025-11-23 09:39:40.966588601 +0000 UTC m=+0.148864353 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 04:39:40 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:39:40 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:39:47 localhost podman[239764]: time="2025-11-23T09:39:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 04:39:47 localhost podman[239764]: @ - - [23/Nov/2025:09:39:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:39:47 localhost podman[239764]: @ - - [23/Nov/2025:09:39:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16750 "" "Go-http-client/1.1" Nov 23 04:39:47 localhost podman[262712]: 2025-11-23 09:39:47.201005363 +0000 UTC m=+0.088373615 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:39:47 localhost podman[262712]: 2025-11-23 09:39:47.236365705 +0000 UTC m=+0.123733937 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Nov 23 04:39:47 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:39:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8373 DF PROTO=TCP SPT=35724 DPT=9102 SEQ=104193543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8343440000000001030307) Nov 23 04:39:49 localhost sshd[262731]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:39:49 localhost systemd-logind[760]: New session 59 of user zuul. Nov 23 04:39:49 localhost systemd[1]: Started Session 59 of User zuul. Nov 23 04:39:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8374 DF PROTO=TCP SPT=35724 DPT=9102 SEQ=104193543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B83474F0000000001030307) Nov 23 04:39:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47307 DF PROTO=TCP SPT=51670 DPT=9102 SEQ=1959819014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B834A0F0000000001030307) Nov 23 04:39:51 localhost python3.9[262842]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:39:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
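
The podman[239764] access-log entries above show a client polling the libpod REST API over the podman socket (GET /v4.9.3/libpod/containers/json and .../containers/stats). The same query can be issued from Python's standard library by pointing an HTTP connection at the unix socket; /run/podman/podman.sock is the socket path mounted into podman_exporter in the entries above, and the endpoint string matches the logged request:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that talks to a unix domain socket instead of TCP."""

        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the GET requests logged above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    response = conn.getresponse()
    for container in json.loads(response.read()):
        print(container["Names"], container["State"])
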
Nov 23 04:39:51 localhost podman[262854]: 2025-11-23 09:39:51.89254783 +0000 UTC m=+0.071989549 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:39:51 localhost podman[262854]: 2025-11-23 09:39:51.972454248 +0000 UTC m=+0.151895977 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS) Nov 23 04:39:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:39:51 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:39:52 localhost podman[262889]: 2025-11-23 09:39:52.08343182 +0000 UTC m=+0.083471372 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:39:52 localhost podman[262889]: 2025-11-23 09:39:52.098260678 +0000 UTC m=+0.098300260 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118) Nov 23 04:39:52 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:39:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8375 DF PROTO=TCP SPT=35724 DPT=9102 SEQ=104193543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B834F4F0000000001030307) Nov 23 04:39:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30894 DF PROTO=TCP SPT=50818 DPT=9102 SEQ=3023898509 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B83520F0000000001030307) Nov 23 04:39:53 localhost python3.9[262998]: ansible-ansible.builtin.service_facts Invoked Nov 23 04:39:53 localhost network[263015]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:39:53 localhost network[263016]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:39:53 localhost network[263017]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:39:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:39:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:39:56 localhost podman[263114]: 2025-11-23 09:39:56.041122833 +0000 UTC m=+0.080579981 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:39:56 localhost podman[263114]: 2025-11-23 09:39:56.078414566 +0000 UTC m=+0.117871664 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:39:56 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:39:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8376 DF PROTO=TCP SPT=35724 DPT=9102 SEQ=104193543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B835F0F0000000001030307) Nov 23 04:40:00 localhost python3.9[263273]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 23 04:40:00 localhost nova_compute[229707]: 2025-11-23 09:40:00.945 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:00 localhost nova_compute[229707]: 2025-11-23 09:40:00.947 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 23 04:40:01 localhost python3.9[263336]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:40:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:40:03 localhost systemd[1]: tmp-crun.oXuSya.mount: Deactivated successfully. 
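
node_exporter in the entries above is published on host port 9100 ('ports': ['9100:9100']) with most collectors disabled and the systemd collector filtered to edpm_/ovs/openvswitch/virt/rsyslog units. A short stdlib sketch that fetches its standard Prometheus /metrics endpoint and keeps only the systemd collector samples; the port comes from the config_data above, while the assumption that the endpoint is reachable from localhost is not confirmed by the journal:

    import urllib.request

    # Host port taken from the 'ports': ['9100:9100'] mapping in the config_data above.
    URL = "http://127.0.0.1:9100/metrics"

    with urllib.request.urlopen(URL, timeout=5) as response:
        body = response.read().decode("utf-8")

    # Print only the systemd collector samples, the main collector left enabled above.
    for line in body.splitlines():
        if line.startswith("node_systemd_"):
            print(line)
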
Nov 23 04:40:03 localhost podman[263339]: 2025-11-23 09:40:03.915698697 +0000 UTC m=+0.099317481 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, release=1755695350) Nov 23 04:40:03 localhost podman[263339]: 2025-11-23 09:40:03.957358415 +0000 UTC m=+0.140977199 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6) Nov 23 04:40:03 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 04:40:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8377 DF PROTO=TCP SPT=35724 DPT=9102 SEQ=104193543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B83800F0000000001030307) Nov 23 04:40:06 localhost nova_compute[229707]: 2025-11-23 09:40:06.003 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:06 localhost nova_compute[229707]: 2025-11-23 09:40:06.003 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:06 localhost python3.9[263470]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:40:06 localhost openstack_network_exporter[241732]: ERROR 09:40:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:40:06 localhost openstack_network_exporter[241732]: ERROR 09:40:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:40:06 localhost openstack_network_exporter[241732]: ERROR 09:40:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:40:06 localhost openstack_network_exporter[241732]: ERROR 09:40:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:40:06 localhost openstack_network_exporter[241732]: Nov 23 04:40:06 localhost openstack_network_exporter[241732]: ERROR 09:40:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:40:06 localhost openstack_network_exporter[241732]: Nov 23 04:40:07 localhost python3.9[263580]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:40:07 localhost nova_compute[229707]: 2025-11-23 09:40:07.943 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:07 localhost nova_compute[229707]: 2025-11-23 09:40:07.961 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:08 localhost nova_compute[229707]: 2025-11-23 09:40:08.945 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:08 localhost 
nova_compute[229707]: 2025-11-23 09:40:08.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:08 localhost nova_compute[229707]: 2025-11-23 09:40:08.946 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:40:08 localhost nova_compute[229707]: 2025-11-23 09:40:08.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:08 localhost nova_compute[229707]: 2025-11-23 09:40:08.968 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:40:08 localhost nova_compute[229707]: 2025-11-23 09:40:08.968 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:40:08 localhost nova_compute[229707]: 2025-11-23 09:40:08.969 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:40:08 localhost nova_compute[229707]: 2025-11-23 09:40:08.969 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:40:08 localhost nova_compute[229707]: 2025-11-23 09:40:08.970 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:40:08 localhost python3.9[263691]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:40:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:40:09.722 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:40:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:40:09.723 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:40:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:40:09.723 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:40:09 localhost nova_compute[229707]: 2025-11-23 09:40:09.798 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.829s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:40:09 localhost nova_compute[229707]: 2025-11-23 09:40:09.991 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:40:09 localhost nova_compute[229707]: 2025-11-23 09:40:09.993 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12849MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:40:09 localhost nova_compute[229707]: 2025-11-23 
09:40:09.994 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:40:09 localhost nova_compute[229707]: 2025-11-23 09:40:09.994 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:40:10 localhost python3.9[263825]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:10 localhost nova_compute[229707]: 2025-11-23 09:40:10.287 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:40:10 localhost nova_compute[229707]: 2025-11-23 09:40:10.288 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:40:10 localhost nova_compute[229707]: 2025-11-23 09:40:10.452 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 04:40:10 localhost nova_compute[229707]: 2025-11-23 09:40:10.739 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 04:40:10 localhost nova_compute[229707]: 2025-11-23 09:40:10.739 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 
'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:40:10 localhost nova_compute[229707]: 2025-11-23 09:40:10.777 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 04:40:10 localhost nova_compute[229707]: 2025-11-23 09:40:10.802 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,HW_CPU_X86_AESNI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 23 04:40:10 localhost nova_compute[229707]: 2025-11-23 09:40:10.826 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:40:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:40:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:40:11 localhost nova_compute[229707]: 2025-11-23 09:40:11.285 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:40:11 localhost nova_compute[229707]: 2025-11-23 09:40:11.291 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:40:11 localhost nova_compute[229707]: 2025-11-23 09:40:11.317 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:40:11 localhost nova_compute[229707]: 2025-11-23 09:40:11.320 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:40:11 localhost nova_compute[229707]: 2025-11-23 09:40:11.321 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:40:11 localhost nova_compute[229707]: 2025-11-23 09:40:11.321 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:11 localhost nova_compute[229707]: 2025-11-23 09:40:11.322 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 23 04:40:11 localhost podman[263956]: 2025-11-23 09:40:11.325420971 +0000 UTC m=+0.099373433 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 04:40:11 localhost nova_compute[229707]: 2025-11-23 09:40:11.367 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 23 04:40:11 localhost nova_compute[229707]: 2025-11-23 09:40:11.367 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:11 localhost podman[263957]: 2025-11-23 09:40:11.36257807 +0000 UTC m=+0.133133377 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:40:11 localhost podman[263956]: 2025-11-23 09:40:11.387931534 +0000 UTC m=+0.161884026 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 
'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118) Nov 23 04:40:11 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:40:11 localhost podman[263957]: 2025-11-23 09:40:11.400231714 +0000 UTC m=+0.170787051 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:40:11 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:40:11 localhost python3.9[263955]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:40:12 localhost python3.9[264108]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:40:13 localhost nova_compute[229707]: 2025-11-23 09:40:13.396 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:13 localhost nova_compute[229707]: 2025-11-23 09:40:13.396 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:13 localhost nova_compute[229707]: 2025-11-23 09:40:13.397 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:40:13 localhost nova_compute[229707]: 2025-11-23 09:40:13.397 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:40:13 localhost nova_compute[229707]: 2025-11-23 09:40:13.410 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:40:13 localhost python3.9[264220]: ansible-ansible.builtin.service_facts Invoked Nov 23 04:40:13 localhost network[264237]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:40:13 localhost network[264238]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:40:13 localhost network[264239]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:40:13 localhost nova_compute[229707]: 2025-11-23 09:40:13.945 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:40:17 localhost podman[239764]: time="2025-11-23T09:40:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:40:17 localhost podman[239764]: @ - - [23/Nov/2025:09:40:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:40:17 localhost podman[239764]: @ - - [23/Nov/2025:09:40:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16746 "" "Go-http-client/1.1" Nov 23 04:40:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:40:17 localhost podman[264467]: 2025-11-23 09:40:17.89333766 +0000 UTC m=+0.077526858 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS) Nov 23 04:40:17 localhost podman[264467]: 2025-11-23 09:40:17.924089281 +0000 UTC m=+0.108278469 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Nov 23 04:40:17 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:40:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57110 DF PROTO=TCP SPT=42464 DPT=9102 SEQ=3679089704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B83B8740000000001030307) Nov 23 04:40:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57111 DF PROTO=TCP SPT=42464 DPT=9102 SEQ=3679089704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B83BC8F0000000001030307) Nov 23 04:40:20 localhost nova_compute[229707]: 2025-11-23 09:40:20.510 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:40:20 localhost python3.9[264578]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 23 04:40:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8378 DF PROTO=TCP SPT=35724 DPT=9102 SEQ=104193543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B83C00F0000000001030307) Nov 23 04:40:21 localhost python3.9[264688]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Nov 23 04:40:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57112 DF PROTO=TCP SPT=42464 DPT=9102 SEQ=3679089704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B83C48F0000000001030307) Nov 23 04:40:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. 
Nov 23 04:40:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:40:22 localhost podman[264774]: 2025-11-23 09:40:22.895517863 +0000 UTC m=+0.080203759 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3) Nov 23 04:40:22 localhost podman[264774]: 2025-11-23 09:40:22.910698993 +0000 UTC m=+0.095384929 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3) Nov 23 04:40:22 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:40:23 localhost podman[264777]: 2025-11-23 09:40:23.001273313 +0000 UTC m=+0.182216694 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:40:23 localhost podman[264777]: 2025-11-23 09:40:23.064336123 +0000 UTC m=+0.245279484 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 04:40:23 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:40:23 localhost python3.9[264828]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:40:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47308 DF PROTO=TCP SPT=51670 DPT=9102 SEQ=1959819014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B83C80F0000000001030307) Nov 23 04:40:23 localhost python3.9[264898]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:24 localhost python3.9[265008]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:25 localhost python3.9[265118]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:40:25 localhost python3.9[265228]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:40:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57113 DF PROTO=TCP SPT=42464 DPT=9102 SEQ=3679089704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B83D44F0000000001030307) Nov 23 04:40:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:40:26 localhost systemd[1]: tmp-crun.3XfKu0.mount: Deactivated successfully. 
Nov 23 04:40:26 localhost podman[265341]: 2025-11-23 09:40:26.670698024 +0000 UTC m=+0.090803898 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:40:26 localhost podman[265341]: 2025-11-23 09:40:26.68349993 +0000 UTC m=+0.103605794 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:40:26 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 04:40:26 localhost python3.9[265340]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:40:27 localhost python3.9[265473]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:40:28 localhost python3.9[265584]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:29 localhost python3.9[265694]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:29 localhost python3.9[265804]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:30 localhost python3.9[265914]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:31 localhost python3.9[266024]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:32 localhost python3.9[266134]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:40:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:40:34 localhost systemd[1]: tmp-crun.utrDE7.mount: Deactivated successfully. 
Nov 23 04:40:34 localhost podman[266247]: 2025-11-23 09:40:34.213317217 +0000 UTC m=+0.091990815 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.) 
Nov 23 04:40:34 localhost podman[266247]: 2025-11-23 09:40:34.22930591 +0000 UTC m=+0.107979538 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter) Nov 23 04:40:34 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 04:40:34 localhost python3.9[266246]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:40:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57114 DF PROTO=TCP SPT=42464 DPT=9102 SEQ=3679089704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B83F4100000000001030307) Nov 23 04:40:35 localhost python3.9[266376]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:40:35 localhost python3.9[266433]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:40:36 localhost python3.9[266543]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:40:36 localhost openstack_network_exporter[241732]: ERROR 09:40:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:40:36 localhost openstack_network_exporter[241732]: ERROR 09:40:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:40:36 localhost openstack_network_exporter[241732]: ERROR 09:40:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:40:36 localhost openstack_network_exporter[241732]: ERROR 09:40:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:40:36 localhost openstack_network_exporter[241732]: Nov 23 04:40:36 localhost openstack_network_exporter[241732]: ERROR 09:40:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:40:36 localhost openstack_network_exporter[241732]: Nov 23 04:40:36 localhost python3.9[266600]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:40:37 localhost python3.9[266710]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:39 localhost python3.9[266820]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:40:39 localhost python3.9[266877]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:40 localhost python3.9[266987]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:40:40 localhost python3.9[267044]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:40:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
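[editor's note] The kernel "DROPPING:" entries above use a fixed key=value layout (SRC, DST, PROTO, SPT, DPT, ...), which suggests a logging drop rule on br-ex. As a minimal, hypothetical sketch (not part of the deployment itself), the fields can be pulled out with a small Python parser:

import re

# Example record copied from the log above (truncated to the fields we parse).
line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e "
        "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TTL=62 "
        "ID=57114 DF PROTO=TCP SPT=42464 DPT=9102 SYN")

# key=value tokens; bare flags such as DF and SYN carry no value and are skipped here.
fields = dict(re.findall(r"(\w+)=(\S*)", line))

print(fields["SRC"], "->", fields["DST"], fields["PROTO"], "dport", fields["DPT"])
# 192.168.122.10 -> 192.168.122.106 TCP dport 9102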
Nov 23 04:40:41 localhost podman[267140]: 2025-11-23 09:40:41.911134186 +0000 UTC m=+0.089876020 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 23 04:40:41 localhost podman[267140]: 2025-11-23 09:40:41.951381421 +0000 UTC m=+0.130123245 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:40:41 localhost systemd[1]: tmp-crun.idaTwk.mount: Deactivated successfully. Nov 23 04:40:41 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:40:41 localhost podman[267143]: 2025-11-23 09:40:41.966504558 +0000 UTC m=+0.142169436 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:40:41 localhost podman[267143]: 2025-11-23 09:40:41.974174295 +0000 UTC m=+0.149839153 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:40:41 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:40:42 localhost python3.9[267174]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:40:42 localhost systemd[1]: Reloading. Nov 23 04:40:42 localhost systemd-rc-local-generator[267226]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:40:42 localhost systemd-sysv-generator[267229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
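[editor's note] The ansible-ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started for edpm-container-shutdown) is what triggers the "Reloading." line that follows. A rough equivalent of those three steps, sketched with plain systemctl calls rather than the module's actual implementation:

import subprocess

UNIT = "edpm-container-shutdown.service"

# daemon_reload=True: re-read the unit files and presets written just before this task
subprocess.run(["systemctl", "daemon-reload"], check=True)

# enabled=True: create the [Install] symlinks (the 91-edpm-container-shutdown.preset
# file installed above would make "systemctl preset" pick the same state)
subprocess.run(["systemctl", "enable", UNIT], check=True)

# state=started: start the unit now
subprocess.run(["systemctl", "start", UNIT], check=True)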
Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:43 localhost python3.9[267346]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:40:43 localhost python3.9[267403]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:44 localhost python3.9[267513]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:40:44 localhost python3.9[267570]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:45 localhost python3.9[267680]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:40:45 localhost systemd[1]: Reloading. Nov 23 04:40:45 localhost systemd-rc-local-generator[267705]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:40:45 localhost systemd-sysv-generator[267711]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:40:46 localhost systemd[1]: Starting Create netns directory... Nov 23 04:40:46 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 23 04:40:46 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 23 04:40:46 localhost systemd[1]: Finished Create netns directory. Nov 23 04:40:47 localhost podman[239764]: time="2025-11-23T09:40:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:40:47 localhost podman[239764]: @ - - [23/Nov/2025:09:40:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:40:47 localhost podman[239764]: @ - - [23/Nov/2025:09:40:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16740 "" "Go-http-client/1.1" Nov 23 04:40:47 localhost python3.9[267832]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:40:47 localhost python3.9[267942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:40:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:40:48 localhost systemd[1]: tmp-crun.CVtlfr.mount: Deactivated successfully. 
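[editor's note] The podman[239764] access-log entries above show the Prometheus podman exporter polling the libpod REST API (GET /v4.9.3/libpod/containers/json and .../stats) over the socket its config_data mounts at /run/podman/podman.sock. A self-contained sketch of the same query over that unix socket, assuming the socket path and API version seen in the log:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that speaks HTTP over a unix domain socket."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self._socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
# Same request the exporter issues in the access log above.
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
for c in containers:
    print(c["Names"], c["State"])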
Nov 23 04:40:48 localhost podman[268000]: 2025-11-23 09:40:48.235354262 +0000 UTC m=+0.089086606 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent) Nov 23 04:40:48 localhost podman[268000]: 2025-11-23 09:40:48.244314099 +0000 UTC m=+0.098046403 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:40:48 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:40:48 localhost python3.9[267999]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:40:49 localhost python3.9[268128]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:40:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62544 DF PROTO=TCP SPT=50824 DPT=9102 SEQ=42897196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B842DA40000000001030307) Nov 23 04:40:50 localhost python3.9[268238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:40:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62545 DF PROTO=TCP SPT=50824 DPT=9102 SEQ=42897196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B84318F0000000001030307) Nov 23 04:40:50 localhost python3.9[268295]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.wmhcqjj4 recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57115 DF PROTO=TCP SPT=42464 DPT=9102 SEQ=3679089704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8434100000000001030307) Nov 23 04:40:52 localhost python3.9[268405]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:40:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62546 DF PROTO=TCP SPT=50824 DPT=9102 SEQ=42897196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B84398F0000000001030307) Nov 23 04:40:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:40:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:40:53 localhost podman[268573]: 2025-11-23 09:40:53.42722009 +0000 UTC m=+0.085002579 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Nov 23 04:40:53 localhost podman[268573]: 2025-11-23 09:40:53.441271684 +0000 UTC m=+0.099054163 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251118) Nov 23 04:40:53 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:40:53 localhost systemd[1]: tmp-crun.OlBcVO.mount: Deactivated successfully. Nov 23 04:40:53 localhost podman[268574]: 2025-11-23 09:40:53.535750605 +0000 UTC m=+0.192952516 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:40:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8379 DF PROTO=TCP SPT=35724 DPT=9102 SEQ=104193543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B843E0F0000000001030307) Nov 23 04:40:53 localhost podman[268574]: 2025-11-23 09:40:53.64942218 +0000 UTC m=+0.306624121 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118) Nov 23 04:40:53 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:40:55 localhost python3.9[268727]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Nov 23 04:40:56 localhost python3.9[268837]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:40:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62547 DF PROTO=TCP SPT=50824 DPT=9102 SEQ=42897196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B84494F0000000001030307) Nov 23 04:40:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:40:56 localhost systemd[1]: tmp-crun.hIkvxJ.mount: Deactivated successfully. 
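[editor's note] The config_data=... blobs in the multipathd and ovn_controller health_status events above are printed as Python dict literals (single quotes, bare True), so they can be loaded back with ast.literal_eval rather than json. A short sketch on a trimmed excerpt of the multipathd entry:

import ast

# Trimmed excerpt of the config_data dict from the multipathd health_status event.
config_data_text = (
    "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
    "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', "
    "'test': '/openstack/healthcheck'}, "
    "'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', "
    "'net': 'host', 'privileged': True, 'restart': 'always'}"
)

cfg = ast.literal_eval(config_data_text)   # safe: literals only, no code execution
print(cfg["image"])
print(cfg["healthcheck"]["test"])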
Nov 23 04:40:56 localhost podman[268878]: 2025-11-23 09:40:56.89786975 +0000 UTC m=+0.078693727 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:40:56 localhost podman[268878]: 2025-11-23 09:40:56.91037712 +0000 UTC m=+0.091201077 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:40:56 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
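[editor's note] node_exporter above is started with --collector.systemd and --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, so only matching units are exported. Checking that filter against unit names that appear elsewhere in this log:

import re

# Filter taken from the node_exporter command line above.
unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

units = [
    "edpm_multipathd.service",        # matches: edpm_.*
    "virtqemud.service",              # matches: virt.*
    "openvswitch.service",            # matches: openvswitch
    "insights-client-boot.service",   # no match: filtered out of the metrics
]

for unit in units:
    print(unit, "->", bool(unit_include.fullmatch(unit)))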
Nov 23 04:40:57 localhost python3.9[268970]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Nov 23 04:41:02 localhost python3[269107]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:41:02 localhost python3[269107]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072",#012 "Digest": "sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-21T06:11:34.680484424Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 249489385,#012 "VirtualSize": 249489385,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/4f2d68ffc7b0d2ad7154f194ce01f6add8f68d1c87ebccb7dfe58b78cf788c91/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/4f2d68ffc7b0d2ad7154f194ce01f6add8f68d1c87ebccb7dfe58b78cf788c91/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012 "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012 "sha256:d9e3e9c6b6b086eeb756b403557bba77ecef73e97936fb3285a5484cd95a1b1a"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-18T01:56:49.795434035Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012 
"empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:49.795512415Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251118\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:52.547242013Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-21T06:10:01.947310748Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947327778Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947358359Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947372589Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94738527Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94739397Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:02.324930938Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:36.349393468Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:39.924297673Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:40.346524368Z",#012 Nov 23 04:41:03 localhost python3.9[269279]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:41:04 localhost python3.9[269391]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 
04:41:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:41:04 localhost podman[269447]: 2025-11-23 09:41:04.420590896 +0000 UTC m=+0.083828244 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal) Nov 23 04:41:04 localhost podman[269447]: 2025-11-23 09:41:04.434974629 +0000 UTC m=+0.098211967 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=) Nov 23 04:41:04 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
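[editor's note] The ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG entry a little above is a multi-line podman image inspect document flattened into one record, with each embedded newline escaped as #012 (octal for LF). Undoing that escape makes the payload parseable again; a small sketch on an excerpt:

import json

# Excerpt of the flattened inspect output, with newlines escaped as "#012".
flattened = (
    '[#012 {#012 "Id": "5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072",'
    '#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012 ],'
    '#012 "Architecture": "amd64",#012 "Os": "linux"#012 }#012]'
)

# "#012" stands in for a newline; restoring it gives ordinary multi-line JSON.
restored = flattened.replace("#012", "\n")
inspect = json.loads(restored)

print(inspect[0]["RepoTags"][0])
print(inspect[0]["Architecture"], inspect[0]["Os"])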
Nov 23 04:41:04 localhost python3.9[269446]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:41:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62548 DF PROTO=TCP SPT=50824 DPT=9102 SEQ=42897196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B846A100000000001030307) Nov 23 04:41:05 localhost python3.9[269573]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890864.6039674-1364-68097827550497/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:05 localhost python3.9[269628]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:41:05 localhost nova_compute[229707]: 2025-11-23 09:41:05.964 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:41:05 localhost nova_compute[229707]: 2025-11-23 09:41:05.965 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:41:06 localhost openstack_network_exporter[241732]: ERROR 09:41:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:41:06 localhost openstack_network_exporter[241732]: ERROR 09:41:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:41:06 localhost openstack_network_exporter[241732]: ERROR 09:41:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:41:06 localhost openstack_network_exporter[241732]: ERROR 09:41:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:41:06 localhost openstack_network_exporter[241732]: Nov 23 04:41:06 localhost openstack_network_exporter[241732]: ERROR 09:41:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:41:06 localhost openstack_network_exporter[241732]: Nov 23 04:41:07 localhost nova_compute[229707]: 2025-11-23 09:41:07.945 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:41:08 localhost python3.9[269738]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:41:09 localhost python3.9[269848]: 
ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:41:09.722 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:41:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:41:09.723 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:41:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:41:09.723 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:41:09 localhost nova_compute[229707]: 2025-11-23 09:41:09.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:41:10 localhost nova_compute[229707]: 2025-11-23 09:41:10.945 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:41:10 localhost nova_compute[229707]: 2025-11-23 09:41:10.946 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:41:10 localhost nova_compute[229707]: 2025-11-23 09:41:10.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:41:10 localhost nova_compute[229707]: 2025-11-23 09:41:10.964 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:41:10 localhost nova_compute[229707]: 2025-11-23 09:41:10.964 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:41:10 localhost nova_compute[229707]: 2025-11-23 09:41:10.964 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:41:10 localhost nova_compute[229707]: 2025-11-23 09:41:10.965 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:41:10 localhost nova_compute[229707]: 2025-11-23 09:41:10.965 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:41:11 localhost python3.9[269958]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 23 04:41:11 localhost nova_compute[229707]: 2025-11-23 09:41:11.423 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:41:11 localhost nova_compute[229707]: 2025-11-23 09:41:11.589 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:41:11 localhost nova_compute[229707]: 2025-11-23 09:41:11.590 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12836MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:41:11 localhost nova_compute[229707]: 2025-11-23 09:41:11.590 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:41:11 localhost nova_compute[229707]: 2025-11-23 09:41:11.590 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:41:11 localhost python3.9[270090]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Nov 23 04:41:11 localhost nova_compute[229707]: 2025-11-23 09:41:11.652 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:41:11 localhost nova_compute[229707]: 2025-11-23 09:41:11.652 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:41:11 localhost nova_compute[229707]: 2025-11-23 09:41:11.671 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:41:12 localhost nova_compute[229707]: 2025-11-23 09:41:12.135 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:41:12 localhost nova_compute[229707]: 2025-11-23 09:41:12.141 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:41:12 localhost nova_compute[229707]: 2025-11-23 09:41:12.161 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:41:12 localhost nova_compute[229707]: 2025-11-23 09:41:12.163 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:41:12 localhost nova_compute[229707]: 2025-11-23 09:41:12.164 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:41:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:41:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:41:12 localhost systemd[1]: tmp-crun.6zKWt5.mount: Deactivated successfully. 
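Editor's note: the resource-tracker audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" and derives the DISK_GB inventory from the cluster totals in that JSON. A minimal sketch of the same check, reusing the client id and conf path shown in the log; ceph_capacity() is a hypothetical helper and the exact JSON field names can vary between Ceph releases:

import json
import subprocess

def ceph_capacity(client="openstack", conf="/etc/ceph/ceph.conf"):
    """Run `ceph df --format=json` and return (total, avail) bytes."""
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    # Field names as observed on recent Ceph releases; adjust if yours differ.
    return stats["total_bytes"], stats["total_avail_bytes"]

if __name__ == "__main__":
    total, avail = ceph_capacity()
    print(f"total={total / 2**30:.1f} GiB avail={avail / 2**30:.1f} GiB")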
Nov 23 04:41:12 localhost podman[270224]: 2025-11-23 09:41:12.280252995 +0000 UTC m=+0.088213760 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:41:12 localhost podman[270224]: 2025-11-23 09:41:12.291386494 +0000 UTC m=+0.099347289 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:41:12 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
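Editor's note: each "Started /usr/bin/podman healthcheck run <id>" line above is a systemd-driven run of the container's configured healthcheck (the 'healthcheck.test' entry in the config_data), and the following health_status=healthy event is podman reporting the result. A minimal sketch of running the same check by hand, assuming podman is on PATH; run_healthcheck() is a hypothetical helper:

import subprocess
import sys

def run_healthcheck(container: str) -> bool:
    """Invoke podman's built-in healthcheck; exit status 0 means healthy."""
    result = subprocess.run(["podman", "healthcheck", "run", container])
    return result.returncode == 0

if __name__ == "__main__":
    name = sys.argv[1] if len(sys.argv) > 1 else "ceilometer_agent_compute"
    print("healthy" if run_healthcheck(name) else "unhealthy")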
Nov 23 04:41:12 localhost podman[270223]: 2025-11-23 09:41:12.339440632 +0000 UTC m=+0.147842921 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3) Nov 23 04:41:12 localhost podman[270223]: 2025-11-23 09:41:12.352362553 +0000 UTC m=+0.160764902 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute) Nov 23 04:41:12 localhost python3.9[270222]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:41:12 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:41:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:41:12 localhost python3.9[270321]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:13 localhost nova_compute[229707]: 2025-11-23 09:41:13.160 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:41:13 localhost nova_compute[229707]: 2025-11-23 09:41:13.161 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:41:13 localhost nova_compute[229707]: 2025-11-23 09:41:13.161 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:41:13 localhost nova_compute[229707]: 2025-11-23 09:41:13.162 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:41:13 localhost nova_compute[229707]: 2025-11-23 09:41:13.182 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:41:13 localhost python3.9[270431]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:14 localhost python3.9[270541]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 23 04:41:14 localhost nova_compute[229707]: 2025-11-23 09:41:14.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:41:17 localhost podman[239764]: time="2025-11-23T09:41:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:41:17 localhost podman[239764]: @ - - [23/Nov/2025:09:41:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:41:17 localhost podman[239764]: @ - - [23/Nov/2025:09:41:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16745 "" "Go-http-client/1.1" Nov 23 04:41:18 localhost python3.9[270775]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 23 04:41:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:41:18 localhost systemd[1]: tmp-crun.phhtSw.mount: Deactivated successfully. 
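Editor's note: the Ansible tasks above load nvme-fabrics immediately (community.general.modprobe), persist it via /etc/modules-load.d/nvme-fabrics.conf and a line in /etc/modules, and install nvme-cli with dnf. A minimal sketch of the same steps performed directly, assuming root and that the module exists for the running kernel:

import pathlib
import subprocess

MODULE = "nvme-fabrics"

# Load the module now (equivalent of the modprobe task).
subprocess.run(["modprobe", MODULE], check=True)

# Persist it for the next boot via systemd-modules-load.
conf = pathlib.Path("/etc/modules-load.d/nvme-fabrics.conf")
conf.write_text(MODULE + "\n")
conf.chmod(0o644)

# Install the userspace tooling (equivalent of the dnf task).
subprocess.run(["dnf", "-y", "install", "nvme-cli"], check=True)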
Nov 23 04:41:18 localhost podman[270780]: 2025-11-23 09:41:18.920704711 +0000 UTC m=+0.104291762 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118) Nov 23 04:41:18 localhost podman[270780]: 2025-11-23 09:41:18.951627948 +0000 UTC m=+0.135214989 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 04:41:18 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:41:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61565 DF PROTO=TCP SPT=46242 DPT=9102 SEQ=2467884391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B84A2D40000000001030307) Nov 23 04:41:19 localhost python3.9[270909]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61566 DF PROTO=TCP SPT=46242 DPT=9102 SEQ=2467884391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B84A6D00000000001030307) Nov 23 04:41:20 localhost python3.9[271019]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:41:20 localhost systemd[1]: Reloading. Nov 23 04:41:21 localhost systemd-rc-local-generator[271060]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:41:21 localhost systemd-sysv-generator[271066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62549 DF PROTO=TCP SPT=50824 DPT=9102 SEQ=42897196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B84AA100000000001030307) Nov 23 04:41:22 localhost python3.9[271180]: ansible-ansible.builtin.service_facts Invoked Nov 23 04:41:22 localhost network[271197]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 23 04:41:22 localhost network[271198]: 'network-scripts' will be removed from distribution in near future. Nov 23 04:41:22 localhost network[271199]: It is advised to switch to 'NetworkManager' instead for network management. Nov 23 04:41:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61567 DF PROTO=TCP SPT=46242 DPT=9102 SEQ=2467884391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B84AECF0000000001030307) Nov 23 04:41:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57116 DF PROTO=TCP SPT=42464 DPT=9102 SEQ=3679089704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B84B20F0000000001030307) Nov 23 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:41:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. 
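Editor's note: the kernel "DROPPING:" entries come from a netfilter LOG rule on br-ex; here they record repeated TCP SYNs from 192.168.122.10 to port 9102 being dropped. A small sketch for pulling the interface, addresses and ports out of such lines when triaging them; parse_drop() is a hypothetical helper and the field layout follows the entries above:

import re

DROP_RE = re.compile(
    r"DROPPING: IN=(?P<iface>\S*).*? SRC=(?P<src>\S+) DST=(?P<dst>\S+)"
    r".*?PROTO=(?P<proto>\S+) SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
)

def parse_drop(line: str):
    """Extract interface, addresses and ports from a kernel DROPPING log line."""
    m = DROP_RE.search(line)
    return m.groupdict() if m else None

sample = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 "
          "MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 "
          "DST=192.168.122.106 LEN=60 TOS=0x00 TTL=62 DF PROTO=TCP "
          "SPT=46242 DPT=9102 SYN")
print(parse_drop(sample))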
Nov 23 04:41:23 localhost podman[271248]: 2025-11-23 09:41:23.596299895 +0000 UTC m=+0.092989779 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible) Nov 23 04:41:23 localhost podman[271248]: 2025-11-23 09:41:23.611295664 +0000 UTC m=+0.107985568 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, 
org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:41:23 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:41:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:41:23 localhost podman[271277]: 2025-11-23 09:41:23.766780362 +0000 UTC m=+0.068923576 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:41:23 localhost podman[271277]: 2025-11-23 09:41:23.887466273 +0000 UTC m=+0.189609547 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 04:41:23 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:41:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61568 DF PROTO=TCP SPT=46242 DPT=9102 SEQ=2467884391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B84BE900000000001030307) Nov 23 04:41:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:41:27 localhost systemd[1]: tmp-crun.0fhapS.mount: Deactivated successfully. Nov 23 04:41:27 localhost podman[271477]: 2025-11-23 09:41:27.900502069 +0000 UTC m=+0.088894851 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:41:27 localhost podman[271477]: 2025-11-23 09:41:27.911615767 +0000 UTC m=+0.100008619 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:41:27 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:41:28 localhost python3.9[271483]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:41:28 localhost python3.9[271611]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:41:30 localhost python3.9[271722]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:41:31 localhost python3.9[271833]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:41:32 localhost python3.9[271944]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:41:33 localhost python3.9[272055]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:41:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61569 DF PROTO=TCP SPT=46242 DPT=9102 SEQ=2467884391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B84DE100000000001030307) Nov 23 04:41:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:41:34 localhost systemd[1]: tmp-crun.1DaVC2.mount: Deactivated successfully. Nov 23 04:41:34 localhost podman[272057]: 2025-11-23 09:41:34.923076271 +0000 UTC m=+0.096815237 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container) Nov 23 04:41:34 localhost podman[272057]: 2025-11-23 09:41:34.961474773 +0000 UTC m=+0.135213749 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container) Nov 23 04:41:34 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:41:35 localhost python3.9[272186]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:41:36 localhost python3.9[272297]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:41:36 localhost openstack_network_exporter[241732]: ERROR 09:41:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:41:36 localhost openstack_network_exporter[241732]: ERROR 09:41:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:41:36 localhost openstack_network_exporter[241732]: ERROR 09:41:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:41:36 localhost openstack_network_exporter[241732]: ERROR 09:41:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:41:36 localhost openstack_network_exporter[241732]: Nov 23 04:41:36 localhost openstack_network_exporter[241732]: ERROR 09:41:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:41:36 localhost openstack_network_exporter[241732]: Nov 23 04:41:37 localhost python3.9[272408]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:37 localhost python3.9[272518]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:38 localhost python3.9[272628]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:39 localhost python3.9[272738]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:40 localhost python3.9[272848]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:40 localhost python3.9[272958]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:41:42 localhost systemd[1]: tmp-crun.bODP3L.mount: Deactivated successfully. Nov 23 04:41:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. 
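Editor's note: the openstack_network_exporter errors a few entries back ("no control socket files found for the ovs db server" / "ovn-northd") indicate the exporter found no matching *.ctl unix sockets in the run directories it monitors. A small sketch to see which control sockets actually exist on the host, using the host paths mounted into the exporter per its config_data volumes:

import glob

# Run directories the exporter mounts (see the volumes in its config_data).
PATTERNS = [
    "/var/run/openvswitch/*.ctl",      # ovsdb-server / ovs-vswitchd control sockets
    "/var/lib/openvswitch/ovn/*.ctl",  # ovn-controller / ovn-northd control sockets
]

for pattern in PATTERNS:
    matches = glob.glob(pattern)
    print(pattern, "->", matches or "none found")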
Nov 23 04:41:42 localhost podman[273068]: 2025-11-23 09:41:42.457324695 +0000 UTC m=+0.087799208 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:41:42 localhost podman[273068]: 2025-11-23 09:41:42.489490548 +0000 UTC m=+0.119965081 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:41:42 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
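Editor's note: the run above retires the old tripleo_nova_* units in two passes: stop and disable each service, then delete its unit file from /usr/lib/systemd/system and /etc/systemd/system (a daemon-reload was issued earlier in the run). A minimal sketch of the same cleanup for a single unit, assuming root; the unit name is taken from the log:

import pathlib
import subprocess

UNIT = "tripleo_nova_compute.service"

# Stop and disable the unit; ignore errors if it is already gone.
subprocess.run(["systemctl", "disable", "--now", UNIT], check=False)

# Remove any leftover unit files, then reload systemd's view of them.
for base in ("/usr/lib/systemd/system", "/etc/systemd/system"):
    pathlib.Path(base, UNIT).unlink(missing_ok=True)
subprocess.run(["systemctl", "daemon-reload"], check=True)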
Nov 23 04:41:42 localhost python3.9[273069]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:42 localhost podman[273085]: 2025-11-23 09:41:42.58932506 +0000 UTC m=+0.122373879 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Nov 23 04:41:42 localhost podman[273085]: 2025-11-23 09:41:42.619671541 +0000 UTC m=+0.152720240 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 04:41:42 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:41:43 localhost python3.9[273219]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:44 localhost python3.9[273329]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:45 localhost python3.9[273439]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:45 localhost python3.9[273549]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:46 localhost python3.9[273659]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:46 localhost python3.9[273769]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:47 localhost podman[239764]: time="2025-11-23T09:41:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:41:47 localhost podman[239764]: @ - - [23/Nov/2025:09:41:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:41:47 localhost podman[239764]: @ - - [23/Nov/2025:09:41:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16744 "" "Go-http-client/1.1" Nov 23 04:41:47 localhost python3.9[273879]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:48 localhost python3.9[273989]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:48 localhost python3.9[274099]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:41:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54559 DF PROTO=TCP SPT=55260 DPT=9102 SEQ=1096244275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8518040000000001030307) Nov 23 04:41:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
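[editor's note] The kernel "DROPPING:" entries are in the format produced by a netfilter LOG rule with a "DROPPING: " prefix; they show repeated SYNs from 192.168.122.10 to port 9102 being dropped on br-ex. A hedged helper for pulling the key=value fields out of such a line:

    import re

    SAMPLE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 "
              "SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TTL=62 "
              "PROTO=TCP SPT=55260 DPT=9102 SYN")

    def parse_drop(line: str) -> dict:
        # Collect every KEY=VALUE token; bare flags like SYN carry no value and are skipped.
        fields = dict(m.group(1, 2) for m in re.finditer(r"(\w+)=(\S+)", line))
        return {k: fields.get(k) for k in ("IN", "SRC", "DST", "PROTO", "SPT", "DPT")}

    print(parse_drop(SAMPLE))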
Nov 23 04:41:49 localhost podman[274210]: 2025-11-23 09:41:49.591071745 +0000 UTC m=+0.082099766 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:41:49 localhost podman[274210]: 2025-11-23 09:41:49.596253544 +0000 UTC m=+0.087281565 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:41:49 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:41:49 localhost python3.9[274209]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:41:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54560 DF PROTO=TCP SPT=55260 DPT=9102 SEQ=1096244275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B851C0F0000000001030307) Nov 23 04:41:50 localhost python3.9[274336]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 23 04:41:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61570 DF PROTO=TCP SPT=46242 DPT=9102 SEQ=2467884391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B851E100000000001030307) Nov 23 04:41:51 localhost python3.9[274446]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 23 04:41:51 localhost systemd[1]: Reloading. Nov 23 04:41:51 localhost systemd-sysv-generator[274472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:41:51 localhost systemd-rc-local-generator[274468]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
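[editor's note] In the ansible-ansible.legacy.command entry above, "#012" is journald's octal escape for a newline; decoded, the task's shell snippet disables certmonger and masks it only when no local unit override exists. A hedged Python equivalent of that snippet:

    import os
    import subprocess

    def run(*cmd) -> int:
        return subprocess.run(cmd).returncode

    if run("systemctl", "is-active", "certmonger.service") == 0:
        run("systemctl", "disable", "--now", "certmonger.service")
        # Mask only if no local unit file overrides the packaged one,
        # mirroring "test -f ... || systemctl mask certmonger.service".
        if not os.path.exists("/etc/systemd/system/certmonger.service"):
            run("systemctl", "mask", "certmonger.service")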
Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:41:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54561 DF PROTO=TCP SPT=55260 DPT=9102 SEQ=1096244275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8524100000000001030307) Nov 23 04:41:52 localhost python3.9[274592]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:41:53 localhost python3.9[274703]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:41:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62550 DF PROTO=TCP SPT=50824 DPT=9102 SEQ=42897196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B85280F0000000001030307) Nov 23 04:41:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:41:53 localhost systemd[1]: tmp-crun.Sn4knA.mount: Deactivated successfully. 
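[editor's note] The "systemctl reset-failed" tasks here and below clear any failed state left behind by the removed tripleo_nova_* units so they no longer appear in "systemctl --failed". A minimal sketch covering the unit names that appear in this section:

    import subprocess

    UNITS = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    for unit in UNITS:
        # reset-failed is safe to repeat; a non-zero exit just means the unit
        # had no failed state to clear.
        subprocess.run(["systemctl", "reset-failed", unit], check=False)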
Nov 23 04:41:53 localhost podman[274705]: 2025-11-23 09:41:53.903767121 +0000 UTC m=+0.085712508 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd) Nov 23 04:41:53 localhost podman[274705]: 2025-11-23 09:41:53.914944442 +0000 UTC m=+0.096889849 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:41:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:41:53 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:41:54 localhost podman[274723]: 2025-11-23 09:41:54.017098022 +0000 UTC m=+0.078798431 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller) Nov 23 04:41:54 localhost podman[274723]: 2025-11-23 09:41:54.085500753 +0000 UTC m=+0.147201182 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:41:54 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
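[editor's note] The health_status/exec_died pairs above come from per-container healthchecks; a quick way to see the resulting state for all containers is "podman ps --format json". A hedged sketch (the exact JSON keys, and whether "Names" is a list or a string, vary slightly between podman versions):

    import json
    import subprocess

    out = subprocess.run(["podman", "ps", "--all", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout

    for ctr in json.loads(out):
        names = ctr.get("Names", "?")
        name = names[0] if isinstance(names, list) else names
        # "Status" usually embeds the health state, e.g. "Up 2 hours (healthy)"
        print(name, "-", ctr.get("Status", ""))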
Nov 23 04:41:55 localhost python3.9[274859]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:41:56 localhost python3.9[274970]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:41:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54562 DF PROTO=TCP SPT=55260 DPT=9102 SEQ=1096244275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8533CF0000000001030307) Nov 23 04:41:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:41:58 localhost podman[275082]: 2025-11-23 09:41:58.64474288 +0000 UTC m=+0.078989567 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:41:58 localhost podman[275082]: 2025-11-23 09:41:58.657124425 +0000 UTC m=+0.091371102 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', 
'--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:41:58 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:41:58 localhost python3.9[275081]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:41:59 localhost python3.9[275215]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:42:00 localhost sshd[275217]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:42:00 localhost python3.9[275328]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:42:01 localhost python3.9[275439]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:42:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5370 writes, 735 syncs, 7.31 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 04:42:03 localhost python3.9[275550]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:04 localhost python3.9[275660]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul 
path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54563 DF PROTO=TCP SPT=55260 DPT=9102 SEQ=1096244275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8554100000000001030307) Nov 23 04:42:05 localhost python3.9[275770]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:42:05 localhost podman[275881]: 2025-11-23 09:42:05.706014528 +0000 UTC m=+0.080961556 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Nov 23 04:42:05 localhost podman[275881]: 2025-11-23 09:42:05.742227625 +0000 UTC m=+0.117174593 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350) Nov 23 04:42:05 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:42:05 localhost python3.9[275880]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:05 localhost nova_compute[229707]: 2025-11-23 09:42:05.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:42:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:42:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5205 writes, 23K keys, 5205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5205 writes, 665 syncs, 7.83 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 04:42:06 localhost python3.9[276008]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:06 localhost 
openstack_network_exporter[241732]: ERROR 09:42:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:42:06 localhost openstack_network_exporter[241732]: ERROR 09:42:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:42:06 localhost openstack_network_exporter[241732]: ERROR 09:42:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:42:06 localhost openstack_network_exporter[241732]: ERROR 09:42:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:42:06 localhost openstack_network_exporter[241732]: Nov 23 04:42:06 localhost openstack_network_exporter[241732]: ERROR 09:42:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:42:06 localhost openstack_network_exporter[241732]: Nov 23 04:42:06 localhost nova_compute[229707]: 2025-11-23 09:42:06.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:42:07 localhost python3.9[276118]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:07 localhost python3.9[276228]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:07 localhost nova_compute[229707]: 2025-11-23 09:42:07.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:42:08 localhost python3.9[276338]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:09 localhost python3.9[276448]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:42:09.723 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock 
"_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:42:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:42:09.724 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:42:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:42:09.724 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:42:10 localhost nova_compute[229707]: 2025-11-23 09:42:10.946 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:42:11 localhost python3.9[276558]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:11 localhost nova_compute[229707]: 2025-11-23 09:42:11.942 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:42:11 localhost nova_compute[229707]: 2025-11-23 09:42:11.966 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:42:11 localhost nova_compute[229707]: 2025-11-23 09:42:11.981 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:42:11 localhost nova_compute[229707]: 2025-11-23 09:42:11.981 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:42:11 localhost nova_compute[229707]: 2025-11-23 09:42:11.982 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:42:11 localhost nova_compute[229707]: 2025-11-23 09:42:11.982 229711 DEBUG nova.compute.resource_tracker [None 
req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:42:11 localhost nova_compute[229707]: 2025-11-23 09:42:11.983 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:42:12 localhost nova_compute[229707]: 2025-11-23 09:42:12.512 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:42:12 localhost nova_compute[229707]: 2025-11-23 09:42:12.670 229711 WARNING nova.virt.libvirt.driver [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:42:12 localhost nova_compute[229707]: 2025-11-23 09:42:12.671 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12840MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:42:12 localhost nova_compute[229707]: 2025-11-23 09:42:12.672 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:42:12 localhost nova_compute[229707]: 2025-11-23 09:42:12.672 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:42:12 localhost nova_compute[229707]: 2025-11-23 09:42:12.740 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:42:12 localhost nova_compute[229707]: 2025-11-23 09:42:12.741 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:42:12 localhost nova_compute[229707]: 2025-11-23 09:42:12.784 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:42:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:42:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
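[editor's note] The openstack_network_exporter errors earlier in this section ("no control socket files found for the ovs db server") mean the exporter cannot find the ovsdb-server / ovs-vswitchd *.ctl control sockets in the OVS run directory it mounts (/var/run/openvswitch on the host, per its config_data). A hedged check for that condition:

    import glob

    # Host-side path, matching the '/var/run/openvswitch:/run/openvswitch'
    # volume in the exporter's config_data above.
    ctl_files = glob.glob("/var/run/openvswitch/*.ctl")
    if ctl_files:
        print("OVS control sockets present:", ctl_files)
    else:
        print("no *.ctl control sockets found; appctl-based OVS metrics will fail")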
Nov 23 04:42:12 localhost podman[276599]: 2025-11-23 09:42:12.923380138 +0000 UTC m=+0.108576728 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 04:42:12 localhost systemd[1]: tmp-crun.7gOGLI.mount: Deactivated successfully. 
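[editor's note] The resource tracker's disk probe above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf". A hedged sketch that runs the same command and reads the cluster totals; the "stats" keys are an assumption about the ceph df JSON layout, not something shown in the log:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out).get("stats", {})
    total = stats.get("total_bytes", 0)
    avail = stats.get("total_avail_bytes", 0)
    print(f"cluster total: {total / 2**30:.1f} GiB, available: {avail / 2**30:.1f} GiB")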
Nov 23 04:42:12 localhost podman[276600]: 2025-11-23 09:42:12.962899967 +0000 UTC m=+0.144860267 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:42:12 localhost podman[276600]: 2025-11-23 09:42:12.968350605 +0000 UTC m=+0.150310885 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:42:12 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:42:12 localhost podman[276599]: 2025-11-23 09:42:12.985171623 +0000 UTC m=+0.170368213 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible) Nov 23 04:42:12 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
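The health_status=healthy and exec_died pairs recorded above are produced by transient systemd units that invoke /usr/bin/podman healthcheck run <container-id>; that command executes the test configured in config_data (here /openstack/healthcheck compute) and exits 0 when the container is healthy. A minimal sketch of driving the same check from Python, assuming a local podman CLI and the container names seen in these entries:

import subprocess

def is_healthy(container: str) -> bool:
    # Same entry point the systemd units above use: podman healthcheck run <name-or-id>.
    # Exit code 0 means the configured healthcheck test passed.
    result = subprocess.run(["podman", "healthcheck", "run", container],
                            capture_output=True, text=True)
    return result.returncode == 0

for name in ("ceilometer_agent_compute", "podman_exporter"):
    print(name, "healthy" if is_healthy(name) else "unhealthy")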
Nov 23 04:42:13 localhost nova_compute[229707]: 2025-11-23 09:42:13.238 229711 DEBUG oslo_concurrency.processutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:42:13 localhost nova_compute[229707]: 2025-11-23 09:42:13.245 229711 DEBUG nova.compute.provider_tree [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:42:13 localhost nova_compute[229707]: 2025-11-23 09:42:13.263 229711 DEBUG nova.scheduler.client.report [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:42:13 localhost nova_compute[229707]: 2025-11-23 09:42:13.265 229711 DEBUG nova.compute.resource_tracker [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:42:13 localhost nova_compute[229707]: 2025-11-23 09:42:13.265 229711 DEBUG oslo_concurrency.lockutils [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:42:14 localhost nova_compute[229707]: 2025-11-23 09:42:14.244 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:42:14 localhost nova_compute[229707]: 2025-11-23 09:42:14.245 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:42:14 localhost nova_compute[229707]: 2025-11-23 09:42:14.245 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:42:14 localhost nova_compute[229707]: 2025-11-23 09:42:14.246 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:42:14 localhost nova_compute[229707]: 2025-11-23 09:42:14.263 229711 DEBUG nova.compute.manager [None 
req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:42:14 localhost nova_compute[229707]: 2025-11-23 09:42:14.263 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:42:14 localhost nova_compute[229707]: 2025-11-23 09:42:14.264 229711 DEBUG nova.compute.manager [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:42:16 localhost nova_compute[229707]: 2025-11-23 09:42:16.947 229711 DEBUG oslo_service.periodic_task [None req-8a7767f0-4b11-41cd-b868-b59ee124c5c5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:42:17 localhost podman[239764]: time="2025-11-23T09:42:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:42:17 localhost podman[239764]: @ - - [23/Nov/2025:09:42:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:42:17 localhost podman[239764]: @ - - [23/Nov/2025:09:42:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16750 "" "Go-http-client/1.1" Nov 23 04:42:18 localhost python3.9[276755]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Nov 23 04:42:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39358 DF PROTO=TCP SPT=44788 DPT=9102 SEQ=2083171469 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B858D330000000001030307) Nov 23 04:42:19 localhost sshd[276774]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:42:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:42:19 localhost systemd-logind[760]: New session 60 of user zuul. Nov 23 04:42:19 localhost systemd[1]: Started Session 60 of User zuul. 
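The resource tracker entries above report an unchanged inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052: 8 VCPU at allocation_ratio 16.0, 15738 MB of memory with 512 MB reserved, and 41 GB of disk. Placement treats schedulable capacity as (total - reserved) * allocation_ratio; a short sketch of that arithmetic over the logged dict (the capacity() helper is illustrative, not Nova code):

# Inventory exactly as logged by nova.scheduler.client.report above.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
}

def capacity(inv: dict) -> float:
    # Placement's capacity formula: (total - reserved) * allocation_ratio.
    return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

for rc, inv in inventory.items():
    print(rc, capacity(inv))   # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0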
Nov 23 04:42:19 localhost podman[276776]: 2025-11-23 09:42:19.876033356 +0000 UTC m=+0.070836834 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:42:19 localhost podman[276776]: 2025-11-23 09:42:19.910366816 +0000 UTC m=+0.105170334 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 04:42:19 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:42:19 localhost systemd[1]: session-60.scope: Deactivated successfully. Nov 23 04:42:19 localhost systemd-logind[760]: Session 60 logged out. Waiting for processes to exit. Nov 23 04:42:19 localhost systemd-logind[760]: Removed session 60. Nov 23 04:42:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39359 DF PROTO=TCP SPT=44788 DPT=9102 SEQ=2083171469 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8591500000000001030307) Nov 23 04:42:20 localhost python3.9[276902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:42:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54564 DF PROTO=TCP SPT=55260 DPT=9102 SEQ=1096244275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B85940F0000000001030307) Nov 23 04:42:21 localhost python3.9[276988]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890940.2173753-3037-212250233521354/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:21 localhost python3.9[277146]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:42:22 localhost python3.9[277218]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39360 DF PROTO=TCP SPT=44788 DPT=9102 SEQ=2083171469 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8599500000000001030307) Nov 23 04:42:22 localhost python3.9[277326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Nov 23 04:42:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61571 DF PROTO=TCP SPT=46242 DPT=9102 SEQ=2467884391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B859C100000000001030307) Nov 23 04:42:23 localhost python3.9[277412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890942.3073575-3037-11290450500295/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:42:24 localhost podman[277521]: 2025-11-23 09:42:24.903705017 +0000 UTC m=+0.081934417 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:42:24 localhost podman[277521]: 2025-11-23 09:42:24.947388544 +0000 UTC m=+0.125617904 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, 
container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 23 04:42:24 localhost podman[277522]: 2025-11-23 09:42:24.956964028 +0000 UTC m=+0.135484627 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:42:24 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 04:42:25 localhost python3.9[277520]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:42:25 localhost podman[277522]: 2025-11-23 09:42:25.048353106 +0000 UTC m=+0.226873645 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:42:25 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:42:25 localhost python3.9[277666]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890944.2811081-3037-202284950227009/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=4dc3e49f3c2a74cce1eec3b31509c1b3c95ac5ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:26 localhost python3.9[277774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:42:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39361 DF PROTO=TCP SPT=44788 DPT=9102 SEQ=2083171469 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B85A90F0000000001030307) Nov 23 04:42:27 localhost python3.9[277860]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890945.653513-3037-189604358533394/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:27 localhost python3.9[277968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:42:28 localhost python3.9[278054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890947.5289798-3037-84146248684587/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
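The recurring kernel DROPPING entries are netfilter LOG output for SYN packets from 192.168.122.10 to port 9102 arriving on br-ex; each record is a sequence of KEY=VALUE tokens plus a few bare flags such as DF and SYN. A small parser for that layout, sketched against the field format shown in these lines:

def parse_drop(line: str) -> dict:
    # Split a netfilter LOG line ("DROPPING: IN=... SRC=... DPT=...") into fields.
    fields, flags = {}, []
    payload = line.split("DROPPING:", 1)[1]
    for token in payload.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
        else:
            flags.append(token)      # bare flags such as DF or SYN
    fields["FLAGS"] = flags
    return fields

sample = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 SRC=192.168.122.10 "
          "DST=192.168.122.106 PROTO=TCP SPT=44788 DPT=9102 SYN")
print(parse_drop(sample)["DPT"])     # -> 9102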
Nov 23 04:42:28 localhost podman[278113]: 2025-11-23 09:42:28.905834024 +0000 UTC m=+0.091270365 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:42:28 localhost podman[278113]: 2025-11-23 09:42:28.916236635 +0000 UTC m=+0.101672926 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:42:28 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
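node_exporter above is published on host port 9100 with most collectors disabled and the systemd collector restricted to edpm_*, ovs*, openvswitch, virt* and rsyslog units. A sketch of scraping that endpoint and filtering one metric family; the port comes from the 'ports': ['9100:9100'] entry above, and node_systemd_unit_state is the metric name the upstream systemd collector exposes:

import urllib.request

with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
    text = resp.read().decode()

# Keep the per-unit state series that the unit-include regex above lets through.
for line in text.splitlines():
    if line.startswith("node_systemd_unit_state") and 'state="active"' in line:
        print(line)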
Nov 23 04:42:29 localhost python3.9[278185]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:42:29 localhost python3.9[278295]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:42:30 localhost python3.9[278405]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:42:31 localhost python3.9[278517]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:42:32 localhost python3.9[278625]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:42:32 localhost python3.9[278735]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:42:33 localhost python3.9[278790]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:34 localhost python3.9[278898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 23 04:42:34 localhost python3.9[278953]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 23 04:42:34 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39362 DF PROTO=TCP SPT=44788 DPT=9102 SEQ=2083171469 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B85CA0F0000000001030307) Nov 23 04:42:35 localhost python3.9[279063]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Nov 23 04:42:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:42:35 localhost systemd[1]: tmp-crun.01wdiE.mount: Deactivated successfully. Nov 23 04:42:35 localhost podman[279136]: 2025-11-23 09:42:35.89796161 +0000 UTC m=+0.081762442 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container) Nov 23 04:42:35 localhost podman[279136]: 2025-11-23 09:42:35.911285699 +0000 UTC m=+0.095086441 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Nov 23 04:42:35 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
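Each container event in this log carries the edpm-ansible config_data dict it was created from (image, net, privileged, user, ports, volumes, environment, healthcheck). As an illustration only, and not the edpm_container_manage module itself, here is a sketch of how such a dict maps onto podman run arguments, using a subset of the ovn_controller values seen earlier:

def podman_run_args(name: str, cfg: dict) -> list:
    # Illustrative mapping of an edpm-style config_data dict to a podman run command line.
    args = ["podman", "run", "--detach", "--name", name]
    if cfg.get("net") == "host":
        args += ["--network", "host"]
    if cfg.get("privileged"):
        args += ["--privileged"]
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    for port in cfg.get("ports", []):
        args += ["--publish", port]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    args.append(cfg["image"])
    return args

ovn_controller = {
    "image": "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
    "net": "host", "privileged": True, "user": "root",
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "volumes": ["/lib/modules:/lib/modules:ro", "/run:/run"],
}
print(" ".join(podman_run_args("ovn_controller", ovn_controller)))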
Nov 23 04:42:36 localhost python3.9[279192]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:42:36 localhost openstack_network_exporter[241732]: ERROR 09:42:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:42:36 localhost openstack_network_exporter[241732]: ERROR 09:42:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:42:36 localhost openstack_network_exporter[241732]: ERROR 09:42:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:42:36 localhost openstack_network_exporter[241732]: ERROR 09:42:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:42:36 localhost openstack_network_exporter[241732]: Nov 23 04:42:36 localhost openstack_network_exporter[241732]: ERROR 09:42:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:42:36 localhost openstack_network_exporter[241732]: Nov 23 04:42:37 localhost python3[279302]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:42:37 localhost python3[279302]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66",#012 "Digest": "sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-21T06:33:31.011385583Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211770748,#012 "VirtualSize": 1211770748,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238/diff:/var/lib/containers/storage/overlay/0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012 "UpperDir": 
"/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012 "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012 "sha256:5a71e5d7d31f15255619cb8b9384b708744757c93993652418b0f45b0c0931d5",#012 "sha256:b9b598b1eb0c08906fe1bc9a64fc0e72719a6197d83669d2eb4309e69a00aa62",#012 "sha256:33e3811ab7487b27336fdf94252d5a875b17efb438cbc4ffc943f851ad3eceb6"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-18T01:56:49.795434035Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:49.795512415Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251118\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:52.547242013Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-21T06:10:01.947310748Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947327778Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947358359Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947372589Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94738527Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94739397Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:02.324930938Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:36.349393468Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 
'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Nov 23 04:42:38 localhost python3.9[279473]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:42:39 localhost python3.9[279585]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Nov 23 04:42:40 localhost python3.9[279695]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 23 04:42:42 localhost python3[279805]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Nov 23 04:42:42 localhost python3[279805]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66",#012 "Digest": "sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-21T06:33:31.011385583Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211770748,#012 "VirtualSize": 1211770748,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238/diff:/var/lib/containers/storage/overlay/0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/diff",#012 "WorkDir": 
"/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012 "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012 "sha256:5a71e5d7d31f15255619cb8b9384b708744757c93993652418b0f45b0c0931d5",#012 "sha256:b9b598b1eb0c08906fe1bc9a64fc0e72719a6197d83669d2eb4309e69a00aa62",#012 "sha256:33e3811ab7487b27336fdf94252d5a875b17efb438cbc4ffc943f851ad3eceb6"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251118",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-18T01:56:49.795434035Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:49.795512415Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251118\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-18T01:56:52.547242013Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-21T06:10:01.947310748Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947327778Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947358359Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.947372589Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94738527Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:01.94739397Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:02.324930938Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-21T06:10:36.349393468Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main 
installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Nov 23 04:42:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:42:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:42:43 localhost systemd[1]: tmp-crun.8YeGas.mount: Deactivated successfully. Nov 23 04:42:43 localhost podman[279977]: 2025-11-23 09:42:43.464878653 +0000 UTC m=+0.096402363 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:42:43 localhost podman[279977]: 2025-11-23 09:42:43.477403739 +0000 UTC m=+0.108927449 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 23 04:42:43 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:42:43 localhost python3.9[279976]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:42:43 localhost podman[279978]: 2025-11-23 09:42:43.555690512 +0000 UTC m=+0.187240873 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:42:43 localhost podman[279978]: 2025-11-23 09:42:43.568463826 +0000 UTC m=+0.200014527 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:42:43 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
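The PODMAN-CONTAINER-DEBUG records emitted by ansible-edpm_container_manage above are podman image inspect JSON flattened by rsyslog-style control-character escaping: #012 for newline, #011 for tab, #033 for ESC, the same escapes visible throughout these entries. A sketch for turning such a record back into structured data, assuming that escaping convention and a record captured in full (the excerpts above are truncated):

import json

def unescape_journal(text: str) -> str:
    # Reverse the octal control-character escapes used in the log lines above.
    return (text.replace("#012", "\n")
                .replace("#011", "\t")
                .replace("#033", "\x1b"))

def parse_inspect(record: str) -> list:
    # record is everything after "PODMAN-CONTAINER-DEBUG: " on one journal line.
    return json.loads(unescape_journal(record))

# e.g. parse_inspect(raw)[0]["Config"]["Labels"]["tcib_build_tag"]
#      -> '7b76510d5d5adf2ccf627d29bb9dae76'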
Nov 23 04:42:44 localhost python3.9[280128]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:42:45 localhost python3.9[280237]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890964.4632292-3715-47382201331045/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:42:45 localhost python3.9[280292]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:42:46 localhost python3.9[280402]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:42:47 localhost podman[239764]: time="2025-11-23T09:42:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:42:47 localhost podman[239764]: @ - - [23/Nov/2025:09:42:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:42:47 localhost podman[239764]: @ - - [23/Nov/2025:09:42:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16742 "" "Go-http-client/1.1" Nov 23 04:42:47 localhost python3.9[280510]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:42:49 localhost python3.9[280618]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 23 04:42:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63085 DF PROTO=TCP SPT=54012 DPT=9102 SEQ=614466098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8602640000000001030307) Nov 23 04:42:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 04:42:50 localhost podman[280728]: 2025-11-23 09:42:50.309128582 +0000 UTC m=+0.109779165 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:42:50 localhost podman[280728]: 2025-11-23 09:42:50.343387828 +0000 UTC m=+0.144038381 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:42:50 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:42:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63086 DF PROTO=TCP SPT=54012 DPT=9102 SEQ=614466098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B86064F0000000001030307) Nov 23 04:42:50 localhost python3.9[280729]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Nov 23 04:42:50 localhost 
systemd-journald[47422]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 119.8 (399 of 333 items), suggesting rotation. Nov 23 04:42:50 localhost systemd-journald[47422]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 23 04:42:50 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:42:50 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:42:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39363 DF PROTO=TCP SPT=44788 DPT=9102 SEQ=2083171469 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B860A0F0000000001030307) Nov 23 04:42:51 localhost python3.9[280880]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 23 04:42:51 localhost systemd[1]: Stopping nova_compute container... Nov 23 04:42:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63087 DF PROTO=TCP SPT=54012 DPT=9102 SEQ=614466098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B860E4F0000000001030307) Nov 23 04:42:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54565 DF PROTO=TCP SPT=55260 DPT=9102 SEQ=1096244275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8612100000000001030307) Nov 23 04:42:53 localhost nova_compute[229707]: 2025-11-23 09:42:53.538 229711 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m Nov 23 04:42:53 localhost nova_compute[229707]: 2025-11-23 09:42:53.540 229711 DEBUG oslo_concurrency.lockutils [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:42:53 localhost nova_compute[229707]: 2025-11-23 09:42:53.540 229711 DEBUG oslo_concurrency.lockutils [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:42:53 localhost nova_compute[229707]: 2025-11-23 09:42:53.541 229711 DEBUG oslo_concurrency.lockutils [None req-f6181854-14d6-4510-b4ac-03b50b58e46f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:42:53 localhost journal[229251]: End of file while reading data: Input/output error Nov 23 04:42:53 localhost systemd[1]: libpod-e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2.scope: Deactivated successfully. Nov 23 04:42:53 localhost systemd[1]: libpod-e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2.scope: Consumed 17.448s CPU time. 
Nov 23 04:42:53 localhost podman[280884]: 2025-11-23 09:42:53.878057865 +0000 UTC m=+2.406103740 container died e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 23 04:42:53 localhost systemd[1]: tmp-crun.4nWIvX.mount: Deactivated successfully. Nov 23 04:42:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2-userdata-shm.mount: Deactivated successfully. Nov 23 04:42:53 localhost systemd[1]: var-lib-containers-storage-overlay-bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457-merged.mount: Deactivated successfully. 
Nov 23 04:42:54 localhost podman[280884]: 2025-11-23 09:42:54.02680693 +0000 UTC m=+2.554852825 container cleanup e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute) Nov 23 04:42:54 localhost podman[280884]: nova_compute Nov 23 04:42:54 localhost podman[280923]: error opening file `/run/crun/e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2/status`: No such file or directory Nov 23 04:42:54 localhost podman[280912]: 2025-11-23 09:42:54.095537699 +0000 UTC m=+0.043857243 container cleanup e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 23 04:42:54 localhost podman[280912]: nova_compute Nov 23 04:42:54 localhost 
systemd[1]: edpm_nova_compute.service: Deactivated successfully. Nov 23 04:42:54 localhost systemd[1]: Stopped nova_compute container. Nov 23 04:42:54 localhost systemd[1]: Starting nova_compute container... Nov 23 04:42:54 localhost systemd[1]: Started libcrun container. Nov 23 04:42:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Nov 23 04:42:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Nov 23 04:42:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 23 04:42:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 23 04:42:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb70788a6f83add70ad3ec1ada2a3dc217bafb657f8f278fd41f0ad821fd8457/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 04:42:54 localhost podman[280925]: 2025-11-23 09:42:54.249694081 +0000 UTC m=+0.123224530 container init e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Nov 23 04:42:54 localhost podman[280925]: 2025-11-23 09:42:54.258412869 +0000 UTC m=+0.131943318 container start e88ae9d0afe2ed9bfd718be82ebcb066b3c331028cf8b2d46607280183b3b9f2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118) Nov 23 04:42:54 localhost podman[280925]: nova_compute Nov 23 04:42:54 localhost nova_compute[280939]: + sudo -E kolla_set_configs Nov 23 04:42:54 localhost systemd[1]: Started nova_compute container. Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Validating config file Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying service configuration files Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Deleting /etc/nova/nova.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/nova/nova.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Nov 23 04:42:54 localhost nova_compute[280939]: 
INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Deleting /etc/ceph Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Creating directory /etc/ceph Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/ceph Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Deleting /usr/sbin/iscsiadm Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Writing out command to execute Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:42:54 localhost nova_compute[280939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Nov 23 04:42:54 localhost nova_compute[280939]: ++ cat /run_command Nov 23 04:42:54 localhost nova_compute[280939]: + CMD=nova-compute Nov 23 04:42:54 localhost nova_compute[280939]: + ARGS= Nov 23 04:42:54 localhost nova_compute[280939]: + sudo kolla_copy_cacerts Nov 23 04:42:54 localhost nova_compute[280939]: + [[ ! -n '' ]] Nov 23 04:42:54 localhost nova_compute[280939]: + . 
kolla_extend_start Nov 23 04:42:54 localhost nova_compute[280939]: + echo 'Running command: '\''nova-compute'\''' Nov 23 04:42:54 localhost nova_compute[280939]: Running command: 'nova-compute' Nov 23 04:42:54 localhost nova_compute[280939]: + umask 0022 Nov 23 04:42:54 localhost nova_compute[280939]: + exec nova-compute Nov 23 04:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:42:55 localhost systemd[1]: tmp-crun.jb9v3L.mount: Deactivated successfully. Nov 23 04:42:55 localhost podman[280969]: 2025-11-23 09:42:55.906241824 +0000 UTC m=+0.092083499 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible) Nov 23 04:42:55 localhost podman[280970]: 2025-11-23 09:42:55.950327294 +0000 UTC m=+0.131226597 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller) Nov 23 04:42:55 localhost podman[280969]: 2025-11-23 09:42:55.972330142 +0000 UTC m=+0.158171817 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:42:55 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 04:42:55 localhost nova_compute[280939]: 2025-11-23 09:42:55.991 280943 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 23 04:42:55 localhost nova_compute[280939]: 2025-11-23 09:42:55.992 280943 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 23 04:42:55 localhost nova_compute[280939]: 2025-11-23 09:42:55.992 280943 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 23 04:42:55 localhost nova_compute[280939]: 2025-11-23 09:42:55.992 280943 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Nov 23 04:42:56 localhost podman[280970]: 2025-11-23 09:42:56.014409979 +0000 UTC m=+0.195309322 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 23 04:42:56 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.101 280943 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.124 280943 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.124 280943 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Nov 23 04:42:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63088 DF PROTO=TCP SPT=54012 DPT=9102 SEQ=614466098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B861E0F0000000001030307) Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.538 280943 INFO nova.virt.driver [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.652 280943 INFO nova.compute.provider_config [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.659 280943 DEBUG oslo_concurrency.lockutils [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.659 280943 DEBUG oslo_concurrency.lockutils [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.659 280943 DEBUG oslo_concurrency.lockutils [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.660 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.660 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.660 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.660 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.660 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.660 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] ================================================================================ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.660 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.661 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.661 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.661 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.661 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.661 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.661 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.661 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.661 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.662 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.662 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.662 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.662 280943 DEBUG oslo_service.service [None 
req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.662 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] console_host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.662 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.662 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.663 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.663 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.663 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.663 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.663 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.663 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.663 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.664 280943 DEBUG oslo_service.service [None 
req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.664 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.664 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.664 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.664 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.664 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.664 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.664 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.665 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.665 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] host = np0005532584.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.665 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.665 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.665 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 
09:42:56.665 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.665 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.666 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.666 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.666 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.666 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.666 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.666 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.666 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.667 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.667 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.667 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.667 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] log_config_append = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.667 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.667 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.667 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.667 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.668 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.668 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.668 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.668 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.668 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.668 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.668 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.669 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe 
- - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.669 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.669 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.669 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.669 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.669 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.669 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.669 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.670 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.670 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.670 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.670 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.670 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 
09:42:56.670 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.670 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.671 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.671 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.671 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.671 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.671 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.671 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.671 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.671 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.672 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.672 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.672 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] pointer_model = usbtablet log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.672 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.672 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.672 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.672 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.672 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.673 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.673 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.673 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.673 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.673 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.673 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.673 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.674 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] rescue_timeout = 0 log_opt_values 
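The entries above are not Nova source lines but a start-up dump of every registered configuration option: the logger name oslo_service.service and the log_options = True value earlier in the dump suggest oslo.service triggers it at service launch, and the trailing "log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602" shows each line is emitted from oslo.config's ConfigOpts.log_opt_values(). A minimal sketch of that mechanism follows; the two option names are copied from the dump, everything else is illustrative:

import logging
from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts([
    cfg.IntOpt('report_interval', default=10),   # option names taken from the dump
    cfg.BoolOpt('use_cow_images', default=True),
])
CONF(args=[])                                    # parse (no config files in this sketch)

# Dump every registered option at DEBUG level, one line per option,
# exactly the shape of the entries above.
CONF.log_opt_values(LOG, logging.DEBUG)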
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.674 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.674 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.674 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.674 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.674 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.674 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.674 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.675 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.675 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.675 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.675 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.675 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.675 280943 DEBUG oslo_service.service [None 
req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.675 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.676 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.676 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.676 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.676 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.676 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.676 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.676 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.676 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.677 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.677 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.677 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.677 280943 DEBUG 
oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.677 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.677 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.677 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.677 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.678 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.678 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.678 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.678 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.678 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.678 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.678 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.678 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.679 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - 
-] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.679 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.679 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.679 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.679 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.679 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.679 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.680 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.680 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.680 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.680 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.680 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.680 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.auth_strategy = keystone log_opt_values 
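From this point the dump switches from [DEFAULT] options (printed at cfg.py:2602) to grouped options printed as group.name = value (printed at cfg.py:2609), for example oslo_concurrency.lock_path = /var/lib/nova/tmp. A minimal sketch of how such a grouped option is registered and read back with oslo.config; the help text and the override call are illustrative only:

from oslo_config import cfg

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [cfg.StrOpt('lock_path', help='directory for external lock files')],
    group='oslo_concurrency')                    # i.e. an [oslo_concurrency] section
CONF(args=[])

# Stand-in for "[oslo_concurrency] lock_path = /var/lib/nova/tmp" in the conf file.
CONF.set_override('lock_path', '/var/lib/nova/tmp', group='oslo_concurrency')
print(CONF.oslo_concurrency.lock_path)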
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.680 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.681 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.681 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.681 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.681 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.681 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.681 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.681 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.681 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.682 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.682 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.682 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost 
nova_compute[280939]: 2025-11-23 09:42:56.682 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.682 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.682 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.682 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.683 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.683 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.683 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.683 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.683 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.683 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.683 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.683 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.684 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] 
cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.684 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.684 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.684 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.684 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.684 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.684 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.685 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.685 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.685 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.685 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.685 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.685 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.685 280943 DEBUG 
oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.685 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.686 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.686 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.686 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.686 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.686 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.686 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.686 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.687 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.687 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.687 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.687 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 
localhost nova_compute[280939]: 2025-11-23 09:42:56.687 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.687 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.687 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.687 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.688 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.688 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.688 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.688 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.688 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.688 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.688 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.688 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.689 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 
04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.689 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.689 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.689 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.689 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.689 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.689 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.689 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.690 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.690 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.690 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.690 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.690 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.690 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.max_disk_devices_to_attach = -1 
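The [cinder] block that ends above (auth_type, cafile, certfile, insecure, timeout, collect_timing, split_loggers, ...) looks like a standard keystoneauth1 auth/session option set; under that assumption, a client session for the block-storage endpoint would be built roughly as below. This is only a sketch, not Nova's actual Cinder client code:

from keystoneauth1 import loading as ks_loading
from oslo_config import cfg

CONF = cfg.ConfigOpts()
ks_loading.register_session_conf_options(CONF, 'cinder')  # cafile, certfile, timeout, insecure, ...
ks_loading.register_auth_conf_options(CONF, 'cinder')     # auth_type, auth_section
CONF(args=[])

# With auth_type = password plus the (masked) credentials from the service's
# config file this yields an authenticated Session; with nothing configured,
# load_auth_from_conf_options simply returns None.
auth = ks_loading.load_auth_from_conf_options(CONF, 'cinder')
session = ks_loading.load_session_from_conf_options(CONF, 'cinder', auth=auth)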
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.690 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.691 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.691 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.691 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.691 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.691 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.691 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.691 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.691 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.692 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.692 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.692 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 
2025-11-23 09:42:56.692 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.692 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.692 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.692 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.692 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.693 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.693 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.693 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.693 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.693 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.693 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.693 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.694 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost 
nova_compute[280939]: 2025-11-23 09:42:56.694 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.694 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.694 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.694 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.694 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.694 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.694 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.695 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.695 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.695 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.695 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.695 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.695 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] database.db_max_retry_interval = 10 log_opt_values 
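The database.* entries beginning above, and the api_database.* entries that follow, are two option groups carrying the same option set (Nova's main and API databases), with database.backend = sqlalchemy naming the driver. Options flagged as secret, such as database.connection, are the ones log_opt_values prints as ****. A small sketch of both points; option names are taken from the dump and the defaults are set to the dumped values purely for illustration:

from oslo_config import cfg

def db_opts():
    return [
        cfg.StrOpt('connection', secret=True),          # secret => logged as ****
        cfg.IntOpt('connection_recycle_time', default=3600),
        cfg.IntOpt('db_max_retries', default=20),
        cfg.IntOpt('db_max_retry_interval', default=10),
    ]

CONF = cfg.ConfigOpts()
CONF.register_opts(db_opts(), group='database')         # [database]
CONF.register_opts(db_opts(), group='api_database')     # [api_database]
CONF(args=[])

assert CONF.database.db_max_retries == CONF.api_database.db_max_retries == 20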
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56 280943 DEBUG oslo_service.service [req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe] log_opt_values (/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609) - effective configuration option values at service start:

    database.db_retry_interval = 1
    database.max_overflow = 50
    database.max_pool_size = 5
    database.max_retries = 10
    database.mysql_enable_ndb = False
    database.mysql_sql_mode = TRADITIONAL
    database.mysql_wsrep_sync_wait = None
    database.pool_timeout = None
    database.retry_interval = 10
    database.slave_connection = ****
    database.sqlite_synchronous = True

    api_database.backend = sqlalchemy
    api_database.connection = ****
    api_database.connection_debug = 0
    api_database.connection_parameters =
    api_database.connection_recycle_time = 3600
    api_database.connection_trace = False
    api_database.db_inc_retry_interval = True
    api_database.db_max_retries = 20
    api_database.db_max_retry_interval = 10
    api_database.db_retry_interval = 1
    api_database.max_overflow = 50
    api_database.max_pool_size = 5
    api_database.max_retries = 10
    api_database.mysql_enable_ndb = False
    api_database.mysql_sql_mode = TRADITIONAL
    api_database.mysql_wsrep_sync_wait = None
    api_database.pool_timeout = None
    api_database.retry_interval = 10
    api_database.slave_connection = ****
    api_database.sqlite_synchronous = True

    devices.enabled_mdev_types = []

    ephemeral_storage_encryption.cipher = aes-xts-plain64
    ephemeral_storage_encryption.enabled = False
    ephemeral_storage_encryption.key_size = 512
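Every line in this dump is produced by the same mechanism: the service registers its options with oslo.config, and oslo.service then logs the effective value of each registered option at DEBUG through ConfigOpts.log_opt_values(), which is the cfg.py:2609 reference repeated above. A minimal sketch of that mechanism, assuming only the oslo.config package; the handful of options registered here are illustrative stand-ins, not nova's real option definitions:

    # Minimal sketch of the mechanism behind this dump (not nova's code):
    # register a few options, then log the effective value of every
    # registered option at DEBUG, the way oslo.service does at startup.
    import logging

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.IntOpt('max_pool_size', default=5),
         cfg.IntOpt('max_retries', default=10)],
        group='database')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([])                                 # parse an empty command line; defaults apply
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits "database.max_pool_size = 5", ...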
The same dump continues with the [glance] image-service client options:

    glance.api_servers = None
    glance.cafile = None
    glance.certfile = None
    glance.collect_timing = False
    glance.connect_retries = None
    glance.connect_retry_delay = None
    glance.debug = False
    glance.default_trusted_certificate_ids = []
    glance.enable_certificate_validation = False
    glance.enable_rbd_download = False
    glance.endpoint_override = None
    glance.insecure = False
    glance.keyfile = None
    glance.max_version = None
    glance.min_version = None
    glance.num_retries = 3
    glance.rbd_ceph_conf =
    glance.rbd_connect_timeout = 5
    glance.rbd_pool =
    glance.rbd_user =
    glance.region_name = regionOne
    glance.service_name = None
    glance.service_type = image
    glance.split_loggers = False
    glance.status_code_retries = None
    glance.status_code_retry_delay = None
    glance.timeout = None
    glance.valid_interfaces = ['internal']
    glance.verify_glance_signatures = False
    glance.version = None
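Most of the glance.* names above (cafile, certfile, keyfile, insecure, timeout, connect_retries, service_type, valid_interfaces, region_name, endpoint_override) are the standard keystoneauth1 session/adapter option set rather than anything Glance-specific. A hedged sketch of how a service typically turns such a group into a usable endpoint, assuming keystoneauth1's conf-loading helpers; the group name comes from the dump, everything else (the empty auth, the request path in the comment) is illustrative:

    # Hedged sketch: load a keystoneauth session/adapter from a [glance]-style
    # option group.  Auth is left as None to keep the sketch self-contained; a
    # real service would load credentials from its own auth group.
    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    ks_loading.register_session_conf_options(CONF, 'glance')
    ks_loading.register_adapter_conf_options(CONF, 'glance')
    CONF([])   # with nova.conf loaded, service_type / valid_interfaces /
               # region_name would resolve to image / ['internal'] / regionOne

    session = ks_loading.load_session_from_conf_options(CONF, 'glance', auth=None)
    adapter = ks_loading.load_adapter_from_conf_options(CONF, 'glance', session=session)
    # adapter.get('/v2/images') would then be routed to the selected endpoint.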
The guestfs, hyperv, mks and image_cache groups come next (every registered group is logged, whether or not the corresponding driver is in use on this node):

    guestfs.debug = False

    hyperv.config_drive_cdrom = False
    hyperv.config_drive_inject_password = False
    hyperv.dynamic_memory_ratio = 1.0
    hyperv.enable_instance_metrics_collection = False
    hyperv.enable_remotefx = False
    hyperv.instances_path_share =
    hyperv.iscsi_initiator_list = []
    hyperv.limit_cpu_features = False
    hyperv.mounted_disk_query_retry_count = 10
    hyperv.mounted_disk_query_retry_interval = 5
    hyperv.power_state_check_timeframe = 60
    hyperv.power_state_event_polling_interval = 2
    hyperv.qemu_img_cmd = qemu-img.exe
    hyperv.use_multipath_io = False
    hyperv.volume_attach_retry_count = 10
    hyperv.volume_attach_retry_interval = 5
    hyperv.vswitch_name = None
    hyperv.wait_soft_reboot_seconds = 60

    mks.enabled = False
    mks.mksproxy_base_url = http://127.0.0.1:6090/

    image_cache.manager_interval = 2400
    image_cache.precache_concurrency = 1
    image_cache.remove_unused_base_images = True
    image_cache.remove_unused_original_minimum_age_seconds = 86400
    image_cache.remove_unused_resized_minimum_age_seconds = 3600
    image_cache.subdirectory_name = _base
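In the raw journal every one of these values arrives as its own line of the form "... DEBUG oslo_service.service [req-...] group.option = value log_opt_values <path>". When auditing a host it is usually handier to have the whole dump as a dictionary; a small stdlib-only sketch of that extraction (the regular expression is tuned to the format shown here and is an assumption, not part of any OpenStack tooling):

    # Stdlib-only sketch: pull "group.option = value" pairs out of a journal
    # dump like this one.  The pattern is tuned to these lines (an assumption,
    # not part of any OpenStack tool).
    import re
    import sys

    OPT_RE = re.compile(
        r'(?P<name>[a-z0-9_]+\.[a-z0-9_]+) = (?P<value>.*?)\s*log_opt_values\b',
        re.DOTALL)

    def parse_opt_dump(text):
        """Return {dotted_option_name: raw_value_string} for the whole dump."""
        return {m.group('name'): m.group('value') for m in OPT_RE.finditer(text)}

    if __name__ == '__main__':
        opts = parse_opt_dump(sys.stdin.read())
        for name in sorted(opts):                 # e.g. "glance.num_retries = 3"
            print(name, '=', opts[name])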
The [ironic] bare-metal client group follows the same keystoneauth pattern:

    ironic.api_max_retries = 60
    ironic.api_retry_interval = 2
    ironic.auth_section = None
    ironic.auth_type = None
    ironic.cafile = None
    ironic.certfile = None
    ironic.collect_timing = False
    ironic.connect_retries = None
    ironic.connect_retry_delay = None
    ironic.endpoint_override = None
    ironic.insecure = False
    ironic.keyfile = None
    ironic.max_version = None
    ironic.min_version = None
    ironic.partition_key = None
    ironic.peer_list = []
    ironic.region_name = None
    ironic.serial_console_state_timeout = 10
    ironic.service_name = None
    ironic.service_type = baremetal
    ironic.split_loggers = False
    ironic.status_code_retries = None
    ironic.status_code_retry_delay = None
    ironic.timeout = None
    ironic.valid_interfaces = ['internal', 'public']
    ironic.version = None
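The dotted names map one-to-one onto nova.conf: the text before the first dot is the INI section and the rest is the key, so ironic.service_type = baremetal corresponds to service_type = baremetal under [ironic]. Building on the parser sketch above, a stdlib-only way to regroup the flat dictionary back into that section layout; the three sample values are taken from the dump, the helper itself is illustrative:

    # Regroup the flat {"group.option": value} mapping from the parser sketch
    # into nova.conf-style INI sections.  Sample values are from the dump above.
    import configparser
    import io

    def to_ini(flat_opts):
        cp = configparser.ConfigParser(interpolation=None)  # values may contain '%'
        for dotted, value in flat_opts.items():
            section, _, key = dotted.partition('.')
            if not cp.has_section(section):
                cp.add_section(section)
            cp.set(section, key, value)
        buf = io.StringIO()
        cp.write(buf)
        return buf.getvalue()

    print(to_ini({'ironic.service_type': 'baremetal',
                  'ironic.valid_interfaces': "['internal', 'public']",
                  'glance.region_name': 'regionOne'}))
    # [ironic]
    # service_type = baremetal
    # valid_interfaces = ['internal', 'public']
    #
    # [glance]
    # region_name = regionOne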
Key-manager and secret-store settings come next; note that key_manager.fixed_key, like the database connection strings earlier, is masked in the log:

    key_manager.backend = barbican
    key_manager.fixed_key = ****

    barbican.auth_endpoint = http://localhost/identity/v3
    barbican.barbican_api_version = None
    barbican.barbican_endpoint = None
    barbican.barbican_endpoint_type = internal
    barbican.barbican_region_name = None
    barbican.cafile = None
    barbican.certfile = None
    barbican.collect_timing = False
    barbican.insecure = False
    barbican.keyfile = None
    barbican.number_of_retries = 60
    barbican.retry_delay = 1
    barbican.send_service_user_token = False
    barbican.split_loggers = False
    barbican.timeout = None
    barbican.verify_ssl = True
    barbican.verify_ssl_path = None

    barbican_service_user.auth_section = None
    barbican_service_user.auth_type = None
    barbican_service_user.cafile = None
    barbican_service_user.certfile = None
    barbican_service_user.collect_timing = False
    barbican_service_user.insecure = False
    barbican_service_user.keyfile = None
    barbican_service_user.split_loggers = False
    barbican_service_user.timeout = None

    vault.approle_role_id = None
    vault.approle_secret_id = None
    vault.cafile = None
    vault.certfile = None
    vault.collect_timing = False
    vault.insecure = False
    vault.keyfile = None
    vault.kv_mountpoint = secret
    vault.kv_version = 2
    vault.namespace = None
    vault.root_token_id = None
    vault.split_loggers = False
    vault.ssl_ca_crt_file = None
    vault.timeout = None
    vault.use_ssl = False
    vault.vault_url = http://127.0.0.1:8200
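The values shown as **** (database.slave_connection, api_database.connection, key_manager.fixed_key) are not empty: oslo.config replaces the value of any option registered with secret=True when log_opt_values() writes it out, so credentials and keys never reach the journal. A minimal sketch, assuming only oslo.config; the option name and the placeholder value are illustrative:

    # Minimal sketch of oslo.config secret masking: options registered with
    # secret=True print as "****" in log_opt_values() output.
    import logging

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.StrOpt('fixed_key', secret=True, default='not-a-real-key')],
        group='key_manager')
    CONF([])

    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
    # logs: key_manager.fixed_key = ****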
The [keystone] identity-client group and the start of the long [libvirt] driver section follow:

    keystone.cafile = None
    keystone.certfile = None
    keystone.collect_timing = False
    keystone.connect_retries = None
    keystone.connect_retry_delay = None
    keystone.endpoint_override = None
    keystone.insecure = False
    keystone.keyfile = None
    keystone.max_version = None
    keystone.min_version = None
    keystone.region_name = None
    keystone.service_name = None
    keystone.service_type = identity
    keystone.split_loggers = False
    keystone.status_code_retries = None
    keystone.status_code_retry_delay = None
    keystone.timeout = None
    keystone.valid_interfaces = ['internal', 'public']
    keystone.version = None

    libvirt.connection_uri =
    libvirt.cpu_mode = host-model
    libvirt.cpu_model_extra_flags = []
    libvirt.cpu_models = []
    libvirt.cpu_power_governor_high = performance
    libvirt.cpu_power_governor_low = powersave
    libvirt.cpu_power_management = False
    libvirt.cpu_power_management_strategy = cpu_state
    libvirt.device_detach_attempts = 8
    libvirt.device_detach_timeout = 20
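The last two values bound how persistently the driver chases a device detach: a maximum number of attempts (8) and a per-attempt wait (20 seconds) before the request is retried or abandoned. The sketch below only illustrates that bounded retry-with-timeout shape in plain Python; it is not nova's detach code, and detach_once()/device_is_gone() are hypothetical callables:

    # Illustration of a bounded retry-with-timeout policy shaped like
    # libvirt.device_detach_attempts / libvirt.device_detach_timeout above.
    # Not nova's implementation; the two callables are placeholders.
    import time

    def detach_with_retries(detach_once, device_is_gone,
                            attempts=8, timeout=20.0, poll=1.0):
        for attempt in range(1, attempts + 1):
            detach_once()                      # ask the hypervisor to detach
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                if device_is_gone():
                    return attempt             # success: report which try worked
                time.sleep(poll)
        raise TimeoutError(
            f'device still attached after {attempts} attempts x {timeout}s')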
Disk, image-backend and storage-related [libvirt] options continue:

    libvirt.disk_cachemodes = []
    libvirt.disk_prefix = None
    libvirt.enabled_perf_events = []
    libvirt.file_backed_memory = 0
    libvirt.gid_maps = []
    libvirt.hw_disk_discard = None
    libvirt.hw_machine_type = ['x86_64=q35']
    libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf
    libvirt.images_rbd_glance_copy_poll_interval = 15
    libvirt.images_rbd_glance_copy_timeout = 600
    libvirt.images_rbd_glance_store_name = default_backend
    libvirt.images_rbd_pool = vms
    libvirt.images_type = rbd
    libvirt.images_volume_group = None
    libvirt.inject_key = False
    libvirt.inject_partition = -2
    libvirt.inject_password = False
    libvirt.iscsi_iface = None
    libvirt.iser_use_multipath = False
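With images_type = rbd, images_rbd_pool = vms and images_rbd_ceph_conf = /etc/ceph/ceph.conf, instance disks on this host live as RBD images in the Ceph pool "vms" rather than as files under the instances path. A hedged sketch of inspecting that pool with the python-rados/python-rbd bindings (assumes those packages are installed and that the default client credentials in ceph.conf and the local keyring can reach the cluster):

    # Hedged sketch: list the RBD images backing instance disks, using the
    # pool and ceph.conf from the [libvirt] options above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')          # libvirt.images_rbd_pool
        try:
            for image_name in rbd.RBD().list(ioctx):
                print(image_name)                  # typically "<instance uuid>_disk"
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()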
Live-migration tuning follows, including a deprecation warning that oslo.config raises while logging the values:

    libvirt.live_migration_bandwidth = 0
    libvirt.live_migration_completion_timeout = 800
    libvirt.live_migration_downtime = 500
    libvirt.live_migration_downtime_delay = 75
    libvirt.live_migration_downtime_steps = 10
    libvirt.live_migration_inbound_addr = None
    libvirt.live_migration_permit_auto_converge = True
    libvirt.live_migration_permit_post_copy = True
    libvirt.live_migration_scheme = None
    libvirt.live_migration_timeout_action = force_complete
    libvirt.live_migration_tunnelled = False

Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56 280943 WARNING oslo_config.cfg [req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
    live_migration_uri is deprecated for removal in favor of two other options that
    allow to change live migration scheme and target URI: ``live_migration_scheme``
    and ``live_migration_inbound_addr`` respectively.
). Its value may be silently ignored in the future.
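That WARNING comes from oslo.config itself: an option declared with deprecated_for_removal=True triggers the "Deprecated: Option ... is deprecated for removal" message when it is actually set, with the deprecated_reason text quoted in the parentheses (here pointing at live_migration_scheme and live_migration_inbound_addr as the replacements). A minimal sketch of such a declaration; the reason string is copied from the log, the rest is illustrative rather than nova's actual source:

    # Minimal sketch of the oslo.config declaration that produces the
    # deprecation WARNING above when the option is set; illustrative only.
    from oslo_config import cfg

    live_migration_uri_opt = cfg.StrOpt(
        'live_migration_uri',
        deprecated_for_removal=True,
        deprecated_reason=(
            'live_migration_uri is deprecated for removal in favor of two other '
            'options that allow to change live migration scheme and target URI: '
            '``live_migration_scheme`` and ``live_migration_inbound_addr`` '
            'respectively.'),
        help='Override the libvirt live migration target URI (illustrative help text).')

    CONF = cfg.ConfigOpts()
    CONF.register_opts([live_migration_uri_opt], group='libvirt')
    # Setting [libvirt]/live_migration_uri in the config file then triggers:
    #   WARNING oslo_config.cfg [...] Deprecated: Option "live_migration_uri"
    #   from group "libvirt" is deprecated for removal (...).  Its value may be
    #   silently ignored in the future.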
Its value may be silently ignored in the future.#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.725 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.725 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.725 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.725 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.725 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.726 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.726 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.726 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.726 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.726 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.726 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.726 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 
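The deprecation warning above says that [libvirt]/live_migration_uri (here set to qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey) should be expressed through the two non-deprecated options it names, live_migration_scheme and live_migration_inbound_addr. A minimal, hypothetical sketch follows of how those two options can be registered and dumped with oslo.config's log_opt_values, the same call referenced at oslo_config/cfg.py:2609 throughout this journal; the standalone ConfigOpts instance and the sample nova.conf values are illustrative assumptions, not taken from this deployment.

    from oslo_config import cfg
    import logging

    # Hypothetical, self-contained sketch: nova registers these options itself;
    # a fresh ConfigOpts is used here only to illustrate the two replacement
    # options named in the deprecation warning above.
    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.StrOpt('live_migration_scheme',
                       help='URI scheme for live migration traffic (e.g. ssh or tls).'),
            cfg.StrOpt('live_migration_inbound_addr',
                       help='Address or hostname the destination accepts migrations on.'),
        ],
        group='libvirt',
    )

    # In a real deployment the values would come from nova.conf, for example:
    #   [libvirt]
    #   live_migration_scheme = ssh
    #   live_migration_inbound_addr = <destination hostname>   # assumed value
    CONF(args=[], default_config_files=[])

    # log_opt_values() is what produces the
    # "... log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609"
    # lines seen throughout this log.
    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger('demo'), logging.DEBUG)

With both replacement options set, live_migration_uri can be dropped from nova.conf; the log below confirms this deployment already carries live_migration_scheme = None and live_migration_inbound_addr = None alongside the deprecated URI.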
2025-11-23 09:42:56.726 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.727 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.727 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.727 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.727 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.727 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.727 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.rbd_secret_uuid = 46550e70-79cb-5f55-bf6d-1204b97e083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.727 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.728 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.728 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.728 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.728 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.728 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] 
libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.728 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.728 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.728 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.729 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.729 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.729 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.729 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.729 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.729 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.730 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.730 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.730 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 
2025-11-23 09:42:56.730 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.730 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.730 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.730 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.730 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.731 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.731 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.731 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.731 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.731 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.731 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.731 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.732 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.732 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.732 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.732 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.732 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.732 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.732 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.732 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.733 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.733 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.733 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.733 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.733 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.733 280943 DEBUG oslo_service.service [None 
req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.733 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.733 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.734 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.734 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.734 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.734 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.734 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.734 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.734 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.735 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.735 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.735 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.735 280943 DEBUG 
oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.735 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.735 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.735 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.735 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.736 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.736 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.736 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.736 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.736 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.736 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.736 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.737 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] pci.report_in_placement = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.737 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.737 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.737 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.737 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.737 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.737 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.737 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.738 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.738 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.738 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.738 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.738 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.738 280943 DEBUG oslo_service.service [None 
req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.738 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.738 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.739 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.739 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.739 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.739 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.739 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.739 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.739 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.739 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.740 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.740 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 
09:42:56.740 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.740 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.740 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.740 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.740 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.741 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.741 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.741 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.741 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.741 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.741 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.741 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.741 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 
localhost nova_compute[280939]: 2025-11-23 09:42:56.742 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.742 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.742 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.742 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.742 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.742 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.742 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.742 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.743 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.743 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.743 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.743 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.743 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] rdp.enabled = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.743 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.743 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.744 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.744 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.744 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.744 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.744 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.744 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.744 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.745 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.745 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.745 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] scheduler.workers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.745 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.745 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.745 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.745 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.745 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.746 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.746 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.746 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.746 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.746 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.746 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost 
nova_compute[280939]: 2025-11-23 09:42:56.746 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.747 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.747 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.747 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.747 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.747 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.747 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.747 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.747 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.748 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.748 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.748 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 
2025-11-23 09:42:56.748 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.748 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.748 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.748 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.749 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.749 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.749 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.749 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.749 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.749 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.749 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.750 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.750 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] 
service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.750 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.750 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.750 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.750 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.750 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.750 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.751 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.751 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.751 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.751 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.751 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.751 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.751 280943 DEBUG 
oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.752 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.752 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.752 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.752 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.752 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.752 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.752 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.752 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.753 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.753 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.753 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.753 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost 
nova_compute[280939]: 2025-11-23 09:42:56.753 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.753 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.753 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.754 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.754 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.754 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.754 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.754 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.754 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.754 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.754 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.755 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.755 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] 
vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.755 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.755 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.755 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.755 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.755 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.755 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.756 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.756 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.756 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.756 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.756 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.756 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.756 280943 DEBUG oslo_service.service [None 
req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.756 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.757 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.757 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.757 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.757 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.757 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.757 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.757 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.758 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.758 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.758 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.758 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
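Each "group.option = value" record in this dump (and the row of asterisks that closes it further down) is emitted by oslo.config's log_opt_values(), the call referenced at the end of every line (oslo_config/cfg.py:2609). Below is a minimal sketch of that mechanism, assuming only that oslo.config is installed; the [vnc] option names mirror the entries above, but the defaults are illustrative, not nova's real definitions (those live in the nova source tree).

    # Sketch: register a few [vnc]-style options and dump them the way the
    # service above does at startup. Option names follow the log; defaults
    # here are placeholders, not nova's.
    import logging

    from oslo_config import cfg

    vnc_opts = [
        cfg.BoolOpt('enabled', default=True),
        cfg.StrOpt('novncproxy_base_url',
                   default='http://127.0.0.1:6080/vnc_lite.html'),
        cfg.StrOpt('novncproxy_host', default='0.0.0.0'),
        cfg.IntOpt('novncproxy_port', default=6080),
        cfg.StrOpt('server_listen', default='127.0.0.1'),
        cfg.StrOpt('server_proxyclient_address', default='127.0.0.1'),
    ]

    CONF = cfg.ConfigOpts()
    CONF.register_opts(vnc_opts, group='vnc')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('oslo_service.service')

    # Parse config sources (no CLI args here; --config-file could be passed),
    # then log every registered option as "group.option = value".
    CONF([], project='nova')
    CONF.log_opt_values(LOG, logging.DEBUG)

Running this prints the same banner-delimited "vnc.option = value" lines at DEBUG level, which is why the full dump above appears once per service start.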
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.758 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.758 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.758 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.759 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.759 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.759 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.759 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.759 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.759 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.759 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.760 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.760 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.760 280943 DEBUG oslo_service.service [None 
req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.760 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.760 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.760 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.760 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.760 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.761 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.761 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.761 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.761 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.761 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.761 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.761 280943 DEBUG oslo_service.service 
[None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.761 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.762 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.762 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.762 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.762 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.762 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.762 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.762 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.763 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.763 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.763 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.763 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 
23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.763 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.763 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.763 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.763 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.764 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.764 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.764 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.764 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.764 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.764 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.764 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.765 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 
09:42:56.765 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.765 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.765 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.765 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.765 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.765 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.766 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.766 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.766 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.766 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.766 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.766 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.766 280943 DEBUG oslo_service.service [None 
req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.766 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.767 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.767 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.767 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.767 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.767 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.767 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.767 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.768 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.768 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.768 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.768 280943 DEBUG 
oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.768 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.768 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.768 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.768 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.769 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.769 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.769 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.769 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.769 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.769 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.769 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.770 280943 DEBUG oslo_service.service 
[None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.770 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.770 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.770 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.770 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.770 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.770 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.771 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.771 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.771 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.771 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.771 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.771 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.connect_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.771 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.771 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.772 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.772 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.772 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.772 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.772 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.772 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.772 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.772 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.773 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.773 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.773 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - 
- -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.773 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.773 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.773 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.773 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.774 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.774 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.774 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.774 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.774 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.774 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.774 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.775 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.775 280943 DEBUG 
oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.775 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.775 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.775 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.775 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.775 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.775 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.776 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.776 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.776 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.776 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.776 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.776 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_linux_bridge_privileged.logger_name = 
oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.776 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.776 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.777 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.777 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.777 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.777 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.777 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.777 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.777 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.778 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.778 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.778 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.778 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.778 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.778 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.778 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.779 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.779 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.779 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.779 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.779 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.779 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.779 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.779 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.780 280943 
DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.780 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.780 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.780 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.780 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.780 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.780 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.780 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.781 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.781 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.781 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.781 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.781 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] nova_sys_admin.user = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.781 280943 DEBUG oslo_service.service [None req-04a113dd-4bd8-48c0-a259-9037d0fa1bbe - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.782 280943 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.796 280943 INFO nova.virt.node [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Determined node identity c90c5769-42ab-40e9-92fc-3d82b4e96052 from /var/lib/nova/compute_id#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.796 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.797 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.797 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.797 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.807 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.809 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.810 280943 INFO nova.virt.libvirt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.817 280943 INFO nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Libvirt host capabilities [host capabilities XML; element markup not preserved in this capture; recoverable values: host uuid df69e9ed-ec8d-43d9-8710-8ff360287019; arch x86_64; host CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp, rdma; memory/pages counters 16116612, 4029153, 0, 0; security models: selinux (0; system_u:system_r:svirt_t:s0; system_u:system_r:svirt_tcg_t:s0), dac (0; +107:+107, +107:+107); hvm guests with 32 and 64 bit word size via /usr/libexec/qemu-kvm, machine types pc-i440fx-rhel7.6.0 (pc), pc-q35-rhel9.8.0 (q35), pc-q35-rhel9.6.0, pc-q35-rhel9.4.0, pc-q35-rhel9.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.6.0, pc-q35-rhel8.5.0, pc-q35-rhel8.4.0, pc-q35-rhel8.3.0, pc-q35-rhel8.2.0, pc-q35-rhel8.1.0, pc-q35-rhel8.0.0, pc-q35-rhel7.6.0]#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.828 280943 DEBUG nova.virt.libvirt.volume.mount [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.829 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.834 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: [hypervisor capabilities XML; element markup not preserved in this capture; recoverable values: emulator /usr/libexec/qemu-kvm, domain kvm, machine pc-q35-rhel9.8.0, arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd with types rom and pflash plus assorted yes/no and on/off feature flags; CPU model EPYC-Rome, vendor AMD; CPU model list: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, ...] Nov 23
04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: SierraForest Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: SierraForest-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-noTSX-IBRS Nov 23 04:42:56 
localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-v4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-noTSX-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost 
nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-v4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-v5 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Snowridge Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 
localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Snowridge-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Snowridge-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Snowridge-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Snowridge-v4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Westmere Nov 23 04:42:56 localhost nova_compute[280939]: Westmere-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Westmere-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Westmere-v2 Nov 23 04:42:56 localhost nova_compute[280939]: athlon Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: athlon-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: core2duo Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: core2duo-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: coreduo Nov 23 04:42:56 
localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: coreduo-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: kvm32 Nov 23 04:42:56 localhost nova_compute[280939]: kvm32-v1 Nov 23 04:42:56 localhost nova_compute[280939]: kvm64 Nov 23 04:42:56 localhost nova_compute[280939]: kvm64-v1 Nov 23 04:42:56 localhost nova_compute[280939]: n270 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: n270-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: pentium Nov 23 04:42:56 localhost nova_compute[280939]: pentium-v1 Nov 23 04:42:56 localhost nova_compute[280939]: pentium2 Nov 23 04:42:56 localhost nova_compute[280939]: pentium2-v1 Nov 23 04:42:56 localhost nova_compute[280939]: pentium3 Nov 23 04:42:56 localhost nova_compute[280939]: pentium3-v1 Nov 23 04:42:56 localhost nova_compute[280939]: phenom Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: phenom-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: qemu32 Nov 23 04:42:56 localhost nova_compute[280939]: qemu32-v1 Nov 23 04:42:56 localhost nova_compute[280939]: qemu64 Nov 23 04:42:56 localhost nova_compute[280939]: qemu64-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: file Nov 23 04:42:56 localhost nova_compute[280939]: anonymous Nov 23 04:42:56 localhost nova_compute[280939]: memfd Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: disk Nov 23 04:42:56 localhost nova_compute[280939]: cdrom Nov 23 04:42:56 localhost nova_compute[280939]: floppy Nov 23 04:42:56 localhost nova_compute[280939]: lun Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: fdc Nov 23 04:42:56 localhost nova_compute[280939]: scsi Nov 23 04:42:56 localhost nova_compute[280939]: virtio Nov 23 04:42:56 localhost nova_compute[280939]: usb Nov 23 04:42:56 localhost nova_compute[280939]: sata Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: virtio Nov 23 04:42:56 localhost nova_compute[280939]: virtio-transitional Nov 23 04:42:56 localhost nova_compute[280939]: virtio-non-transitional Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 
localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: vnc Nov 23 04:42:56 localhost nova_compute[280939]: egl-headless Nov 23 04:42:56 localhost nova_compute[280939]: dbus Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: subsystem Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: default Nov 23 04:42:56 localhost nova_compute[280939]: mandatory Nov 23 04:42:56 localhost nova_compute[280939]: requisite Nov 23 04:42:56 localhost nova_compute[280939]: optional Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: usb Nov 23 04:42:56 localhost nova_compute[280939]: pci Nov 23 04:42:56 localhost nova_compute[280939]: scsi Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: virtio Nov 23 04:42:56 localhost nova_compute[280939]: virtio-transitional Nov 23 04:42:56 localhost nova_compute[280939]: virtio-non-transitional Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: random Nov 23 04:42:56 localhost nova_compute[280939]: egd Nov 23 04:42:56 localhost nova_compute[280939]: builtin Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: path Nov 23 04:42:56 localhost nova_compute[280939]: handle Nov 23 04:42:56 localhost nova_compute[280939]: virtiofs Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: tpm-tis Nov 23 04:42:56 localhost nova_compute[280939]: tpm-crb Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: emulator Nov 23 04:42:56 localhost nova_compute[280939]: external Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: 2.0 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: usb Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: pty Nov 23 04:42:56 localhost nova_compute[280939]: unix Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost 
nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: qemu Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: builtin Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: default Nov 23 04:42:56 localhost nova_compute[280939]: passt Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: isa Nov 23 04:42:56 localhost nova_compute[280939]: hyperv Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: null Nov 23 04:42:56 localhost nova_compute[280939]: vc Nov 23 04:42:56 localhost nova_compute[280939]: pty Nov 23 04:42:56 localhost nova_compute[280939]: dev Nov 23 04:42:56 localhost nova_compute[280939]: file Nov 23 04:42:56 localhost nova_compute[280939]: pipe Nov 23 04:42:56 localhost nova_compute[280939]: stdio Nov 23 04:42:56 localhost nova_compute[280939]: udp Nov 23 04:42:56 localhost nova_compute[280939]: tcp Nov 23 04:42:56 localhost nova_compute[280939]: unix Nov 23 04:42:56 localhost nova_compute[280939]: qemu-vdagent Nov 23 04:42:56 localhost nova_compute[280939]: dbus Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: relaxed Nov 23 04:42:56 localhost nova_compute[280939]: vapic Nov 23 04:42:56 localhost nova_compute[280939]: spinlocks Nov 23 04:42:56 localhost nova_compute[280939]: vpindex Nov 23 04:42:56 localhost nova_compute[280939]: runtime Nov 23 04:42:56 localhost nova_compute[280939]: synic Nov 23 04:42:56 localhost nova_compute[280939]: stimer Nov 23 04:42:56 localhost nova_compute[280939]: reset Nov 23 04:42:56 localhost nova_compute[280939]: vendor_id Nov 23 04:42:56 localhost nova_compute[280939]: frequencies Nov 23 04:42:56 localhost nova_compute[280939]: reenlightenment Nov 23 04:42:56 localhost nova_compute[280939]: tlbflush Nov 23 04:42:56 localhost nova_compute[280939]: ipi Nov 23 04:42:56 localhost nova_compute[280939]: avic Nov 23 04:42:56 localhost nova_compute[280939]: emsr_bitmap Nov 23 04:42:56 localhost nova_compute[280939]: xmm_input Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost 
Nov 23 04:42:56 localhost nova_compute[280939]: additional capability values: 4095, on, off, off, Linux KVM Hv, tdx
Nov 23 04:42:56 localhost nova_compute[280939]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.840 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 04:42:56 localhost nova_compute[280939]: [libvirt domain capabilities XML for arch=i686, machine_type=pc; element markup was lost in log capture, recoverable values follow]
Nov 23 04:42:56 localhost nova_compute[280939]: emulator: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-i440fx-rhel7.6.0; arch: i686
Nov 23 04:42:56 localhost nova_compute[280939]: firmware loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no; additional enum values: on, off; on, off
Nov 23 04:42:56 localhost nova_compute[280939]: host CPU model: EPYC-Rome, vendor AMD
Nov 23 04:42:56 localhost nova_compute[280939]: CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1 (list continues)
04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 
23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v5 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v6 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v7 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: 
Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: IvyBridge Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: IvyBridge-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: IvyBridge-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: IvyBridge-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: KnightsMill Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: KnightsMill-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nehalem Nov 23 04:42:56 localhost nova_compute[280939]: Nehalem-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nehalem-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nehalem-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G1 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G1-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G2 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G2-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G3 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G3-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G4-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 
localhost nova_compute[280939]: Opteron_G5 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G5-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Penryn Nov 23 04:42:56 localhost nova_compute[280939]: Penryn-v1 Nov 23 04:42:56 localhost nova_compute[280939]: SandyBridge Nov 23 04:42:56 localhost nova_compute[280939]: SandyBridge-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: SandyBridge-v1 Nov 23 04:42:56 localhost nova_compute[280939]: SandyBridge-v2 Nov 23 04:42:56 localhost nova_compute[280939]: SapphireRapids Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: SapphireRapids-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost 
nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: SapphireRapids-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: 
SapphireRapids-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: SierraForest Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 
localhost nova_compute[280939]: SierraForest-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-noTSX-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Client-v4 Nov 23 
04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-noTSX-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 
04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-v4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Skylake-Server-v5 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Snowridge Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Snowridge-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Snowridge-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Snowridge-v3 
Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Snowridge-v4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Westmere Nov 23 04:42:56 localhost nova_compute[280939]: Westmere-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Westmere-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Westmere-v2 Nov 23 04:42:56 localhost nova_compute[280939]: athlon Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: athlon-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: core2duo Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: core2duo-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: coreduo Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: coreduo-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: kvm32 Nov 23 04:42:56 localhost nova_compute[280939]: kvm32-v1 Nov 23 04:42:56 localhost nova_compute[280939]: kvm64 Nov 23 04:42:56 localhost nova_compute[280939]: kvm64-v1 Nov 23 04:42:56 localhost nova_compute[280939]: n270 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: n270-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: pentium Nov 23 04:42:56 localhost nova_compute[280939]: pentium-v1 Nov 23 04:42:56 localhost nova_compute[280939]: pentium2 Nov 23 04:42:56 localhost nova_compute[280939]: pentium2-v1 Nov 23 04:42:56 localhost nova_compute[280939]: pentium3 Nov 23 04:42:56 localhost nova_compute[280939]: pentium3-v1 Nov 23 04:42:56 localhost nova_compute[280939]: phenom Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 
04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: phenom-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: qemu32 Nov 23 04:42:56 localhost nova_compute[280939]: qemu32-v1 Nov 23 04:42:56 localhost nova_compute[280939]: qemu64 Nov 23 04:42:56 localhost nova_compute[280939]: qemu64-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: file Nov 23 04:42:56 localhost nova_compute[280939]: anonymous Nov 23 04:42:56 localhost nova_compute[280939]: memfd Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: disk Nov 23 04:42:56 localhost nova_compute[280939]: cdrom Nov 23 04:42:56 localhost nova_compute[280939]: floppy Nov 23 04:42:56 localhost nova_compute[280939]: lun Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: ide Nov 23 04:42:56 localhost nova_compute[280939]: fdc Nov 23 04:42:56 localhost nova_compute[280939]: scsi Nov 23 04:42:56 localhost nova_compute[280939]: virtio Nov 23 04:42:56 localhost nova_compute[280939]: usb Nov 23 04:42:56 localhost nova_compute[280939]: sata Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: virtio Nov 23 04:42:56 localhost nova_compute[280939]: virtio-transitional Nov 23 04:42:56 localhost nova_compute[280939]: virtio-non-transitional Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: vnc Nov 23 04:42:56 localhost nova_compute[280939]: egl-headless Nov 23 04:42:56 localhost nova_compute[280939]: dbus Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: subsystem Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: default Nov 23 04:42:56 localhost nova_compute[280939]: mandatory Nov 23 04:42:56 localhost nova_compute[280939]: requisite Nov 23 04:42:56 localhost nova_compute[280939]: optional Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: usb Nov 23 04:42:56 localhost nova_compute[280939]: pci Nov 23 04:42:56 localhost nova_compute[280939]: scsi Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: 
Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: virtio Nov 23 04:42:56 localhost nova_compute[280939]: virtio-transitional Nov 23 04:42:56 localhost nova_compute[280939]: virtio-non-transitional Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: random Nov 23 04:42:56 localhost nova_compute[280939]: egd Nov 23 04:42:56 localhost nova_compute[280939]: builtin Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: path Nov 23 04:42:56 localhost nova_compute[280939]: handle Nov 23 04:42:56 localhost nova_compute[280939]: virtiofs Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: tpm-tis Nov 23 04:42:56 localhost nova_compute[280939]: tpm-crb Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: emulator Nov 23 04:42:56 localhost nova_compute[280939]: external Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: 2.0 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: usb Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: pty Nov 23 04:42:56 localhost nova_compute[280939]: unix Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: qemu Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: builtin Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: default Nov 23 04:42:56 localhost nova_compute[280939]: passt Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: isa Nov 23 04:42:56 localhost nova_compute[280939]: hyperv Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: null Nov 23 04:42:56 localhost nova_compute[280939]: vc Nov 23 04:42:56 localhost nova_compute[280939]: pty Nov 23 04:42:56 localhost nova_compute[280939]: 
dev Nov 23 04:42:56 localhost nova_compute[280939]: file Nov 23 04:42:56 localhost nova_compute[280939]: pipe Nov 23 04:42:56 localhost nova_compute[280939]: stdio Nov 23 04:42:56 localhost nova_compute[280939]: udp Nov 23 04:42:56 localhost nova_compute[280939]: tcp Nov 23 04:42:56 localhost nova_compute[280939]: unix Nov 23 04:42:56 localhost nova_compute[280939]: qemu-vdagent Nov 23 04:42:56 localhost nova_compute[280939]: dbus Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: relaxed Nov 23 04:42:56 localhost nova_compute[280939]: vapic Nov 23 04:42:56 localhost nova_compute[280939]: spinlocks Nov 23 04:42:56 localhost nova_compute[280939]: vpindex Nov 23 04:42:56 localhost nova_compute[280939]: runtime Nov 23 04:42:56 localhost nova_compute[280939]: synic Nov 23 04:42:56 localhost nova_compute[280939]: stimer Nov 23 04:42:56 localhost nova_compute[280939]: reset Nov 23 04:42:56 localhost nova_compute[280939]: vendor_id Nov 23 04:42:56 localhost nova_compute[280939]: frequencies Nov 23 04:42:56 localhost nova_compute[280939]: reenlightenment Nov 23 04:42:56 localhost nova_compute[280939]: tlbflush Nov 23 04:42:56 localhost nova_compute[280939]: ipi Nov 23 04:42:56 localhost nova_compute[280939]: avic Nov 23 04:42:56 localhost nova_compute[280939]: emsr_bitmap Nov 23 04:42:56 localhost nova_compute[280939]: xmm_input Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: 4095 Nov 23 04:42:56 localhost nova_compute[280939]: on Nov 23 04:42:56 localhost nova_compute[280939]: off Nov 23 04:42:56 localhost nova_compute[280939]: off Nov 23 04:42:56 localhost nova_compute[280939]: Linux KVM Hv Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: tdx Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.883 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.888 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Libvirt host hypervisor capabilities for 
Nov 23 04:42:56 localhost nova_compute[280939]: 2025-11-23 09:42:56.888 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 04:42:56 localhost nova_compute[280939]: [libvirt domain capabilities reply for q35; XML markup not captured, recoverable values listed below]
Nov 23 04:42:56 localhost nova_compute[280939]: path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-q35-rhel9.8.0; arch: x86_64
Nov 23 04:42:56 localhost nova_compute[280939]: os firmware: efi; loader values: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd /usr/share/edk2/ovmf/OVMF_CODE.fd /usr/share/edk2/ovmf/OVMF.amdsev.fd /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom pflash; readonly: yes no; secure: yes no
Nov 23 04:42:56 localhost nova_compute[280939]: cpu modes: values on off, on off; host-model: EPYC-Rome, vendor: AMD
Nov 23 04:42:56 localhost nova_compute[280939]: custom CPU models (list continues): 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2
nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Denverton Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Denverton-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Denverton-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Denverton-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Dhyana Nov 23 04:42:56 localhost nova_compute[280939]: Dhyana-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Dhyana-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-Genoa Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-Genoa-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 
localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-IBPB Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-Milan Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-Milan-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-Milan-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-Rome Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-Rome-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-Rome-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-Rome-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-Rome-v4 Nov 23 04:42:56 localhost nova_compute[280939]: 
EPYC-v1 Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-v2 Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: EPYC-v4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: GraniteRapids Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: GraniteRapids-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost 
nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: GraniteRapids-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 
04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Haswell Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Haswell-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Haswell-noTSX Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Haswell-noTSX-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Haswell-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Haswell-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Haswell-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Haswell-v4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: 
Icelake-Server Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-noTSX Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost 
nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v3 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v5 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 
localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v6 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Icelake-Server-v7 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: IvyBridge Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 
localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: IvyBridge-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: IvyBridge-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: IvyBridge-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: KnightsMill Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: KnightsMill-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nehalem Nov 23 04:42:56 localhost nova_compute[280939]: Nehalem-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: Nehalem-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nehalem-v2 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G1 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G1-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G2 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G2-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G3 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G3-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G4 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G4-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G5 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Opteron_G5-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 
localhost nova_compute[280939]: Penryn Nov 23 04:42:56 localhost nova_compute[280939]: Penryn-v1 Nov 23 04:42:56 localhost nova_compute[280939]: SandyBridge Nov 23 04:42:56 localhost nova_compute[280939]: SandyBridge-IBRS Nov 23 04:42:56 localhost nova_compute[280939]: SandyBridge-v1 Nov 23 04:42:56 localhost nova_compute[280939]: SandyBridge-v2 Nov 23 04:42:56 localhost nova_compute[280939]: SapphireRapids Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: SapphireRapids-v1 Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:56 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 
23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: SapphireRapids-v2 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: SapphireRapids-v3 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 
localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: SierraForest Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: SierraForest-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost 
nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-IBRS Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-noTSX-IBRS Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-v2 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-v3 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-v4 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 
localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-IBRS Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-noTSX-IBRS Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-v2 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-v3 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 
04:42:57 localhost nova_compute[280939]: [domain capabilities dump continued; element values only] guest CPU models: Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Nov 23 04:42:57 localhost nova_compute[280939]: memory backing source types: file, anonymous, memfd
Nov 23 04:42:57 localhost nova_compute[280939]: disk device types: disk, cdrom, floppy, lun; bus types: fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional
Nov 23 04:42:57 localhost nova_compute[280939]: graphics types: vnc, egl-headless, dbus
Nov 23 04:42:57 localhost nova_compute[280939]: hostdev: mode subsystem; startupPolicy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Nov 23 04:42:57 localhost nova_compute[280939]: rng models: virtio, virtio-transitional, virtio-non-transitional; rng backend models: random, egd, builtin
Nov 23 04:42:57 localhost nova_compute[280939]: filesystem driver types: path, handle, virtiofs
Nov 23 04:42:57 localhost nova_compute[280939]: tpm models: tpm-tis, tpm-crb; tpm backends: emulator, external; backend version: 2.0
Nov 23 04:42:57 localhost nova_compute[280939]: redirdev bus: usb; channel types: pty, unix
Nov 23 04:42:57 localhost nova_compute[280939]: crypto values: qemu, builtin
Nov 23 04:42:57 localhost nova_compute[280939]: interface backend types: default, passt
Nov 23 04:42:57 localhost nova_compute[280939]: panic models: isa, hyperv
Nov 23 04:42:57 localhost nova_compute[280939]: character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Nov 23 04:42:57 localhost nova_compute[280939]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
Nov 23 04:42:57 localhost nova_compute[280939]: additional values: 4095, on, off, off, Linux KVM Hv, tdx
Nov 23 04:42:57 localhost nova_compute[280939]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:56.945 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 04:42:57 localhost nova_compute[280939]: [domain capabilities values only] path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64
Nov 23 04:42:57 localhost nova_compute[280939]: firmware loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash
Nov 23 04:42:57 localhost nova_compute[280939]: loader enum values: yes, no, no
Nov 23 04:42:57 localhost nova_compute[280939]: cpu mode enum values: on, off; on, off; host-model: EPYC-Rome (vendor AMD)
Nov 23 04:42:57 localhost nova_compute[280939]: guest CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3,
Nov 23
04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: SierraForest Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: SierraForest-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-IBRS Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost 
nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-noTSX-IBRS Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-v2 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-v3 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Client-v4 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-IBRS Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-noTSX-IBRS Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 
04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-v2 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-v3 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-v4 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Skylake-Server-v5 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Snowridge Nov 
23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Snowridge-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Snowridge-v2 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Snowridge-v3 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Snowridge-v4 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Westmere Nov 23 04:42:57 localhost nova_compute[280939]: Westmere-IBRS Nov 23 04:42:57 localhost nova_compute[280939]: Westmere-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Westmere-v2 Nov 23 04:42:57 localhost nova_compute[280939]: athlon Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: athlon-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: core2duo Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: core2duo-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 
04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: coreduo Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: coreduo-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: kvm32 Nov 23 04:42:57 localhost nova_compute[280939]: kvm32-v1 Nov 23 04:42:57 localhost nova_compute[280939]: kvm64 Nov 23 04:42:57 localhost nova_compute[280939]: kvm64-v1 Nov 23 04:42:57 localhost nova_compute[280939]: n270 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: n270-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: pentium Nov 23 04:42:57 localhost nova_compute[280939]: pentium-v1 Nov 23 04:42:57 localhost nova_compute[280939]: pentium2 Nov 23 04:42:57 localhost nova_compute[280939]: pentium2-v1 Nov 23 04:42:57 localhost nova_compute[280939]: pentium3 Nov 23 04:42:57 localhost nova_compute[280939]: pentium3-v1 Nov 23 04:42:57 localhost nova_compute[280939]: phenom Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: phenom-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: qemu32 Nov 23 04:42:57 localhost nova_compute[280939]: qemu32-v1 Nov 23 04:42:57 localhost nova_compute[280939]: qemu64 Nov 23 04:42:57 localhost nova_compute[280939]: qemu64-v1 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: file Nov 23 04:42:57 localhost nova_compute[280939]: anonymous Nov 23 04:42:57 localhost nova_compute[280939]: memfd Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: disk Nov 23 04:42:57 localhost nova_compute[280939]: cdrom Nov 23 04:42:57 localhost nova_compute[280939]: floppy Nov 23 04:42:57 localhost nova_compute[280939]: lun Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: ide Nov 23 04:42:57 localhost nova_compute[280939]: fdc Nov 23 04:42:57 localhost nova_compute[280939]: scsi Nov 23 04:42:57 localhost nova_compute[280939]: virtio Nov 23 04:42:57 localhost nova_compute[280939]: usb Nov 23 04:42:57 localhost nova_compute[280939]: sata Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: 
virtio Nov 23 04:42:57 localhost nova_compute[280939]: virtio-transitional Nov 23 04:42:57 localhost nova_compute[280939]: virtio-non-transitional Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: vnc Nov 23 04:42:57 localhost nova_compute[280939]: egl-headless Nov 23 04:42:57 localhost nova_compute[280939]: dbus Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: subsystem Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: default Nov 23 04:42:57 localhost nova_compute[280939]: mandatory Nov 23 04:42:57 localhost nova_compute[280939]: requisite Nov 23 04:42:57 localhost nova_compute[280939]: optional Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: usb Nov 23 04:42:57 localhost nova_compute[280939]: pci Nov 23 04:42:57 localhost nova_compute[280939]: scsi Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: virtio Nov 23 04:42:57 localhost nova_compute[280939]: virtio-transitional Nov 23 04:42:57 localhost nova_compute[280939]: virtio-non-transitional Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: random Nov 23 04:42:57 localhost nova_compute[280939]: egd Nov 23 04:42:57 localhost nova_compute[280939]: builtin Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: path Nov 23 04:42:57 localhost nova_compute[280939]: handle Nov 23 04:42:57 localhost nova_compute[280939]: virtiofs Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: tpm-tis Nov 23 04:42:57 localhost nova_compute[280939]: tpm-crb Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: emulator Nov 23 04:42:57 localhost nova_compute[280939]: external Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: 2.0 Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: usb Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 
localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: pty Nov 23 04:42:57 localhost nova_compute[280939]: unix Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: qemu Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: builtin Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: default Nov 23 04:42:57 localhost nova_compute[280939]: passt Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: isa Nov 23 04:42:57 localhost nova_compute[280939]: hyperv Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: null Nov 23 04:42:57 localhost nova_compute[280939]: vc Nov 23 04:42:57 localhost nova_compute[280939]: pty Nov 23 04:42:57 localhost nova_compute[280939]: dev Nov 23 04:42:57 localhost nova_compute[280939]: file Nov 23 04:42:57 localhost nova_compute[280939]: pipe Nov 23 04:42:57 localhost nova_compute[280939]: stdio Nov 23 04:42:57 localhost nova_compute[280939]: udp Nov 23 04:42:57 localhost nova_compute[280939]: tcp Nov 23 04:42:57 localhost nova_compute[280939]: unix Nov 23 04:42:57 localhost nova_compute[280939]: qemu-vdagent Nov 23 04:42:57 localhost nova_compute[280939]: dbus Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: Nov 23 04:42:57 localhost nova_compute[280939]: relaxed Nov 23 04:42:57 localhost nova_compute[280939]: vapic Nov 23 04:42:57 localhost nova_compute[280939]: spinlocks Nov 23 04:42:57 localhost nova_compute[280939]: vpindex Nov 23 04:42:57 localhost nova_compute[280939]: runtime Nov 23 04:42:57 localhost nova_compute[280939]: synic Nov 23 04:42:57 localhost nova_compute[280939]: stimer Nov 23 04:42:57 localhost nova_compute[280939]: reset Nov 23 04:42:57 localhost nova_compute[280939]: vendor_id Nov 23 04:42:57 localhost nova_compute[280939]: frequencies Nov 23 04:42:57 localhost nova_compute[280939]: reenlightenment Nov 23 04:42:57 localhost nova_compute[280939]: tlbflush Nov 23 04:42:57 localhost nova_compute[280939]: ipi Nov 23 04:42:57 localhost nova_compute[280939]: avic Nov 23 04:42:57 localhost 
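The bracketed block above is the payload of nova's _get_domain_capabilities debug entry: the libvirt domainCapabilities XML with its markup lost. As a minimal sketch, assuming libvirt-python is installed and libvirtd is reachable at qemu:///system (the connection URI and the filter on custom CPU models are illustrative choices, not taken from the log), the same capability list can be pulled outside nova:

import libvirt
import xml.etree.ElementTree as ET

# Ask libvirt for the same domainCapabilities document nova dumps above.
conn = libvirt.open("qemu:///system")
caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
conn.close()

# The CPU-model list in the stripped dump lives under <cpu><mode name='custom'>.
root = ET.fromstring(caps_xml)
for model in root.findall("./cpu/mode[@name='custom']/model"):
    print(model.get("usable"), model.text)   # e.g. "yes IvyBridge-IBRS"

The command-line equivalent would be "virsh domcapabilities --virttype kvm --arch x86_64", which prints the XML this dump originally contained.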
Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.000 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.000 280943 INFO nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Secure Boot support detected#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.002 280943 INFO nova.virt.libvirt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.003 280943 INFO nova.virt.libvirt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.013 280943 DEBUG nova.virt.libvirt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.041 280943 INFO nova.virt.node [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Determined node identity c90c5769-42ab-40e9-92fc-3d82b4e96052 from /var/lib/nova/compute_id#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.080 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Verified node c90c5769-42ab-40e9-92fc-3d82b4e96052 matches my host np0005532584.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.122 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.269 280943 DEBUG oslo_concurrency.lockutils [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Acquiring lock "compute_resources" by
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.270 280943 DEBUG oslo_concurrency.lockutils [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.270 280943 DEBUG oslo_concurrency.lockutils [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.271 280943 DEBUG nova.compute.resource_tracker [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.271 280943 DEBUG oslo_concurrency.processutils [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.722 280943 DEBUG oslo_concurrency.processutils [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.875 280943 WARNING nova.virt.libvirt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.876 280943 DEBUG nova.compute.resource_tracker [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12866MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.876 280943 DEBUG oslo_concurrency.lockutils [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:42:57 localhost nova_compute[280939]: 2025-11-23 09:42:57.876 280943 DEBUG oslo_concurrency.lockutils [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.003 280943 DEBUG nova.compute.resource_tracker [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.003 280943 DEBUG 
nova.compute.resource_tracker [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:42:58 localhost python3.9[281151]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.076 280943 DEBUG nova.scheduler.client.report [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.093 280943 DEBUG 
nova.scheduler.client.report [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.094 280943 DEBUG nova.compute.provider_tree [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.117 280943 DEBUG nova.scheduler.client.report [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.138 280943 DEBUG nova.scheduler.client.report [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m 
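The inventory dict logged above for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 is what the resource tracker pushes to Placement; per resource class, the amount available for scheduling works out to (total - reserved) * allocation_ratio. A small sketch with the values copied from the log (the helper name is mine, not nova's):

# Inventory as reported in the log above.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
}

def schedulable_capacity(inv):
    # Placement sizes each resource class as (total - reserved) * allocation_ratio.
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

print(schedulable_capacity(inventory))
# -> {'VCPU': 128.0, 'MEMORY_MB': 15226.0, 'DISK_GB': 41.0}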
Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.153 280943 DEBUG oslo_concurrency.processutils [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:42:58 localhost systemd[1]: Started libpod-conmon-da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424.scope. Nov 23 04:42:58 localhost systemd[1]: Started libcrun container. Nov 23 04:42:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Nov 23 04:42:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 23 04:42:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Nov 23 04:42:58 localhost podman[281178]: 2025-11-23 09:42:58.335390454 +0000 UTC m=+0.131102822 container init da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 04:42:58 localhost podman[281178]: 2025-11-23 09:42:58.345864417 +0000 UTC m=+0.141576795 container start da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', 
'/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 04:42:58 localhost python3.9[281151]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Applying nova statedir ownership Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 
path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/4143dbbec5b08621aa3c8eb364f8a7d3e97604e18b7ed41c4bab0da11ed561fd Nov 23 04:42:58 localhost nova_compute_init[281218]: INFO:nova_statedir:Nova statedir ownership complete Nov 23 04:42:58 localhost systemd[1]: libpod-da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424.scope: Deactivated successfully. Nov 23 04:42:58 localhost podman[281219]: 2025-11-23 09:42:58.419094265 +0000 UTC m=+0.055659577 container died da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm) Nov 23 04:42:58 localhost podman[281232]: 2025-11-23 09:42:58.495146569 +0000 UTC m=+0.074719755 container cleanup da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.license=GPLv2) Nov 23 04:42:58 localhost systemd[1]: libpod-conmon-da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424.scope: Deactivated successfully. 
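The nova_compute_init lines above record a one-shot ownership pass over /var/lib/nova: every path not already owned by 42436:42436 is chown'd, the path named in NOVA_STATEDIR_OWNERSHIP_SKIP (/var/lib/nova/compute_id) is skipped, and the SELinux context is reset to system_u:object_r:container_file_t:s0. A simplified sketch of that logic, not the actual /sbin/nova_statedir_ownership.py shipped in the image (the SELinux step is omitted here):

import os

TARGET_UID = TARGET_GID = 42436  # nova uid/gid reported in the log
SKIP = {os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "/var/lib/nova/compute_id")}

def fix_statedir_ownership(statedir="/var/lib/nova"):
    for dirpath, dirnames, filenames in os.walk(statedir):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path in SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                # Mirrors "Changing ownership of ... from 1000:1000 to 42436:42436"
                os.chown(path, TARGET_UID, TARGET_GID, follow_symlinks=False)

# fix_statedir_ownership()  # needs root, as the init container runs it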
Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.653 280943 DEBUG oslo_concurrency.processutils [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.660 280943 DEBUG nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Nov 23 04:42:58 localhost nova_compute[280939]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.660 280943 INFO nova.virt.libvirt.host [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] kernel doesn't support AMD SEV#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.661 280943 DEBUG nova.compute.provider_tree [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.661 280943 DEBUG nova.virt.libvirt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.752 280943 DEBUG nova.scheduler.client.report [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.784 280943 DEBUG nova.compute.resource_tracker [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.785 280943 DEBUG oslo_concurrency.lockutils [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.785 280943 DEBUG nova.service [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.812 280943 DEBUG nova.service [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Join ServiceGroup membership for this 
service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Nov 23 04:42:58 localhost nova_compute[280939]: 2025-11-23 09:42:58.812 280943 DEBUG nova.servicegroup.drivers.db [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] DB_Driver: join new ServiceGroup member np0005532584.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Nov 23 04:42:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:42:59 localhost podman[281278]: 2025-11-23 09:42:59.143882167 +0000 UTC m=+0.078939694 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:42:59 localhost podman[281278]: 2025-11-23 09:42:59.158472206 +0000 UTC m=+0.093529773 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, 
maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:42:59 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:42:59 localhost systemd[1]: var-lib-containers-storage-overlay-67ad61ffa011d2a6ffe840fd10b604169240085fe7d81fe96db01b6b54a383bd-merged.mount: Deactivated successfully. Nov 23 04:42:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da304ece88471592b5edf54647755d57b992711bb55a5f385bc46c006232b424-userdata-shm.mount: Deactivated successfully. Nov 23 04:42:59 localhost systemd[1]: session-59.scope: Deactivated successfully. Nov 23 04:42:59 localhost systemd[1]: session-59.scope: Consumed 1min 27.562s CPU time. Nov 23 04:42:59 localhost systemd-logind[760]: Session 59 logged out. Waiting for processes to exit. Nov 23 04:42:59 localhost systemd-logind[760]: Removed session 59. Nov 23 04:43:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63089 DF PROTO=TCP SPT=54012 DPT=9102 SEQ=614466098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B863E0F0000000001030307) Nov 23 04:43:06 localhost openstack_network_exporter[241732]: ERROR 09:43:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:43:06 localhost openstack_network_exporter[241732]: ERROR 09:43:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:43:06 localhost openstack_network_exporter[241732]: ERROR 09:43:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:43:06 localhost openstack_network_exporter[241732]: ERROR 09:43:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:43:06 localhost openstack_network_exporter[241732]: Nov 23 04:43:06 localhost openstack_network_exporter[241732]: ERROR 09:43:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:43:06 localhost openstack_network_exporter[241732]: Nov 23 04:43:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:43:06 localhost systemd[1]: tmp-crun.hqlYNE.mount: Deactivated successfully. 
Nov 23 04:43:06 localhost podman[281301]: 2025-11-23 09:43:06.884854571 +0000 UTC m=+0.071965897 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6) Nov 23 04:43:06 localhost podman[281301]: 2025-11-23 09:43:06.897468247 +0000 UTC m=+0.084579633 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public) Nov 23 04:43:06 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 04:43:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:43:09.724 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:43:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:43:09.725 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:43:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:43:09.725 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:43:09 localhost nova_compute[280939]: 2025-11-23 09:43:09.816 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:09 localhost nova_compute[280939]: 2025-11-23 09:43:09.833 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.573 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no 
resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:43:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:43:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:43:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:43:13 localhost podman[281323]: 2025-11-23 09:43:13.897749258 +0000 UTC m=+0.084527302 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:43:13 localhost podman[281323]: 2025-11-23 09:43:13.908449126 +0000 UTC m=+0.095227150 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 23 04:43:13 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:43:14 localhost podman[281324]: 2025-11-23 09:43:14.000294491 +0000 UTC m=+0.183526896 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:43:14 localhost podman[281324]: 2025-11-23 09:43:14.008028298 +0000 UTC m=+0.191260733 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:43:14 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:43:17 localhost podman[239764]: time="2025-11-23T09:43:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:43:17 localhost podman[239764]: @ - - [23/Nov/2025:09:43:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:43:17 localhost podman[239764]: @ - - [23/Nov/2025:09:43:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16744 "" "Go-http-client/1.1" Nov 23 04:43:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30646 DF PROTO=TCP SPT=36104 DPT=9102 SEQ=3289832149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8677930000000001030307) Nov 23 04:43:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30647 DF PROTO=TCP SPT=36104 DPT=9102 SEQ=3289832149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B867B8F0000000001030307) Nov 23 04:43:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:43:20 localhost podman[281366]: 2025-11-23 09:43:20.883736689 +0000 UTC m=+0.071874814 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:43:20 localhost podman[281366]: 2025-11-23 09:43:20.918420662 +0000 UTC m=+0.106558837 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:43:20 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:43:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63090 DF PROTO=TCP SPT=54012 DPT=9102 SEQ=614466098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B867E0F0000000001030307) Nov 23 04:43:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30648 DF PROTO=TCP SPT=36104 DPT=9102 SEQ=3289832149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B86838F0000000001030307) Nov 23 04:43:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39364 DF PROTO=TCP SPT=44788 DPT=9102 SEQ=2083171469 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B86880F0000000001030307) Nov 23 04:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 04:43:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30649 DF PROTO=TCP SPT=36104 DPT=9102 SEQ=3289832149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B86934F0000000001030307) Nov 23 04:43:26 localhost systemd[1]: tmp-crun.9PAEPO.mount: Deactivated successfully. Nov 23 04:43:26 localhost podman[281467]: 2025-11-23 09:43:26.480353085 +0000 UTC m=+0.079585581 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:43:26 localhost podman[281467]: 2025-11-23 09:43:26.49223704 +0000 UTC m=+0.091469566 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Nov 23 04:43:26 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:43:26 localhost podman[281468]: 2025-11-23 09:43:26.536233778 +0000 UTC m=+0.131504212 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:43:26 localhost podman[281468]: 2025-11-23 09:43:26.623363599 +0000 UTC m=+0.218634033 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:43:26 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:43:27 localhost podman[281567]: Nov 23 04:43:27 localhost podman[281567]: 2025-11-23 09:43:27.061876849 +0000 UTC m=+0.077083133 container create 4c01bfbde8a77ddfb47d3a8f648e7ef277eb76027eea78ae9e1130c9776d126d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_liskov, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=553, GIT_BRANCH=main, RELEASE=main) Nov 23 04:43:27 localhost systemd[1]: Started libpod-conmon-4c01bfbde8a77ddfb47d3a8f648e7ef277eb76027eea78ae9e1130c9776d126d.scope. Nov 23 04:43:27 localhost systemd[1]: Started libcrun container. Nov 23 04:43:27 localhost podman[281567]: 2025-11-23 09:43:27.021772451 +0000 UTC m=+0.036978795 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:43:27 localhost podman[281567]: 2025-11-23 09:43:27.133196406 +0000 UTC m=+0.148402690 container init 4c01bfbde8a77ddfb47d3a8f648e7ef277eb76027eea78ae9e1130c9776d126d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_liskov, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-type=git, version=7) Nov 23 04:43:27 localhost podman[281567]: 2025-11-23 09:43:27.143394249 +0000 UTC m=+0.158600543 container start 4c01bfbde8a77ddfb47d3a8f648e7ef277eb76027eea78ae9e1130c9776d126d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_liskov, io.buildah.version=1.33.12, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, version=7, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, name=rhceph, CEPH_POINT_RELEASE=, release=553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:43:27 localhost podman[281567]: 2025-11-23 09:43:27.143792081 +0000 UTC m=+0.158998365 container attach 4c01bfbde8a77ddfb47d3a8f648e7ef277eb76027eea78ae9e1130c9776d126d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_liskov, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, maintainer=Guillaume Abrioux , version=7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:43:27 localhost relaxed_liskov[281583]: 167 167 Nov 23 04:43:27 localhost systemd[1]: libpod-4c01bfbde8a77ddfb47d3a8f648e7ef277eb76027eea78ae9e1130c9776d126d.scope: Deactivated successfully. 
Nov 23 04:43:27 localhost podman[281567]: 2025-11-23 09:43:27.146866325 +0000 UTC m=+0.162072609 container died 4c01bfbde8a77ddfb47d3a8f648e7ef277eb76027eea78ae9e1130c9776d126d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_liskov, vendor=Red Hat, Inc., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, distribution-scope=public, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, io.buildah.version=1.33.12, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:43:27 localhost podman[281588]: 2025-11-23 09:43:27.246623042 +0000 UTC m=+0.085353006 container remove 4c01bfbde8a77ddfb47d3a8f648e7ef277eb76027eea78ae9e1130c9776d126d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_liskov, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, release=553, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:43:27 localhost systemd[1]: libpod-conmon-4c01bfbde8a77ddfb47d3a8f648e7ef277eb76027eea78ae9e1130c9776d126d.scope: Deactivated successfully. 
Nov 23 04:43:27 localhost podman[281610]: Nov 23 04:43:27 localhost podman[281610]: 2025-11-23 09:43:27.454377711 +0000 UTC m=+0.068742338 container create 9f7a9cb4d13c422f3c8326bc5d69e0b4a72d5e71cea634aacc60e68562b43f03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_chaplygin, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:43:27 localhost systemd[1]: var-lib-containers-storage-overlay-dbc506dff3733c66d8229ad25c99aa05f3e714102d05d0b8db878c462531f0b1-merged.mount: Deactivated successfully. Nov 23 04:43:27 localhost systemd[1]: Started libpod-conmon-9f7a9cb4d13c422f3c8326bc5d69e0b4a72d5e71cea634aacc60e68562b43f03.scope. Nov 23 04:43:27 localhost systemd[1]: Started libcrun container. Nov 23 04:43:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec05346d4ef34204b2e2a3566e4299080bfdac8dd5956a8f8e752a414a2b767/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 23 04:43:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec05346d4ef34204b2e2a3566e4299080bfdac8dd5956a8f8e752a414a2b767/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 04:43:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec05346d4ef34204b2e2a3566e4299080bfdac8dd5956a8f8e752a414a2b767/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 04:43:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec05346d4ef34204b2e2a3566e4299080bfdac8dd5956a8f8e752a414a2b767/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 04:43:27 localhost podman[281610]: 2025-11-23 09:43:27.514663519 +0000 UTC m=+0.129028156 container init 9f7a9cb4d13c422f3c8326bc5d69e0b4a72d5e71cea634aacc60e68562b43f03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_chaplygin, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, 
release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True) Nov 23 04:43:27 localhost podman[281610]: 2025-11-23 09:43:27.528603766 +0000 UTC m=+0.142968403 container start 9f7a9cb4d13c422f3c8326bc5d69e0b4a72d5e71cea634aacc60e68562b43f03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_chaplygin, distribution-scope=public, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=) Nov 23 04:43:27 localhost podman[281610]: 2025-11-23 09:43:27.528849833 +0000 UTC m=+0.143214470 container attach 9f7a9cb4d13c422f3c8326bc5d69e0b4a72d5e71cea634aacc60e68562b43f03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_chaplygin, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.expose-services=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:43:27 localhost podman[281610]: 2025-11-23 09:43:27.430019704 +0000 UTC m=+0.044384321 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:43:28 localhost boring_chaplygin[281624]: [ Nov 23 04:43:28 localhost boring_chaplygin[281624]: { Nov 23 04:43:28 localhost boring_chaplygin[281624]: "available": false, Nov 23 04:43:28 localhost boring_chaplygin[281624]: "ceph_device": false, Nov 23 04:43:28 localhost boring_chaplygin[281624]: "device_id": "QEMU_DVD-ROM_QM00001", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "lsm_data": {}, Nov 23 04:43:28 localhost boring_chaplygin[281624]: "lvs": [], Nov 23 04:43:28 localhost 
boring_chaplygin[281624]: "path": "/dev/sr0", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "rejected_reasons": [ Nov 23 04:43:28 localhost boring_chaplygin[281624]: "Has a FileSystem", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "Insufficient space (<5GB)" Nov 23 04:43:28 localhost boring_chaplygin[281624]: ], Nov 23 04:43:28 localhost boring_chaplygin[281624]: "sys_api": { Nov 23 04:43:28 localhost boring_chaplygin[281624]: "actuators": null, Nov 23 04:43:28 localhost boring_chaplygin[281624]: "device_nodes": "sr0", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "human_readable_size": "482.00 KB", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "id_bus": "ata", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "model": "QEMU DVD-ROM", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "nr_requests": "2", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "partitions": {}, Nov 23 04:43:28 localhost boring_chaplygin[281624]: "path": "/dev/sr0", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "removable": "1", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "rev": "2.5+", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "ro": "0", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "rotational": "1", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "sas_address": "", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "sas_device_handle": "", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "scheduler_mode": "mq-deadline", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "sectors": 0, Nov 23 04:43:28 localhost boring_chaplygin[281624]: "sectorsize": "2048", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "size": 493568.0, Nov 23 04:43:28 localhost boring_chaplygin[281624]: "support_discard": "0", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "type": "disk", Nov 23 04:43:28 localhost boring_chaplygin[281624]: "vendor": "QEMU" Nov 23 04:43:28 localhost boring_chaplygin[281624]: } Nov 23 04:43:28 localhost boring_chaplygin[281624]: } Nov 23 04:43:28 localhost boring_chaplygin[281624]: ] Nov 23 04:43:28 localhost systemd[1]: libpod-9f7a9cb4d13c422f3c8326bc5d69e0b4a72d5e71cea634aacc60e68562b43f03.scope: Deactivated successfully. 
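Editor's note: the boring_chaplygin container output above (04:43:27-04:43:28) is the JSON device inventory that cephadm collects through the rhceph image; here /dev/sr0 is rejected ("Has a FileSystem", "Insufficient space (<5GB)"), so no OSD can be placed on it. Below is a minimal sketch of filtering such inventory output for usable devices, assuming the JSON array printed between the "[" and "]" lines has been saved verbatim to a file named inventory.json (a hypothetical filename); the field names ("available", "path", "rejected_reasons", "sys_api") are taken from the log above, not from any other source.

    import json

    # Assumed: inventory.json holds the JSON array emitted by the container above.
    with open("inventory.json") as fh:
        devices = json.load(fh)

    for dev in devices:
        size_gib = dev.get("sys_api", {}).get("size", 0) / (1024 ** 3)
        if dev.get("available"):
            print(f"usable:   {dev['path']} ({size_gib:.2f} GiB)")
        else:
            # /dev/sr0 in the log is rejected for having a filesystem and being < 5 GB
            reasons = ", ".join(dev.get("rejected_reasons", []))
            print(f"rejected: {dev['path']} ({size_gib:.2f} GiB): {reasons}")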
Nov 23 04:43:28 localhost podman[281610]: 2025-11-23 09:43:28.384621405 +0000 UTC m=+0.998986012 container died 9f7a9cb4d13c422f3c8326bc5d69e0b4a72d5e71cea634aacc60e68562b43f03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_chaplygin, RELEASE=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, version=7, name=rhceph, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True) Nov 23 04:43:28 localhost podman[283267]: 2025-11-23 09:43:28.462702358 +0000 UTC m=+0.068173971 container remove 9f7a9cb4d13c422f3c8326bc5d69e0b4a72d5e71cea634aacc60e68562b43f03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_chaplygin, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, RELEASE=main, GIT_BRANCH=main) Nov 23 04:43:28 localhost systemd[1]: libpod-conmon-9f7a9cb4d13c422f3c8326bc5d69e0b4a72d5e71cea634aacc60e68562b43f03.scope: Deactivated successfully. Nov 23 04:43:28 localhost systemd[1]: var-lib-containers-storage-overlay-8ec05346d4ef34204b2e2a3566e4299080bfdac8dd5956a8f8e752a414a2b767-merged.mount: Deactivated successfully. Nov 23 04:43:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:43:29 localhost systemd[1]: tmp-crun.V3exuj.mount: Deactivated successfully. 
Nov 23 04:43:29 localhost podman[283300]: 2025-11-23 09:43:29.909322319 +0000 UTC m=+0.093452165 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:43:29 localhost podman[283300]: 2025-11-23 09:43:29.922411421 +0000 UTC m=+0.106541257 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:43:29 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 04:43:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:43:30.222 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:43:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:43:30.223 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 04:43:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:43:34.225 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:43:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30650 DF PROTO=TCP SPT=36104 DPT=9102 SEQ=3289832149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B86B4100000000001030307) Nov 23 04:43:36 localhost openstack_network_exporter[241732]: ERROR 09:43:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:43:36 localhost openstack_network_exporter[241732]: ERROR 09:43:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:43:36 localhost openstack_network_exporter[241732]: ERROR 09:43:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:43:36 localhost openstack_network_exporter[241732]: ERROR 09:43:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:43:36 localhost openstack_network_exporter[241732]: Nov 23 04:43:36 localhost openstack_network_exporter[241732]: ERROR 09:43:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:43:36 localhost openstack_network_exporter[241732]: Nov 23 04:43:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
Nov 23 04:43:37 localhost podman[283322]: 2025-11-23 09:43:37.882373337 +0000 UTC m=+0.073966647 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public) Nov 23 04:43:37 localhost podman[283322]: 2025-11-23 09:43:37.891444865 +0000 UTC m=+0.083038185 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41) Nov 23 04:43:37 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:43:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:43:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:43:44 localhost podman[283342]: 2025-11-23 09:43:44.901868028 +0000 UTC m=+0.087714499 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118) Nov 23 04:43:44 localhost systemd[1]: tmp-crun.iXUQ6f.mount: Deactivated successfully. 
Nov 23 04:43:44 localhost podman[283343]: 2025-11-23 09:43:44.950851439 +0000 UTC m=+0.133135890 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:43:44 localhost podman[283343]: 2025-11-23 09:43:44.961385642 +0000 UTC m=+0.143670133 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:43:44 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:43:45 localhost podman[283342]: 2025-11-23 09:43:45.018957298 +0000 UTC m=+0.204803749 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm) Nov 23 04:43:45 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
Nov 23 04:43:47 localhost podman[239764]: time="2025-11-23T09:43:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:43:47 localhost podman[239764]: @ - - [23/Nov/2025:09:43:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:43:47 localhost podman[239764]: @ - - [23/Nov/2025:09:43:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16754 "" "Go-http-client/1.1" Nov 23 04:43:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11751 DF PROTO=TCP SPT=41594 DPT=9102 SEQ=2301459237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B86ECC40000000001030307) Nov 23 04:43:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11752 DF PROTO=TCP SPT=41594 DPT=9102 SEQ=2301459237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B86F0CF0000000001030307) Nov 23 04:43:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30651 DF PROTO=TCP SPT=36104 DPT=9102 SEQ=3289832149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B86F40F0000000001030307) Nov 23 04:43:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:43:51 localhost systemd[1]: tmp-crun.rpBnyG.mount: Deactivated successfully. 
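Editor's note: the kernel "DROPPING:" entries are netfilter LOG-target output for repeated TCP SYNs from 192.168.122.10 to port 9102 on br-ex that the host firewall drops; the identical SEQ retried at 04:43:49/50/51 is a single connection attempt backing off. A throwaway sketch for pulling the address and port fields out of such lines, assuming only the key=value layout visible above (the sample line is abridged from the log):

    import re

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e "
            "SRC=192.168.122.10 DST=192.168.122.106 PROTO=TCP SPT=41594 DPT=9102")

    # key=value pairs as emitted by the netfilter LOG target
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    print(f"{fields['SRC']}:{fields['SPT']} -> {fields['DST']}:{fields['DPT']} ({fields['PROTO']})")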
Nov 23 04:43:51 localhost podman[283381]: 2025-11-23 09:43:51.886062916 +0000 UTC m=+0.073152153 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:43:51 localhost podman[283381]: 2025-11-23 09:43:51.916096027 +0000 UTC m=+0.103185304 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:43:51 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:43:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11753 DF PROTO=TCP SPT=41594 DPT=9102 SEQ=2301459237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B86F8CF0000000001030307) Nov 23 04:43:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63091 DF PROTO=TCP SPT=54012 DPT=9102 SEQ=614466098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B86FC0F0000000001030307) Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.134 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.135 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.135 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.136 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.149 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.150 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.151 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.151 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.151 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.152 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.152 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.152 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.153 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.170 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.171 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.171 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.171 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.172 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:43:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11754 DF PROTO=TCP SPT=41594 DPT=9102 SEQ=2301459237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87088F0000000001030307) Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.636 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:43:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:43:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.860 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.862 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12854MB free_disk=41.83708190917969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.862 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.863 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:43:56 localhost systemd[1]: tmp-crun.2XVF7P.mount: Deactivated successfully. 
Nov 23 04:43:56 localhost podman[283423]: 2025-11-23 09:43:56.922126111 +0000 UTC m=+0.100148881 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.945 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.946 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:43:56 localhost systemd[1]: tmp-crun.shB5BG.mount: Deactivated successfully. 
Nov 23 04:43:56 localhost podman[283424]: 2025-11-23 09:43:56.966139069 +0000 UTC m=+0.140214478 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 23 04:43:56 localhost nova_compute[280939]: 2025-11-23 09:43:56.975 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:43:56 localhost podman[283423]: 2025-11-23 09:43:56.983392388 +0000 UTC m=+0.161415198 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:43:57 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:43:57 localhost podman[283424]: 2025-11-23 09:43:57.040708925 +0000 UTC m=+0.214784304 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 04:43:57 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:43:57 localhost nova_compute[280939]: 2025-11-23 09:43:57.449 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:43:57 localhost nova_compute[280939]: 2025-11-23 09:43:57.456 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:43:57 localhost nova_compute[280939]: 2025-11-23 09:43:57.483 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:43:57 localhost nova_compute[280939]: 2025-11-23 09:43:57.486 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:43:57 localhost nova_compute[280939]: 2025-11-23 09:43:57.486 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:44:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
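Editor's note: the resource-tracker audit above shells out twice to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" (roughly 0.46-0.47 s per call) to derive the DISK_GB inventory it reports to placement. A rough sketch of the same probe follows, assuming the standard ceph df JSON layout with a top-level "stats" object carrying total_bytes and total_avail_bytes; this is an illustration of the command shown in the log, not Nova's actual RBD driver code.

    import json
    import subprocess

    # Same command the log shows oslo_concurrency.processutils running.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    stats = json.loads(out)["stats"]   # assumed top-level key of `ceph df` JSON output
    gib = 1024 ** 3
    print(f"total: {stats['total_bytes'] / gib:.2f} GiB, "
          f"free: {stats['total_avail_bytes'] / gib:.2f} GiB")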
Nov 23 04:44:00 localhost podman[283488]: 2025-11-23 09:44:00.296205032 +0000 UTC m=+0.075551257 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:44:00 localhost podman[283488]: 2025-11-23 09:44:00.303248058 +0000 UTC m=+0.082594283 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:44:00 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
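Editor's note: each "Started /usr/bin/podman healthcheck run <id>" line is a systemd transient unit firing that container's configured healthcheck; podman then records the matching container health_status and exec_died events seen above, and the unit deactivates. A small sketch of driving the same check by hand and reading the result, assuming "podman healthcheck run" exits 0 when the check passes (the container name node_exporter and its test command are taken from the log):

    import subprocess

    # Runs the same healthcheck systemd launches for the node_exporter container.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "node_exporter"],
        capture_output=True, text=True,
    )
    # Assumption: return code 0 means the configured test
    # (/openstack/healthcheck node_exporter) reported healthy.
    print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")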
Nov 23 04:44:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11755 DF PROTO=TCP SPT=41594 DPT=9102 SEQ=2301459237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87280F0000000001030307) Nov 23 04:44:06 localhost openstack_network_exporter[241732]: ERROR 09:44:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:44:06 localhost openstack_network_exporter[241732]: ERROR 09:44:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:44:06 localhost openstack_network_exporter[241732]: ERROR 09:44:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:44:06 localhost openstack_network_exporter[241732]: ERROR 09:44:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:44:06 localhost openstack_network_exporter[241732]: Nov 23 04:44:06 localhost openstack_network_exporter[241732]: ERROR 09:44:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:44:06 localhost openstack_network_exporter[241732]: Nov 23 04:44:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:44:08 localhost podman[283510]: 2025-11-23 09:44:08.897184568 +0000 UTC m=+0.080645153 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=) Nov 23 04:44:08 localhost podman[283510]: 2025-11-23 09:44:08.910540348 +0000 UTC m=+0.094000953 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible) Nov 23 04:44:08 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:44:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:44:09.725 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:44:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:44:09.725 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:44:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:44:09.725 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:44:14 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Nov 23 04:44:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:44:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
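The ovn_metadata_agent DEBUG trio above (Acquiring lock / Lock acquired / Lock "released" for "_check_child_processes") is the standard oslo.concurrency lockutils wrapper logging around the agent's process-monitor loop. Below is a minimal sketch of the pattern that produces exactly these three lines, assuming only that oslo.concurrency is installed; the lock name and the empty function body are illustrative stand-ins, not Neutron's actual implementation.

    # Sketch of the oslo.concurrency pattern behind the
    # 'Acquiring lock ... / Lock ... acquired / Lock ... "released"' DEBUG lines.
    import logging

    from oslo_concurrency import lockutils

    # DEBUG level is needed for lockutils' acquire/release messages to appear.
    logging.basicConfig(level=logging.DEBUG)


    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # A real agent walks its monitored child processes here; this stub
        # exists only to trigger the lock acquire/release logging.
        pass


    check_child_processes()

With DEBUG enabled, each call logs the acquisition, the wait time, and the hold time, which is what makes the 0.000s figures in the lines above useful for spotting lock contention.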
Nov 23 04:44:15 localhost podman[283543]: 2025-11-23 09:44:15.925722006 +0000 UTC m=+0.075793888 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:44:15 localhost podman[283543]: 2025-11-23 09:44:15.93049836 +0000 UTC m=+0.080570232 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:44:15 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
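Each "Started /usr/bin/podman healthcheck run <id>" unit above is a systemd-driven transient service that runs the container's configured healthcheck; the health_status=healthy event, the exec_died event, and the closing "Deactivated successfully." together make up one completed probe. Outside the journal, the most recent result can be read back with podman inspect; the following is a sketch, assuming the podman CLI seen in these logs and using the podman_exporter container named above (the key holding health data varies between podman versions, so both are tried).

    # Sketch: read a container's last recorded health state via 'podman inspect'.
    import json
    import subprocess

    name = "podman_exporter"
    raw = subprocess.run(
        ["podman", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(raw)[0]["State"]
    # Depending on the podman version the health block is "Health" or "Healthcheck".
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(name, health.get("Status", "no healthcheck recorded"))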
Nov 23 04:44:15 localhost podman[283532]: 2025-11-23 09:44:15.896024758 +0000 UTC m=+0.086608672 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:44:15 localhost podman[283532]: 2025-11-23 09:44:15.980588298 +0000 UTC m=+0.171172262 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 04:44:15 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:44:17 localhost podman[239764]: time="2025-11-23T09:44:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:44:17 localhost podman[239764]: @ - - [23/Nov/2025:09:44:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:44:17 localhost podman[239764]: @ - - [23/Nov/2025:09:44:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16752 "" "Go-http-client/1.1" Nov 23 04:44:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49015 DF PROTO=TCP SPT=38702 DPT=9102 SEQ=1863427701 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8761F30000000001030307) Nov 23 04:44:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49016 DF PROTO=TCP SPT=38702 DPT=9102 SEQ=1863427701 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87660F0000000001030307) Nov 23 04:44:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11756 DF PROTO=TCP SPT=41594 DPT=9102 SEQ=2301459237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87680F0000000001030307) Nov 23 04:44:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49017 DF PROTO=TCP SPT=38702 DPT=9102 SEQ=1863427701 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B876E0F0000000001030307) Nov 23 04:44:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
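The recurring kernel "DROPPING:" lines above come from a netfilter LOG rule on br-ex. The same SRC/DST pair keeps retrying TCP SYNs toward port 9102 with an unchanged SEQ per source port, i.e. the peer's connection attempts are being retransmitted and dropped rather than answered. A small stdlib-only sketch for extracting those fields and counting drops per flow follows; the input path is an assumption (use wherever kernel messages land on this host, e.g. the output of journalctl -k).

    # Sketch: parse the kernel 'DROPPING:' lines and group them by TCP flow,
    # making the SYN retransmissions toward port 9102 visible.
    import re
    from collections import Counter

    pattern = re.compile(
        r"DROPPING: IN=(?P<iface>\S+).*?"
        r"SRC=(?P<src>\S+) DST=(?P<dst>\S+).*?"
        r"PROTO=(?P<proto>\S+) SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
    )

    flows = Counter()
    with open("/var/log/messages") as log:   # assumed location of kernel messages
        for line in log:
            m = pattern.search(line)
            if m:
                flows[(m["src"], m["dst"], m["dpt"], m["spt"])] += 1

    for (src, dst, dpt, spt), count in flows.most_common(10):
        print(f"{src} -> {dst}:{dpt} (sport {spt}): {count} dropped packets")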
Nov 23 04:44:22 localhost podman[283571]: 2025-11-23 09:44:22.91542885 +0000 UTC m=+0.099866028 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:44:22 localhost podman[283571]: 2025-11-23 09:44:22.951090968 +0000 UTC m=+0.135528136 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 23 04:44:22 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:44:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30652 DF PROTO=TCP SPT=36104 DPT=9102 SEQ=3289832149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87720F0000000001030307) Nov 23 04:44:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49018 DF PROTO=TCP SPT=38702 DPT=9102 SEQ=1863427701 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B877DCF0000000001030307) Nov 23 04:44:26 localhost ovn_metadata_agent[159410]: 2025-11-23 09:44:26.721 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:44:26 localhost ovn_metadata_agent[159410]: 2025-11-23 09:44:26.722 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 04:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:44:27 localhost systemd[1]: tmp-crun.X4QO5o.mount: Deactivated successfully. 
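The SbGlobalUpdateEvent above shows the southbound SB_Global.nb_cfg counter moving from 6 to 7, with the agent deliberately waiting 4 seconds before acknowledging it in its Chassis_Private row (the DbSetCommand that follows at 09:44:30 below). A read-only sketch for checking both values from this node, assuming ovn-sbctl is installed and can reach the southbound database from the host (otherwise the same commands can be run via podman exec in the ovn_controller container):

    # Sketch: read back SB_Global.nb_cfg and the chassis' external_ids,
    # the two values the metadata agent is reconciling in the lines above.
    import json
    import subprocess

    def sbctl(*args):
        return subprocess.run(
            ["ovn-sbctl", "--format=json", *args],
            check=True, capture_output=True, text=True,
        ).stdout

    nb_cfg = sbctl("--columns=nb_cfg", "list", "SB_Global")
    chassis = sbctl("--columns=name,external_ids", "list", "Chassis_Private")
    print(json.loads(nb_cfg)["data"])
    print(json.loads(chassis)["data"])

Once the delayed write lands, the chassis row's external_ids should carry neutron:ovn-metadata-sb-cfg equal to the nb_cfg value just observed (7 in this trace).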
Nov 23 04:44:27 localhost podman[283591]: 2025-11-23 09:44:27.901329143 +0000 UTC m=+0.084799587 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 23 04:44:27 localhost systemd[1]: tmp-crun.nPgOrv.mount: Deactivated successfully. Nov 23 04:44:27 localhost podman[283590]: 2025-11-23 09:44:27.972336968 +0000 UTC m=+0.157405531 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 04:44:27 localhost podman[283591]: 2025-11-23 09:44:27.979475452 +0000 UTC m=+0.162945816 container exec_died 
900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 04:44:27 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:44:28 localhost podman[283590]: 2025-11-23 09:44:28.032413686 +0000 UTC m=+0.217482259 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible) Nov 23 04:44:28 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
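The config_data dict carried in each health event above is the edpm_ansible-rendered container definition. Purely as an illustration of how those keys correspond to podman options (this is not the module's actual renderer, and the volume list is truncated to two entries from the ovn_controller definition above):

    # Illustrative only: rough mapping from a config_data dict to 'podman run'.
    config = {
        "image": "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
        "net": "host",
        "privileged": True,
        "restart": "always",
        "user": "root",
        "volumes": ["/lib/modules:/lib/modules:ro", "/run:/run"],  # truncated list
    }

    cmd = ["podman", "run", "--name", "ovn_controller", "--detach"]
    cmd += ["--net", config["net"], "--user", config["user"], "--restart", config["restart"]]
    if config.get("privileged"):
        cmd.append("--privileged")
    for volume in config["volumes"]:
        cmd += ["--volume", volume]
    cmd.append(config["image"])
    print(" ".join(cmd))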
Nov 23 04:44:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:44:30.723 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:44:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:44:30 localhost podman[283701]: 2025-11-23 09:44:30.899027541 +0000 UTC m=+0.083588792 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:44:30 localhost podman[283701]: 2025-11-23 09:44:30.93542802 +0000 UTC m=+0.119989241 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:44:30 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:44:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49019 DF PROTO=TCP SPT=38702 DPT=9102 SEQ=1863427701 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B879E0F0000000001030307) Nov 23 04:44:36 localhost openstack_network_exporter[241732]: ERROR 09:44:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:44:36 localhost openstack_network_exporter[241732]: ERROR 09:44:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:44:36 localhost openstack_network_exporter[241732]: ERROR 09:44:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:44:36 localhost openstack_network_exporter[241732]: ERROR 09:44:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:44:36 localhost openstack_network_exporter[241732]: Nov 23 04:44:36 localhost openstack_network_exporter[241732]: ERROR 09:44:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:44:36 localhost openstack_network_exporter[241732]: Nov 23 04:44:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:44:39 localhost podman[283742]: 2025-11-23 09:44:39.894770754 +0000 UTC m=+0.084195530 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Nov 23 04:44:39 localhost podman[283742]: 2025-11-23 09:44:39.936456331 +0000 UTC m=+0.125881087 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=) Nov 23 04:44:39 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
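The openstack_network_exporter errors at 09:44:36 above repeat every polling cycle: its appctl calls look for ovs/ovn control sockets, and per the container's volume list the host paths behind its /run/openvswitch and /run/ovn are /var/run/openvswitch and /var/lib/openvswitch/ovn. The ovn-northd complaint is likely expected on a compute node, where northd does not run; the ovsdb-server one is worth checking. A quick host-side check for those sockets follows (the *.ctl naming is the usual ovs/ovn convention, stated here as an assumption):

    # Sketch: look for the control sockets the exporter's appctl calls expect,
    # at the host paths mounted into the container per its config_data above.
    import glob

    for pattern in (
        "/var/run/openvswitch/*.ctl",        # ovs-vswitchd / ovsdb-server sockets
        "/var/lib/openvswitch/ovn/*.ctl",    # ovn-controller (and northd, if local)
    ):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control sockets found")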
Nov 23 04:44:46 localhost podman[283763]: 2025-11-23 09:44:46.916146175 +0000 UTC m=+0.092616132 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute) Nov 23 04:44:46 localhost podman[283763]: 2025-11-23 09:44:46.925830594 +0000 UTC m=+0.102300561 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 04:44:46 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:44:47 localhost podman[283764]: 2025-11-23 09:44:47.021152736 +0000 UTC m=+0.195638925 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:44:47 localhost podman[283764]: 2025-11-23 09:44:47.034423163 +0000 UTC m=+0.208909322 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:44:47 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:44:47 localhost podman[239764]: time="2025-11-23T09:44:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:44:47 localhost podman[239764]: @ - - [23/Nov/2025:09:44:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:44:47 localhost podman[239764]: @ - - [23/Nov/2025:09:44:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16757 "" "Go-http-client/1.1" Nov 23 04:44:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53730 DF PROTO=TCP SPT=42056 DPT=9102 SEQ=2465560541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87D7240000000001030307) Nov 23 04:44:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53731 DF PROTO=TCP SPT=42056 DPT=9102 SEQ=2465560541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87DB0F0000000001030307) Nov 23 04:44:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49020 DF PROTO=TCP SPT=38702 DPT=9102 SEQ=1863427701 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87DE0F0000000001030307) Nov 23 04:44:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53732 DF PROTO=TCP SPT=42056 DPT=9102 SEQ=2465560541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87E30F0000000001030307) Nov 23 04:44:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11757 DF PROTO=TCP SPT=41594 DPT=9102 SEQ=2301459237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87E60F0000000001030307) Nov 23 04:44:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
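The podman[239764] access-log lines above are the podman system service answering libpod REST calls over /run/podman/podman.sock, which is the CONTAINER_HOST the podman_exporter container is configured with earlier in this log. The same containers/json query can be issued with nothing but the standard library; a sketch, assuming read access to the socket (normally root):

    # Sketch: the containers/json call seen in the access log, spoken directly
    # over the podman unix socket using only the standard library.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket; the host value is only a placeholder."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c.get("Names"), c.get("State"))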
Nov 23 04:44:53 localhost podman[283806]: 2025-11-23 09:44:53.91785642 +0000 UTC m=+0.104998942 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Nov 23 04:44:53 localhost podman[283806]: 2025-11-23 09:44:53.925432997 +0000 UTC m=+0.112575479 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:44:53 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:44:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53733 DF PROTO=TCP SPT=42056 DPT=9102 SEQ=2465560541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B87F2CF0000000001030307) Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.480 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.506 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.507 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.507 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.527 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.528 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.528 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.529 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.529 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.529 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.550 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.551 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.551 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.551 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:44:57 localhost nova_compute[280939]: 2025-11-23 09:44:57.552 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.007 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.224 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.226 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12875MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.227 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.227 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.303 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.303 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.331 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:44:58 localhost sshd[283866]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.809 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:44:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:44:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.817 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.837 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.840 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:44:58 localhost nova_compute[280939]: 2025-11-23 09:44:58.841 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:44:58 localhost podman[283870]: 2025-11-23 09:44:58.900972449 +0000 UTC m=+0.077882761 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118) Nov 23 04:44:58 localhost systemd[1]: tmp-crun.wgeQu3.mount: Deactivated successfully. Nov 23 04:44:58 localhost podman[283869]: 2025-11-23 09:44:58.984484806 +0000 UTC m=+0.161406629 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0) Nov 23 04:44:58 localhost podman[283869]: 2025-11-23 09:44:58.999219996 +0000 UTC m=+0.176141829 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 04:44:59 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:44:59 localhost podman[283870]: 2025-11-23 09:44:59.009229645 +0000 UTC m=+0.186139877 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Nov 23 04:44:59 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
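The resource-tracker audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" (via oslo_concurrency.processutils) before reporting free_disk in the hypervisor resource view. A minimal standalone sketch of the same probe, assuming the ceph CLI and config paths shown in the log, and that the cluster-wide stats block (rather than nova's exact per-pool accounting, which this log does not show) is what you want to inspect:

    import json
    import subprocess

    # Same command the resource tracker logs above; assumes `ceph` is on PATH
    # and the openstack keyring/conf exist at the paths shown in the log.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    df = json.loads(out)
    stats = df["stats"]          # cluster-wide totals; per-pool data is under df["pools"]
    free_gib = stats["total_avail_bytes"] / 1024 ** 3
    total_gib = stats["total_bytes"] / 1024 ** 3
    print(f"ceph reports {free_gib:.1f} GiB free of {total_gib:.1f} GiB")

Run by hand, this should roughly track the free_disk figure (41.8 GB above) that the tracker derives for this RBD-backed node.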
Nov 23 04:44:59 localhost nova_compute[280939]: 2025-11-23 09:44:59.444 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:44:59 localhost nova_compute[280939]: 2025-11-23 09:44:59.446 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:44:59 localhost nova_compute[280939]: 2025-11-23 09:44:59.446 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:44:59 localhost nova_compute[280939]: 2025-11-23 09:44:59.446 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:45:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:45:01 localhost podman[283914]: 2025-11-23 09:45:01.899369877 +0000 UTC m=+0.080782468 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:45:01 localhost podman[283914]: 2025-11-23 09:45:01.936121747 +0000 UTC m=+0.117534298 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': 
True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:45:01 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:45:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53734 DF PROTO=TCP SPT=42056 DPT=9102 SEQ=2465560541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8814100000000001030307) Nov 23 04:45:06 localhost openstack_network_exporter[241732]: ERROR 09:45:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:45:06 localhost openstack_network_exporter[241732]: ERROR 09:45:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:45:06 localhost openstack_network_exporter[241732]: Nov 23 04:45:06 localhost openstack_network_exporter[241732]: ERROR 09:45:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:45:06 localhost openstack_network_exporter[241732]: ERROR 09:45:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:45:06 localhost openstack_network_exporter[241732]: ERROR 09:45:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:45:06 localhost openstack_network_exporter[241732]: Nov 23 04:45:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:45:09.725 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:45:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:45:09.726 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:45:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:45:09.726 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:45:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
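The openstack_network_exporter errors above ("no control socket files found for the ovs db server", "no control socket files found for ovn-northd") mean the exporter cannot find the daemons' unix control sockets in the run directories it mounts. A hedged check from the host, assuming the conventional *.ctl socket names and the /var/run/openvswitch and /var/lib/openvswitch/ovn paths that appear in the container volume lists above (the exporter's exact lookup logic is not shown in this log):

    import glob
    import os

    # Host-side directories that the exporter container maps to /run/openvswitch
    # and /run/ovn, per the config_data entries above.
    RUN_DIRS = ["/var/run/openvswitch", "/var/lib/openvswitch/ovn"]

    for run_dir in RUN_DIRS:
        ctl_files = sorted(glob.glob(os.path.join(run_dir, "*.ctl")))
        if ctl_files:
            print(f"{run_dir}: control sockets -> {', '.join(ctl_files)}")
        else:
            print(f"{run_dir}: no *.ctl control sockets (consistent with the errors above)")

The ovn-northd misses are expected on a compute-only node, since northd normally runs on the control plane rather than here.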
Nov 23 04:45:10 localhost podman[283937]: 2025-11-23 09:45:10.913724257 +0000 UTC m=+0.095839508 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, vcs-type=git) Nov 23 04:45:10 localhost podman[283937]: 2025-11-23 09:45:10.928589361 +0000 UTC m=+0.110704652 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Nov 23 04:45:10 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
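Each "Started /usr/bin/podman healthcheck run <container-id>" unit above is a transient systemd service that simply executes the container's configured healthcheck command ('test': '/openstack/healthcheck ...') and reports the result, which podman then logs as health_status=healthy. The same check can be run by hand; a sketch, using a container_name taken from the log and assuming the usual podman convention that exit code 0 means healthy:

    import subprocess

    # Any container_name from the log works, e.g. the network exporter.
    container = "openstack_network_exporter"

    result = subprocess.run(["podman", "healthcheck", "run", container])
    status = "healthy" if result.returncode == 0 else f"unhealthy (exit {result.returncode})"
    print(f"{container}: {status}")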
Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:45:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:45:17 localhost podman[239764]: time="2025-11-23T09:45:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:45:17 localhost 
podman[239764]: @ - - [23/Nov/2025:09:45:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:45:17 localhost podman[239764]: @ - - [23/Nov/2025:09:45:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16758 "" "Go-http-client/1.1" Nov 23 04:45:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:45:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:45:17 localhost podman[283956]: 2025-11-23 09:45:17.903418474 +0000 UTC m=+0.088180519 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118) Nov 23 04:45:17 localhost podman[283956]: 2025-11-23 09:45:17.915380255 +0000 UTC m=+0.100142340 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:45:17 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:45:17 localhost podman[283957]: 2025-11-23 09:45:17.967760982 +0000 UTC m=+0.146605474 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:45:17 localhost podman[283957]: 2025-11-23 09:45:17.99794613 +0000 UTC m=+0.176790612 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:45:18 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
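The "GET /v4.9.3/libpod/..." requests logged by podman[239764] above arrive over the podman API socket that podman_exporter is pointed at (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data). A sketch of issuing the same container-list call from Python over that socket, assuming the socket path from the config and sufficient privileges to read it; the Names/State keys used below are how the libpod list endpoint typically labels its output, so treat them as an assumption:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that speaks HTTP over a unix socket."""
        def __init__(self, path):
            super().__init__("localhost")   # host header only; ignored for AF_UNIX
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")   # path from CONTAINER_HOST above
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for c in json.loads(resp.read()):
        print(c.get("Names"), c.get("State"))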
Nov 23 04:45:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13230 DF PROTO=TCP SPT=56506 DPT=9102 SEQ=3355021789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B884C540000000001030307) Nov 23 04:45:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13231 DF PROTO=TCP SPT=56506 DPT=9102 SEQ=3355021789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B88504F0000000001030307) Nov 23 04:45:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53735 DF PROTO=TCP SPT=42056 DPT=9102 SEQ=2465560541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B88540F0000000001030307) Nov 23 04:45:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13232 DF PROTO=TCP SPT=56506 DPT=9102 SEQ=3355021789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B88584F0000000001030307) Nov 23 04:45:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49021 DF PROTO=TCP SPT=38702 DPT=9102 SEQ=1863427701 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B885C100000000001030307) Nov 23 04:45:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:45:24 localhost systemd[1]: tmp-crun.jgJsNK.mount: Deactivated successfully. 
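The kernel "DROPPING:" entries above are netfilter log lines (the prefix comes from the logging rule) with a stable key=value layout, here recording repeated SYNs from 192.168.122.10 to port 9102 being dropped on br-ex. A small parser sketch that pulls the dropped tuple out of one of those lines; the field names are exactly the ones that appear in the log:

    import re

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e "
            "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 "
            "TTL=62 ID=13234 DF PROTO=TCP SPT=56506 DPT=9102 SEQ=3355021789 ACK=0 "
            "WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8888100000000001030307)")

    # Tokens like DF, SYN and OPT (...) carry no '=' and are simply skipped here.
    fields = dict(re.findall(r"(\w+)=(\S*)", line))
    print(f"{fields['PROTO']} {fields['SRC']}:{fields['SPT']} -> "
          f"{fields['DST']}:{fields['DPT']} dropped on {fields['IN']}")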
Nov 23 04:45:24 localhost podman[283998]: 2025-11-23 09:45:24.917011196 +0000 UTC m=+0.093671790 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 23 04:45:24 localhost podman[283998]: 2025-11-23 09:45:24.921604088 +0000 UTC m=+0.098264682 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:45:24 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:45:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13233 DF PROTO=TCP SPT=56506 DPT=9102 SEQ=3355021789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8868100000000001030307) Nov 23 04:45:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:45:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:45:29 localhost podman[284018]: 2025-11-23 09:45:29.913217011 +0000 UTC m=+0.094811666 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:45:29 localhost podman[284018]: 2025-11-23 09:45:29.954792752 +0000 UTC m=+0.136387437 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118) Nov 23 04:45:29 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:45:30 localhost podman[284019]: 2025-11-23 09:45:29.957046292 +0000 UTC m=+0.134663763 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 23 04:45:30 localhost podman[284019]: 2025-11-23 09:45:30.040844665 +0000 UTC m=+0.218462106 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 23 04:45:30 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:45:32 localhost systemd[1]: tmp-crun.9RF71j.mount: Deactivated successfully. Nov 23 04:45:32 localhost podman[284080]: 2025-11-23 09:45:32.70632705 +0000 UTC m=+0.095370013 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:45:32 localhost podman[284080]: 2025-11-23 09:45:32.714035219 +0000 UTC m=+0.103078152 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', 
'--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:45:32 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:45:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13234 DF PROTO=TCP SPT=56506 DPT=9102 SEQ=3355021789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8888100000000001030307) Nov 23 04:45:36 localhost openstack_network_exporter[241732]: ERROR 09:45:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:45:36 localhost openstack_network_exporter[241732]: ERROR 09:45:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:45:36 localhost openstack_network_exporter[241732]: ERROR 09:45:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:45:36 localhost openstack_network_exporter[241732]: ERROR 09:45:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:45:36 localhost openstack_network_exporter[241732]: Nov 23 04:45:36 localhost openstack_network_exporter[241732]: ERROR 09:45:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:45:36 localhost openstack_network_exporter[241732]: Nov 23 04:45:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
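node_exporter above is published on host port 9100 ('ports': ['9100:9100']) with most collectors switched off and the systemd collector restricted to the edpm/ovs/virt/rsyslog units. A hedged way to see what it actually exposes, assuming it is reachable on localhost:9100 and serves the standard /metrics path (nothing in the config above changes that):

    import urllib.request

    # Port mapping taken from the node_exporter config_data above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        metrics = resp.read().decode()

    # node_systemd_unit_state comes from --collector.systemd, which the config enables.
    active = [l for l in metrics.splitlines()
              if l.startswith("node_systemd_unit_state") and 'state="active"' in l]
    print(f"{len(metrics.splitlines())} metric lines total; active units reported:")
    for l in active:
        print(" ", l)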
Nov 23 04:45:41 localhost podman[284172]: 2025-11-23 09:45:41.89615541 +0000 UTC m=+0.081319747 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Nov 23 04:45:41 localhost podman[284172]: 2025-11-23 09:45:41.935376078 +0000 UTC m=+0.120540405 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:45:41 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 04:45:47 localhost podman[239764]: time="2025-11-23T09:45:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:45:47 localhost podman[239764]: @ - - [23/Nov/2025:09:45:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:45:47 localhost podman[239764]: @ - - [23/Nov/2025:09:45:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16751 "" "Go-http-client/1.1" Nov 23 04:45:47 localhost sshd[284192]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:45:47 localhost systemd-logind[760]: New session 61 of user zuul. Nov 23 04:45:47 localhost systemd[1]: Started Session 61 of User zuul. Nov 23 04:45:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:45:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:45:48 localhost podman[284195]: 2025-11-23 09:45:48.055073417 +0000 UTC m=+0.111019060 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Nov 23 04:45:48 localhost podman[284195]: 2025-11-23 09:45:48.066014136 +0000 UTC m=+0.121959820 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, 
org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Nov 23 04:45:48 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:45:48 localhost podman[284231]: 2025-11-23 09:45:48.145086792 +0000 UTC m=+0.086555329 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:45:48 localhost podman[284231]: 2025-11-23 09:45:48.155700692 +0000 UTC m=+0.097169199 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:45:48 localhost systemd[1]: 
a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:45:48 localhost python3[284238]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:45:48 localhost subscription-manager[284256]: Unregistered machine with identity: 9009da62-c986-416e-b62f-4d365f106d60 Nov 23 04:45:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45745 DF PROTO=TCP SPT=35696 DPT=9102 SEQ=702317380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B88C1840000000001030307) Nov 23 04:45:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45746 DF PROTO=TCP SPT=35696 DPT=9102 SEQ=702317380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B88C58F0000000001030307) Nov 23 04:45:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13235 DF PROTO=TCP SPT=56506 DPT=9102 SEQ=3355021789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B88C80F0000000001030307) Nov 23 04:45:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45747 DF PROTO=TCP SPT=35696 DPT=9102 SEQ=702317380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B88CD8F0000000001030307) Nov 23 04:45:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53736 DF PROTO=TCP SPT=42056 DPT=9102 SEQ=2465560541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B88D20F0000000001030307) Nov 23 04:45:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
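The drops above repeat the same source port and TCP sequence number (SPT=35696, SEQ=702317380): the peer at 192.168.122.10 is retransmitting a SYN toward port 9102 that this host keeps discarding. A small sketch, reusing the field dictionaries produced by the parser sketched earlier, that flags such retransmissions; the sample values are copied from the DROPPING lines in this section:

from collections import Counter

def tcp_key(fields):
    # identify a connection attempt by 4-tuple plus TCP sequence number
    return (fields['SRC'], fields['SPT'], fields['DST'], fields['DPT'], fields['SEQ'])

# Sample events copied from the DROPPING lines above.
events = [
    {'SRC': '192.168.122.10', 'SPT': '35696', 'DST': '192.168.122.106', 'DPT': '9102', 'SEQ': '702317380'},
    {'SRC': '192.168.122.10', 'SPT': '35696', 'DST': '192.168.122.106', 'DPT': '9102', 'SEQ': '702317380'},
    {'SRC': '192.168.122.10', 'SPT': '56506', 'DST': '192.168.122.106', 'DPT': '9102', 'SEQ': '3355021789'},
]

for key, count in Counter(tcp_key(e) for e in events).items():
    if count > 1:
        print('retransmitted SYN', key, 'seen', count, 'times')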
Nov 23 04:45:55 localhost podman[284258]: 2025-11-23 09:45:55.899230325 +0000 UTC m=+0.086135707 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:45:55 localhost podman[284258]: 2025-11-23 09:45:55.904372215 +0000 UTC m=+0.091277617 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:45:55 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:45:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45748 DF PROTO=TCP SPT=35696 DPT=9102 SEQ=702317380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B88DD4F0000000001030307) Nov 23 04:45:57 localhost nova_compute[280939]: 2025-11-23 09:45:57.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.154 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.156 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.156 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.156 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.176 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.176 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.177 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.177 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.177 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.635 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.868 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.871 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12864MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.871 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.872 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.941 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.941 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:45:58 localhost nova_compute[280939]: 2025-11-23 09:45:58.956 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:45:59 localhost nova_compute[280939]: 2025-11-23 09:45:59.398 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:45:59 localhost nova_compute[280939]: 2025-11-23 09:45:59.405 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:45:59 localhost nova_compute[280939]: 2025-11-23 09:45:59.425 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:45:59 localhost nova_compute[280939]: 2025-11-23 09:45:59.428 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:45:59 localhost nova_compute[280939]: 2025-11-23 09:45:59.428 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:46:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:46:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
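The inventory dictionary logged just above is what the resource tracker reports to Placement; the schedulable capacity per resource class works out to (total - reserved) * allocation_ratio. A quick check of that arithmetic with the exact values from this report:

# Sketch: capacity implied by the inventory data logged above.
inventory = {
    'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
    'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 41,    'reserved': 0,   'allocation_ratio': 1.0},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {capacity:g}")
# VCPU: 128
# MEMORY_MB: 15226
# DISK_GB: 41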
Nov 23 04:46:00 localhost podman[284321]: 2025-11-23 09:46:00.313354621 +0000 UTC m=+0.087121627 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:46:00 localhost systemd[1]: tmp-crun.wWEl8J.mount: Deactivated successfully. 
Nov 23 04:46:00 localhost podman[284322]: 2025-11-23 09:46:00.374289604 +0000 UTC m=+0.144165129 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller) Nov 23 04:46:00 localhost podman[284321]: 2025-11-23 09:46:00.39994943 +0000 UTC m=+0.173716486 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:46:00 localhost nova_compute[280939]: 2025-11-23 09:46:00.405 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:46:00 localhost nova_compute[280939]: 2025-11-23 09:46:00.406 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:46:00 localhost nova_compute[280939]: 2025-11-23 09:46:00.406 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:46:00 localhost nova_compute[280939]: 2025-11-23 09:46:00.407 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:46:00 localhost nova_compute[280939]: 2025-11-23 09:46:00.407 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:46:00 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:46:00 localhost podman[284322]: 2025-11-23 09:46:00.443037448 +0000 UTC m=+0.212912983 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:46:00 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:46:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
Nov 23 04:46:02 localhost podman[284364]: 2025-11-23 09:46:02.889035107 +0000 UTC m=+0.074941078 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:46:02 localhost podman[284364]: 2025-11-23 09:46:02.897156699 +0000 UTC m=+0.083062660 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:46:02 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
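The node_exporter command above enables the systemd collector with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service (the doubled backslash is only Python repr escaping inside config_data). The collector anchors that pattern, so it effectively behaves as a full match over unit names; a sketch of which units would be scraped, with the unit names below chosen purely for illustration:

import re

# Regex taken verbatim from the node_exporter flags above (un-escaped).
unit_include = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')

units = [
    'edpm_nova_compute.service',   # illustrative name, not taken from this log
    'ovs-vswitchd.service',
    'openvswitch.service',
    'virtqemud.service',
    'rsyslog.service',
    'sshd.service',                # not matched, so not exported
]
for unit in units:
    print(unit, '->', bool(unit_include.fullmatch(unit)))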
Nov 23 04:46:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45749 DF PROTO=TCP SPT=35696 DPT=9102 SEQ=702317380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B88FE0F0000000001030307) Nov 23 04:46:06 localhost openstack_network_exporter[241732]: ERROR 09:46:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:46:06 localhost openstack_network_exporter[241732]: ERROR 09:46:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:46:06 localhost openstack_network_exporter[241732]: ERROR 09:46:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:46:06 localhost openstack_network_exporter[241732]: ERROR 09:46:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:46:06 localhost openstack_network_exporter[241732]: Nov 23 04:46:06 localhost openstack_network_exporter[241732]: ERROR 09:46:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:46:06 localhost openstack_network_exporter[241732]: Nov 23 04:46:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:46:09.727 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:46:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:46:09.727 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:46:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:46:09.727 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:46:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:46:12 localhost podman[284386]: 2025-11-23 09:46:12.899227558 +0000 UTC m=+0.085799427 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container) Nov 23 04:46:12 localhost podman[284386]: 2025-11-23 09:46:12.940586741 +0000 UTC m=+0.127158630 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:46:12 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:46:17 localhost podman[239764]: time="2025-11-23T09:46:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:46:17 localhost podman[239764]: @ - - [23/Nov/2025:09:46:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:46:17 localhost podman[239764]: @ - - [23/Nov/2025:09:46:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16761 "" "Go-http-client/1.1" Nov 23 04:46:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:46:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:46:18 localhost podman[284406]: 2025-11-23 09:46:18.895868013 +0000 UTC m=+0.080923354 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:46:18 localhost podman[284406]: 2025-11-23 09:46:18.90572612 +0000 UTC m=+0.090781431 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3) Nov 23 04:46:18 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
Nov 23 04:46:18 localhost podman[284407]: 2025-11-23 09:46:18.953266256 +0000 UTC m=+0.133219479 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:46:18 localhost podman[284407]: 2025-11-23 09:46:18.965337611 +0000 UTC m=+0.145290814 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:46:18 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
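podman_exporter reaches the API through CONTAINER_HOST=unix:///run/podman/podman.sock, and the podman service access log earlier in this section shows the matching request path (GET /v4.9.3/libpod/containers/json?...). A minimal sketch of issuing that same call by hand over the socket; it only prints the HTTP status line and deliberately does not decode the chunked response body:

import socket

SOCKET_PATH = '/run/podman/podman.sock'        # from CONTAINER_HOST above
REQUEST = (
    'GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n'
    'Host: d\r\n'                              # arbitrary host header for a unix-socket API
    'Connection: close\r\n'
    '\r\n'
)

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCKET_PATH)
    sock.sendall(REQUEST.encode())
    response = b''
    while chunk := sock.recv(65536):
        response += chunk

print(response.split(b'\r\n', 1)[0].decode())  # e.g. HTTP/1.1 200 OK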
Nov 23 04:46:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28741 DF PROTO=TCP SPT=45566 DPT=9102 SEQ=1543796693 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8936B40000000001030307) Nov 23 04:46:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28742 DF PROTO=TCP SPT=45566 DPT=9102 SEQ=1543796693 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B893ACF0000000001030307) Nov 23 04:46:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45750 DF PROTO=TCP SPT=35696 DPT=9102 SEQ=702317380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B893E0F0000000001030307) Nov 23 04:46:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28743 DF PROTO=TCP SPT=45566 DPT=9102 SEQ=1543796693 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B8942CF0000000001030307) Nov 23 04:46:22 localhost sshd[284562]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:46:22 localhost systemd-logind[760]: New session 62 of user tripleo-admin. Nov 23 04:46:22 localhost systemd[1]: Created slice User Slice of UID 1003. Nov 23 04:46:23 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Nov 23 04:46:23 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Nov 23 04:46:23 localhost systemd[1]: Starting User Manager for UID 1003... Nov 23 04:46:23 localhost systemd[284566]: Queued start job for default target Main User Target. Nov 23 04:46:23 localhost systemd[284566]: Created slice User Application Slice. Nov 23 04:46:23 localhost systemd[284566]: Started Mark boot as successful after the user session has run 2 minutes. Nov 23 04:46:23 localhost systemd[284566]: Started Daily Cleanup of User's Temporary Directories. Nov 23 04:46:23 localhost systemd[284566]: Reached target Paths. Nov 23 04:46:23 localhost systemd[284566]: Reached target Timers. Nov 23 04:46:23 localhost systemd[284566]: Starting D-Bus User Message Bus Socket... Nov 23 04:46:23 localhost systemd[284566]: Starting Create User's Volatile Files and Directories... Nov 23 04:46:23 localhost systemd[284566]: Listening on D-Bus User Message Bus Socket. Nov 23 04:46:23 localhost systemd[284566]: Reached target Sockets. Nov 23 04:46:23 localhost systemd[284566]: Finished Create User's Volatile Files and Directories. Nov 23 04:46:23 localhost systemd[284566]: Reached target Basic System. Nov 23 04:46:23 localhost systemd[284566]: Reached target Main User Target. Nov 23 04:46:23 localhost systemd[284566]: Startup finished in 140ms. Nov 23 04:46:23 localhost systemd[1]: Started User Manager for UID 1003. Nov 23 04:46:23 localhost systemd[1]: Started Session 62 of User tripleo-admin. 
Nov 23 04:46:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:ed:8d:9e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13236 DF PROTO=TCP SPT=56506 DPT=9102 SEQ=3355021789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A6B89460F0000000001030307) Nov 23 04:46:24 localhost python3[284710]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)#012add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard (8443)#012add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana (3100)#012add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus (9092)#012add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw (8080)#012add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon (6789, 3300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds (6800-7300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr (6800-7300, 8444)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs (2049, 12049)#012add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"#012# 123 ceph_dashboard (9090, 9094, 9283)#012add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:46:24 localhost systemd-journald[47422]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 80.5 (268 of 333 items), suggesting rotation. Nov 23 04:46:24 localhost systemd-journald[47422]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 23 04:46:24 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:46:24 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 04:46:25 localhost python3[284855]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 23 04:46:25 localhost systemd[1]: Stopping Netfilter Tables... Nov 23 04:46:25 localhost systemd[1]: nftables.service: Deactivated successfully. Nov 23 04:46:25 localhost systemd[1]: Stopped Netfilter Tables. Nov 23 04:46:25 localhost systemd[1]: Starting Netfilter Tables... Nov 23 04:46:25 localhost systemd[1]: Finished Netfilter Tables. 
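Decoded for readability (journald renders embedded newlines as #012), the block that the ansible-ansible.builtin.blockinfile task above writes between the "# BEGIN ceph firewall rules" and "# END ceph firewall rules" markers in /etc/nftables/edpm-rules.nft corresponds to the following nftables rules; this is reconstructed directly from the escaped block= value and whitespace is approximate:
# 100 ceph_alertmanager (9093)
add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"
# 100 ceph_dashboard (8443)
add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"
# 100 ceph_grafana (3100)
add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"
# 100 ceph_prometheus (9092)
add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"
# 100 ceph_rgw (8080)
add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"
# 110 ceph_mon (6789, 3300, 9100)
add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"
# 112 ceph_mds (6800-7300, 9100)
add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"
# 113 ceph_mgr (6800-7300, 8444)
add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"
# 120 ceph_nfs (2049, 12049)
add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"
# 123 ceph_dashboard (9090, 9094, 9283)
add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"
The nftables service restart that follows in the same entries reloads /etc/nftables/edpm-rules.nft so these rules take effect.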
Nov 23 04:46:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:46:26 localhost podman[284879]: 2025-11-23 09:46:26.904916407 +0000 UTC m=+0.088047383 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:46:26 localhost podman[284879]: 2025-11-23 09:46:26.916407791 +0000 UTC m=+0.099538747 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118) Nov 23 04:46:26 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:46:30 localhost podman[284898]: 2025-11-23 09:46:30.909847903 +0000 UTC m=+0.091831590 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true) Nov 23 04:46:30 localhost systemd[1]: tmp-crun.wIY2eU.mount: Deactivated successfully. 
Nov 23 04:46:30 localhost podman[284897]: 2025-11-23 09:46:30.966301552 +0000 UTC m=+0.149755875 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd) Nov 23 04:46:30 localhost podman[284898]: 2025-11-23 09:46:30.977390994 +0000 UTC m=+0.159375061 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Nov 23 04:46:30 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:46:31 localhost podman[284897]: 2025-11-23 09:46:31.005421647 +0000 UTC m=+0.188875960 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 04:46:31 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:46:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
Nov 23 04:46:33 localhost podman[284960]: 2025-11-23 09:46:33.543917268 +0000 UTC m=+0.076151127 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:46:33 localhost podman[284960]: 2025-11-23 09:46:33.580430423 +0000 UTC m=+0.112664232 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:46:33 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 04:46:36 localhost openstack_network_exporter[241732]: ERROR 09:46:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:46:36 localhost openstack_network_exporter[241732]: ERROR 09:46:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:46:36 localhost openstack_network_exporter[241732]: ERROR 09:46:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:46:36 localhost openstack_network_exporter[241732]: ERROR 09:46:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:46:36 localhost openstack_network_exporter[241732]: Nov 23 04:46:36 localhost openstack_network_exporter[241732]: ERROR 09:46:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:46:36 localhost openstack_network_exporter[241732]: Nov 23 04:46:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:46:43 localhost systemd[1]: tmp-crun.1r1RRv.mount: Deactivated successfully. Nov 23 04:46:43 localhost podman[285141]: 2025-11-23 09:46:43.904455007 +0000 UTC m=+0.090571031 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41) Nov 23 04:46:43 localhost podman[285141]: 2025-11-23 09:46:43.916713925 +0000 UTC m=+0.102829979 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:46:43 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:46:46 localhost podman[285243]: Nov 23 04:46:46 localhost podman[285243]: 2025-11-23 09:46:46.242779421 +0000 UTC m=+0.075309191 container create 336f9f8f6290e0528ea06fb42daecfa43b00adb6de755919dac3fafd4a9da1d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_northcutt, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True) Nov 23 04:46:46 localhost systemd[1]: Started libpod-conmon-336f9f8f6290e0528ea06fb42daecfa43b00adb6de755919dac3fafd4a9da1d8.scope. Nov 23 04:46:46 localhost systemd[1]: Started libcrun container. 
Nov 23 04:46:46 localhost podman[285243]: 2025-11-23 09:46:46.211324923 +0000 UTC m=+0.043854713 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:46:46 localhost podman[285243]: 2025-11-23 09:46:46.316375968 +0000 UTC m=+0.148905748 container init 336f9f8f6290e0528ea06fb42daecfa43b00adb6de755919dac3fafd4a9da1d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_northcutt, maintainer=Guillaume Abrioux , release=553, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, RELEASE=main, version=7, ceph=True, GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True) Nov 23 04:46:46 localhost podman[285243]: 2025-11-23 09:46:46.326978385 +0000 UTC m=+0.159508185 container start 336f9f8f6290e0528ea06fb42daecfa43b00adb6de755919dac3fafd4a9da1d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_northcutt, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, build-date=2025-09-24T08:57:55, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, distribution-scope=public, io.openshift.tags=rhceph ceph) Nov 23 04:46:46 localhost podman[285243]: 2025-11-23 09:46:46.327345556 +0000 UTC m=+0.159875336 container attach 336f9f8f6290e0528ea06fb42daecfa43b00adb6de755919dac3fafd4a9da1d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_northcutt, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux ) Nov 23 04:46:46 localhost elated_northcutt[285258]: 167 167 Nov 23 04:46:46 localhost systemd[1]: libpod-336f9f8f6290e0528ea06fb42daecfa43b00adb6de755919dac3fafd4a9da1d8.scope: Deactivated successfully. Nov 23 04:46:46 localhost podman[285243]: 2025-11-23 09:46:46.332398382 +0000 UTC m=+0.164928162 container died 336f9f8f6290e0528ea06fb42daecfa43b00adb6de755919dac3fafd4a9da1d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_northcutt, name=rhceph, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=) Nov 23 04:46:46 localhost podman[285263]: 2025-11-23 09:46:46.436018084 +0000 UTC m=+0.088089634 container remove 336f9f8f6290e0528ea06fb42daecfa43b00adb6de755919dac3fafd4a9da1d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_northcutt, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, name=rhceph) Nov 23 04:46:46 localhost systemd[1]: libpod-conmon-336f9f8f6290e0528ea06fb42daecfa43b00adb6de755919dac3fafd4a9da1d8.scope: Deactivated successfully. Nov 23 04:46:46 localhost systemd[1]: Reloading. Nov 23 04:46:46 localhost systemd-rc-local-generator[285301]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 23 04:46:46 localhost systemd-sysv-generator[285308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:46:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:46:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:46 localhost systemd[1]: var-lib-containers-storage-overlay-fff3131573de4aa200a3c07d08e2e4fb48c2ce1bf910d1f5ef31dd1c12f144d3-merged.mount: Deactivated successfully. Nov 23 04:46:46 localhost systemd[1]: Reloading. Nov 23 04:46:47 localhost systemd-rc-local-generator[285346]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:46:47 localhost systemd-sysv-generator[285352]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:46:47 localhost podman[239764]: time="2025-11-23T09:46:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:46:47 localhost podman[239764]: @ - - [23/Nov/2025:09:46:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146334 "" "Go-http-client/1.1" Nov 23 04:46:47 localhost podman[239764]: @ - - [23/Nov/2025:09:46:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16758 "" "Go-http-client/1.1" Nov 23 04:46:47 localhost systemd[1]: Starting dnf makecache... Nov 23 04:46:47 localhost systemd[1]: Starting Ceph mds.mds.np0005532584.aoxjmw for 46550e70-79cb-5f55-bf6d-1204b97e083b... Nov 23 04:46:47 localhost dnf[285359]: Updating Subscription Management repositories. Nov 23 04:46:47 localhost dnf[285359]: Unable to read consumer identity Nov 23 04:46:47 localhost dnf[285359]: This system is not registered with an entitlement server. You can use subscription-manager to register. Nov 23 04:46:47 localhost dnf[285359]: Metadata cache refreshed recently. Nov 23 04:46:47 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Nov 23 04:46:47 localhost systemd[1]: Finished dnf makecache. Nov 23 04:46:47 localhost podman[285412]: Nov 23 04:46:47 localhost podman[285412]: 2025-11-23 09:46:47.638889769 +0000 UTC m=+0.080669505 container create a984491b564579e767f7568bcfa0c59e7beff17b07b223e39262168924bc8c25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mds-mds-np0005532584-aoxjmw, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, distribution-scope=public, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, release=553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph) Nov 23 04:46:47 localhost systemd[1]: tmp-crun.JbkAn7.mount: Deactivated successfully. 
Nov 23 04:46:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d0c6779efd9bb794c9637b345b7864439002ab247a00fcb40feafcf63be4b05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 04:46:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d0c6779efd9bb794c9637b345b7864439002ab247a00fcb40feafcf63be4b05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 04:46:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d0c6779efd9bb794c9637b345b7864439002ab247a00fcb40feafcf63be4b05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 04:46:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d0c6779efd9bb794c9637b345b7864439002ab247a00fcb40feafcf63be4b05/merged/var/lib/ceph/mds/ceph-mds.np0005532584.aoxjmw supports timestamps until 2038 (0x7fffffff) Nov 23 04:46:47 localhost podman[285412]: 2025-11-23 09:46:47.705219763 +0000 UTC m=+0.146999499 container init a984491b564579e767f7568bcfa0c59e7beff17b07b223e39262168924bc8c25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mds-mds-np0005532584-aoxjmw, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, release=553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55) Nov 23 04:46:47 localhost podman[285412]: 2025-11-23 09:46:47.607200243 +0000 UTC m=+0.048980019 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:46:47 localhost podman[285412]: 2025-11-23 09:46:47.713340333 +0000 UTC m=+0.155120059 container start a984491b564579e767f7568bcfa0c59e7beff17b07b223e39262168924bc8c25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mds-mds-np0005532584-aoxjmw, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-type=git, maintainer=Guillaume Abrioux , 
distribution-scope=public, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True) Nov 23 04:46:47 localhost bash[285412]: a984491b564579e767f7568bcfa0c59e7beff17b07b223e39262168924bc8c25 Nov 23 04:46:47 localhost systemd[1]: Started Ceph mds.mds.np0005532584.aoxjmw for 46550e70-79cb-5f55-bf6d-1204b97e083b. Nov 23 04:46:47 localhost ceph-mds[285431]: set uid:gid to 167:167 (ceph:ceph) Nov 23 04:46:47 localhost ceph-mds[285431]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2 Nov 23 04:46:47 localhost ceph-mds[285431]: main not setting numa affinity Nov 23 04:46:47 localhost ceph-mds[285431]: pidfile_write: ignore empty --pid-file Nov 23 04:46:47 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mds-mds-np0005532584-aoxjmw[285427]: starting mds.mds.np0005532584.aoxjmw at Nov 23 04:46:47 localhost ceph-mds[285431]: mds.mds.np0005532584.aoxjmw Updating MDS map to version 8 from mon.0 Nov 23 04:46:48 localhost ceph-mds[285431]: mds.mds.np0005532584.aoxjmw Updating MDS map to version 9 from mon.0 Nov 23 04:46:48 localhost ceph-mds[285431]: mds.mds.np0005532584.aoxjmw Monitors have assigned me to become a standby. Nov 23 04:46:48 localhost systemd-logind[760]: Session 61 logged out. Waiting for processes to exit. Nov 23 04:46:48 localhost systemd[1]: session-61.scope: Deactivated successfully. Nov 23 04:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:46:48 localhost systemd-logind[760]: Removed session 61. Nov 23 04:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:46:49 localhost systemd[1]: tmp-crun.Ecnrjn.mount: Deactivated successfully. 
Nov 23 04:46:49 localhost podman[285533]: 2025-11-23 09:46:49.128061355 +0000 UTC m=+0.131666037 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 04:46:49 localhost podman[285533]: 2025-11-23 09:46:49.138109035 +0000 UTC m=+0.141713737 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 04:46:49 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:46:49 localhost podman[285540]: 2025-11-23 09:46:49.196120132 +0000 UTC m=+0.194847803 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:46:49 localhost podman[285540]: 2025-11-23 09:46:49.234390081 +0000 UTC m=+0.233117782 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:46:49 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:46:49 localhost systemd[1]: tmp-crun.61EBCo.mount: Deactivated successfully. 
Nov 23 04:46:49 localhost podman[285620]: 2025-11-23 09:46:49.416166801 +0000 UTC m=+0.098133394 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=553, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12) Nov 23 04:46:49 localhost podman[285620]: 2025-11-23 09:46:49.547546818 +0000 UTC m=+0.229513421 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=553, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, version=7, GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public) Nov 23 04:46:57 localhost nova_compute[280939]: 2025-11-23 09:46:57.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:46:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:46:57 localhost systemd[1]: tmp-crun.49lGKW.mount: Deactivated successfully. 
Nov 23 04:46:57 localhost podman[285742]: 2025-11-23 09:46:57.896997242 +0000 UTC m=+0.083083790 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Nov 23 04:46:57 localhost podman[285742]: 2025-11-23 09:46:57.928169232 +0000 UTC m=+0.114255780 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Nov 23 04:46:57 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:46:59 localhost nova_compute[280939]: 2025-11-23 09:46:59.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:46:59 localhost nova_compute[280939]: 2025-11-23 09:46:59.142 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:46:59 localhost nova_compute[280939]: 2025-11-23 09:46:59.143 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.151 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.152 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.152 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.153 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.181 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.182 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.182 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.182 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.183 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.633 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.851 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.852 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12829MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.853 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.853 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.926 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.927 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:47:00 localhost nova_compute[280939]: 2025-11-23 09:47:00.953 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:47:01 localhost nova_compute[280939]: 2025-11-23 09:47:01.408 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:47:01 localhost nova_compute[280939]: 2025-11-23 09:47:01.414 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:47:01 localhost nova_compute[280939]: 2025-11-23 09:47:01.430 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:47:01 localhost nova_compute[280939]: 2025-11-23 09:47:01.433 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:47:01 localhost nova_compute[280939]: 2025-11-23 09:47:01.433 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:47:01 localhost systemd[1]: tmp-crun.ZEEAFf.mount: Deactivated successfully. 
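
The block above is one pass of nova-compute's update_available_resource periodic task: it shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` through oslo_concurrency.processutils (twice in this cycle), builds the hypervisor resource view (8 vCPUs, 15738 MB RAM, 41 GB disk, no usable PCI NUMA affinity), and finds the placement inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 unchanged. A hedged sketch of running the same ceph command by hand and summarizing the result follows; the JSON keys used (stats.total_bytes, stats.total_avail_bytes, pools[].stats.bytes_used) are the usual layout in recent Ceph releases and are an assumption, not something shown in this log.

    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack"):
        """Run the same command nova-compute logs above and return the parsed JSON."""
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        df = ceph_df()
        stats = df.get("stats", {})          # cluster-wide totals (key layout assumed)
        gib = 1024 ** 3
        print("total GiB:", stats.get("total_bytes", 0) / gib)
        print("avail GiB:", stats.get("total_avail_bytes", 0) / gib)
        for pool in df.get("pools", []):     # per-pool usage (key layout assumed)
            print(pool.get("name"), pool.get("stats", {}).get("bytes_used"))
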
Nov 23 04:47:01 localhost podman[285805]: 2025-11-23 09:47:01.932317253 +0000 UTC m=+0.076370264 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2) Nov 23 04:47:01 localhost podman[285805]: 2025-11-23 09:47:01.946291724 +0000 UTC m=+0.090344735 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:47:01 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:47:02 localhost podman[285806]: 2025-11-23 09:47:02.040385602 +0000 UTC m=+0.182263546 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 04:47:02 localhost podman[285806]: 2025-11-23 09:47:02.078353942 +0000 UTC m=+0.220231836 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 04:47:02 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
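
The multipathd and ovn_controller entries above repeat the health-check pattern used for every EDPM container on this node: systemd starts a transient unit named after the container ID that runs `/usr/bin/podman healthcheck run <id>`, podman records a health_status=healthy event and then exec_died for the check process, and the unit deactivates until the next timer tick. The sketch below runs the same check by hand for the containers named in these entries; treating exit code 0 as healthy is podman's usual convention and is stated here as an assumption rather than read from this log.

    import subprocess
    import sys

    def check(container):
        """Run `podman healthcheck run` for one container and report the result."""
        proc = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        # Assumption: exit 0 == healthy, non-zero == unhealthy or no healthcheck defined.
        verdict = "healthy" if proc.returncode == 0 else "unhealthy"
        print(f"{container}: {verdict} (rc={proc.returncode}) {proc.stdout.strip()}")
        return proc.returncode

    if __name__ == "__main__":
        # Container names taken from the log entries above.
        rc = 0
        for name in ("multipathd", "ovn_controller", "ovn_metadata_agent"):
            rc |= check(name)
        sys.exit(1 if rc else 0)
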
Nov 23 04:47:02 localhost sshd[285848]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:47:02 localhost nova_compute[280939]: 2025-11-23 09:47:02.414 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:02 localhost nova_compute[280939]: 2025-11-23 09:47:02.414 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:02 localhost nova_compute[280939]: 2025-11-23 09:47:02.415 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:47:03 localhost podman[285850]: 2025-11-23 09:47:03.892690674 +0000 UTC m=+0.077315082 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:47:03 localhost podman[285850]: 2025-11-23 09:47:03.902070053 +0000 UTC m=+0.086694481 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', 
'--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:47:03 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:47:06 localhost openstack_network_exporter[241732]: ERROR 09:47:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:47:06 localhost openstack_network_exporter[241732]: ERROR 09:47:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:47:06 localhost openstack_network_exporter[241732]: ERROR 09:47:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:47:06 localhost openstack_network_exporter[241732]: ERROR 09:47:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:47:06 localhost openstack_network_exporter[241732]: Nov 23 04:47:06 localhost openstack_network_exporter[241732]: ERROR 09:47:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:47:06 localhost openstack_network_exporter[241732]: Nov 23 04:47:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:47:09.728 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:47:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:47:09.728 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:47:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:47:09.729 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.574 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, 
no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:47:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:47:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:47:14 localhost systemd[1]: tmp-crun.feduzi.mount: Deactivated successfully. 
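
The ceilometer_agent_compute burst above is a single polling cycle in which every libvirt-backed pollster (cpu, memory.usage, and the disk.device.* and network.* meters) is skipped with "no resources found this cycle", which is consistent with the resource tracker reporting zero allocated vCPUs earlier: there are no instances to poll. A small sketch for tallying which meters are being skipped, and how often, from a journal export in the format shown here (the file argument is a placeholder).

    import re
    import sys
    from collections import Counter

    SKIP_RE = re.compile(r"Skip pollster ([\w.]+), no resources found this cycle")

    def skipped_meters(path):
        """Return a Counter of meter name -> number of skipped polling cycles."""
        counts = Counter()
        with open(path, errors="replace") as fh:
            for line in fh:
                for meter in SKIP_RE.findall(line):
                    counts[meter] += 1
        return counts

    if __name__ == "__main__":
        # Usage: python skipped_meters.py journal.txt
        for meter, n in skipped_meters(sys.argv[1]).most_common():
            print(f"{n:6d}  {meter}")
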
Nov 23 04:47:14 localhost podman[285874]: 2025-11-23 09:47:14.908014151 +0000 UTC m=+0.094993067 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Nov 23 04:47:14 localhost podman[285874]: 2025-11-23 09:47:14.918638597 +0000 UTC m=+0.105617473 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 04:47:14 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:47:17 localhost podman[239764]: time="2025-11-23T09:47:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:47:17 localhost podman[239764]: @ - - [23/Nov/2025:09:47:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148412 "" "Go-http-client/1.1" Nov 23 04:47:17 localhost podman[239764]: @ - - [23/Nov/2025:09:47:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17236 "" "Go-http-client/1.1" Nov 23 04:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
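
The podman[239764] entries above are the Podman REST API service answering the podman_exporter: a full container listing (GET /v4.9.3/libpod/containers/json with all=true) followed by a stats call, both returning 200. A minimal sketch of issuing the same listing request from Python over the socket that the exporter's own config points at (CONTAINER_HOST=unix:///run/podman/podman.sock); the response fields printed (Names, State) are assumptions about the libpod JSON layout rather than something shown in this log.

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that speaks HTTP over an AF_UNIX socket."""

        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    if __name__ == "__main__":
        conn = UnixHTTPConnection("/run/podman/podman.sock")
        # Same endpoint and API version prefix as the GET logged above.
        conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
        resp = conn.getresponse()
        body = resp.read()
        print(resp.status, len(body), "bytes")
        for c in json.loads(body):
            print(c.get("Names"), c.get("State"))
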
Nov 23 04:47:19 localhost podman[285895]: 2025-11-23 09:47:19.908699502 +0000 UTC m=+0.094843512 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 04:47:19 localhost podman[285896]: 2025-11-23 09:47:19.964247443 +0000 UTC m=+0.147046120 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:47:19 localhost podman[285895]: 2025-11-23 09:47:19.994033921 +0000 UTC m=+0.180177981 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 
'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Nov 23 04:47:20 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:47:20 localhost podman[285896]: 2025-11-23 09:47:20.051660746 +0000 UTC m=+0.234459413 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:47:20 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:47:25 localhost systemd[1]: session-62.scope: Deactivated successfully. Nov 23 04:47:25 localhost systemd[1]: session-62.scope: Consumed 1.264s CPU time. Nov 23 04:47:25 localhost systemd-logind[760]: Session 62 logged out. Waiting for processes to exit. Nov 23 04:47:25 localhost systemd-logind[760]: Removed session 62. Nov 23 04:47:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:47:28 localhost systemd[1]: tmp-crun.b8Zrmj.mount: Deactivated successfully. 
Nov 23 04:47:28 localhost podman[285956]: 2025-11-23 09:47:28.902075443 +0000 UTC m=+0.088099870 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:47:28 localhost podman[285956]: 2025-11-23 09:47:28.932753547 +0000 UTC m=+0.118777974 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent) Nov 23 04:47:28 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:47:32 localhost podman[285974]: 2025-11-23 09:47:32.897554792 +0000 UTC m=+0.084046146 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:47:32 localhost podman[285974]: 2025-11-23 09:47:32.907663323 +0000 UTC m=+0.094154667 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 23 04:47:32 localhost systemd[1]: tmp-crun.1TMY1u.mount: Deactivated successfully. Nov 23 04:47:32 localhost podman[285975]: 2025-11-23 09:47:32.95503998 +0000 UTC m=+0.139538302 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:47:32 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 04:47:32 localhost podman[285975]: 2025-11-23 09:47:32.991343847 +0000 UTC m=+0.175842189 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller) Nov 23 04:47:33 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:47:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:47:34 localhost podman[286018]: 2025-11-23 09:47:34.892570823 +0000 UTC m=+0.079478285 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:47:34 localhost podman[286018]: 2025-11-23 09:47:34.904432849 +0000 UTC m=+0.091340301 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:47:34 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:47:35 localhost systemd[1]: Stopping User Manager for UID 1003... Nov 23 04:47:35 localhost systemd[284566]: Activating special unit Exit the Session... Nov 23 04:47:35 localhost systemd[284566]: Stopped target Main User Target. Nov 23 04:47:35 localhost systemd[284566]: Stopped target Basic System. Nov 23 04:47:35 localhost systemd[284566]: Stopped target Paths. Nov 23 04:47:35 localhost systemd[284566]: Stopped target Sockets. Nov 23 04:47:35 localhost systemd[284566]: Stopped target Timers. Nov 23 04:47:35 localhost systemd[284566]: Stopped Mark boot as successful after the user session has run 2 minutes. Nov 23 04:47:35 localhost systemd[284566]: Stopped Daily Cleanup of User's Temporary Directories. Nov 23 04:47:35 localhost systemd[284566]: Closed D-Bus User Message Bus Socket. Nov 23 04:47:35 localhost systemd[284566]: Stopped Create User's Volatile Files and Directories. Nov 23 04:47:35 localhost systemd[284566]: Removed slice User Application Slice. Nov 23 04:47:35 localhost systemd[284566]: Reached target Shutdown. Nov 23 04:47:35 localhost systemd[284566]: Finished Exit the Session. Nov 23 04:47:35 localhost systemd[284566]: Reached target Exit the Session. Nov 23 04:47:35 localhost systemd[1]: user@1003.service: Deactivated successfully. Nov 23 04:47:35 localhost systemd[1]: Stopped User Manager for UID 1003. Nov 23 04:47:35 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Nov 23 04:47:35 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Nov 23 04:47:35 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Nov 23 04:47:35 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Nov 23 04:47:35 localhost systemd[1]: Removed slice User Slice of UID 1003. Nov 23 04:47:35 localhost systemd[1]: user-1003.slice: Consumed 1.661s CPU time. 
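Every "Started /usr/bin/podman healthcheck run <id>" line above is a transient systemd unit firing the container's configured healthcheck (for node_exporter, the '/openstack/healthcheck node_exporter' test in its config_data). The same check can be run by hand; a small sketch, assuming access to the same podman instance and relying on podman's exit code (0 when the check passes):

import subprocess

def healthcheck(container):
    # Runs the container's own configured healthcheck command; podman exits 0
    # when the check reports healthy and non-zero otherwise.
    result = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True, text=True,
    )
    return result.returncode == 0

print("node_exporter healthy:", healthcheck("node_exporter"))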
Nov 23 04:47:36 localhost openstack_network_exporter[241732]: ERROR 09:47:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:47:36 localhost openstack_network_exporter[241732]: ERROR 09:47:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:47:36 localhost openstack_network_exporter[241732]: ERROR 09:47:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:47:36 localhost openstack_network_exporter[241732]: ERROR 09:47:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:47:36 localhost openstack_network_exporter[241732]: Nov 23 04:47:36 localhost openstack_network_exporter[241732]: ERROR 09:47:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:47:36 localhost openstack_network_exporter[241732]: Nov 23 04:47:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:47:45 localhost podman[286165]: 2025-11-23 09:47:45.895727103 +0000 UTC m=+0.080504747 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_id=edpm, release=1755695350) Nov 23 04:47:45 localhost podman[286165]: 2025-11-23 09:47:45.908762524 +0000 UTC m=+0.093540168 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers) Nov 23 04:47:45 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:47:47 localhost podman[239764]: time="2025-11-23T09:47:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:47:47 localhost podman[239764]: @ - - [23/Nov/2025:09:47:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148412 "" "Go-http-client/1.1" Nov 23 04:47:47 localhost podman[239764]: @ - - [23/Nov/2025:09:47:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17246 "" "Go-http-client/1.1" Nov 23 04:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
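The two GET lines above are requests against podman's libpod REST API; the podman_exporter config_data shows the socket in use, unix:///run/podman/podman.sock. A sketch of issuing the same containers/json query from the host, shelling out to curl --unix-socket so no extra Python HTTP-over-socket dependency is assumed; the "Names"/"State" fields are read defensively rather than taken as guaranteed:

import json
import subprocess

SOCKET = "/run/podman/podman.sock"
# Path copied from the access-log entry above; the "d" host part is a
# placeholder, curl only uses the unix socket.
URL = "http://d/v4.9.3/libpod/containers/json?all=true"

raw = subprocess.run(
    ["curl", "-sS", "--unix-socket", SOCKET, URL],
    check=True, capture_output=True, text=True,
).stdout

for container in json.loads(raw):
    print(container.get("Names"), container.get("State"))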
Nov 23 04:47:50 localhost podman[286183]: 2025-11-23 09:47:50.898426027 +0000 UTC m=+0.083196370 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 23 04:47:50 localhost podman[286183]: 2025-11-23 09:47:50.909431806 +0000 UTC m=+0.094202169 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0) Nov 23 04:47:50 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:47:51 localhost podman[286184]: 2025-11-23 09:47:50.999771734 +0000 UTC m=+0.180879385 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:47:51 localhost podman[286184]: 2025-11-23 09:47:51.012403232 +0000 UTC m=+0.193510933 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:47:51 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:47:56 localhost nova_compute[280939]: 2025-11-23 09:47:56.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:56 localhost nova_compute[280939]: 2025-11-23 09:47:56.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 23 04:47:56 localhost nova_compute[280939]: 2025-11-23 09:47:56.151 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 23 04:47:56 localhost nova_compute[280939]: 2025-11-23 09:47:56.152 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:56 localhost nova_compute[280939]: 2025-11-23 09:47:56.152 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 23 04:47:56 localhost nova_compute[280939]: 2025-11-23 09:47:56.164 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:58 localhost nova_compute[280939]: 2025-11-23 09:47:58.173 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:47:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:47:59 localhost systemd[1]: tmp-crun.pGJFyi.mount: Deactivated successfully. 
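The "Running periodic task ComputeManager._*" lines above come from oslo.service's periodic-task machinery. A minimal sketch of that pattern, assuming the oslo.config and oslo.service packages are installed; the task body, spacing and run_immediately flag here are illustrative, not nova's real settings:

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class TinyManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _run_pending_deletes(self, context):
        # Stand-in for the cleanup work nova logs as
        # "Cleaning up deleted instances".
        print("cleaning up deleted instances")

manager = TinyManager()
# Nova's service loop invokes this on a timer; each call runs whichever
# registered tasks are due, producing the "Running periodic task ..." lines.
manager.run_periodic_tasks(context=None)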
Nov 23 04:47:59 localhost podman[286227]: 2025-11-23 09:47:59.896151565 +0000 UTC m=+0.086285115 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:47:59 localhost podman[286227]: 2025-11-23 09:47:59.930492331 +0000 UTC m=+0.120625931 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:47:59 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:48:00 localhost nova_compute[280939]: 2025-11-23 09:48:00.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:48:00 localhost nova_compute[280939]: 2025-11-23 09:48:00.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:48:00 localhost nova_compute[280939]: 2025-11-23 09:48:00.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:48:00 localhost nova_compute[280939]: 2025-11-23 09:48:00.154 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:48:00 localhost nova_compute[280939]: 2025-11-23 09:48:00.154 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:48:01 localhost nova_compute[280939]: 2025-11-23 09:48:01.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:48:01 localhost nova_compute[280939]: 2025-11-23 09:48:01.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:48:01 localhost nova_compute[280939]: 2025-11-23 09:48:01.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:48:01 localhost nova_compute[280939]: 2025-11-23 09:48:01.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.134 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.158 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.159 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.159 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.160 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.160 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.628 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.826 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.827 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12852MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.827 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.828 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.949 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.950 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:48:02 localhost nova_compute[280939]: 2025-11-23 09:48:02.998 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 04:48:03 localhost nova_compute[280939]: 2025-11-23 09:48:03.051 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 04:48:03 localhost nova_compute[280939]: 2025-11-23 09:48:03.052 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:48:03 localhost nova_compute[280939]: 2025-11-23 09:48:03.070 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 04:48:03 localhost nova_compute[280939]: 2025-11-23 09:48:03.092 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: 
COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 23 04:48:03 localhost nova_compute[280939]: 2025-11-23 09:48:03.117 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:48:03 localhost nova_compute[280939]: 2025-11-23 09:48:03.590 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:48:03 localhost nova_compute[280939]: 2025-11-23 09:48:03.595 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:48:03 localhost nova_compute[280939]: 2025-11-23 09:48:03.615 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:48:03 localhost nova_compute[280939]: 2025-11-23 09:48:03.617 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for 
np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:48:03 localhost nova_compute[280939]: 2025-11-23 09:48:03.617 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:48:03 localhost podman[286290]: 2025-11-23 09:48:03.898095993 +0000 UTC m=+0.085071106 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 23 04:48:03 localhost podman[286289]: 2025-11-23 09:48:03.983571542 +0000 UTC m=+0.172662741 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 04:48:04 localhost podman[286290]: 2025-11-23 09:48:04.016459084 +0000 UTC m=+0.203434157 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:48:04 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
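The ProviderTree inventory logged at 09:48:03 pins down what placement will let the scheduler use: for each resource class the capacity is (total - reserved) * allocation_ratio. Checking that with the exact values from the log:

# Inventory copied verbatim from the resource-tracker entries above.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# VCPU: 128, MEMORY_MB: 15226, DISK_GB: 41 -- which is why the final resource
# view reports 8 usable vCPUs while the VCPU allocation ratio allows 16x that.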
Nov 23 04:48:04 localhost podman[286289]: 2025-11-23 09:48:04.073887699 +0000 UTC m=+0.262978878 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Nov 23 04:48:04 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:48:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
Nov 23 04:48:05 localhost podman[286332]: 2025-11-23 09:48:05.893103613 +0000 UTC m=+0.079642860 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:48:05 localhost podman[286332]: 2025-11-23 09:48:05.899731798 +0000 UTC m=+0.086271045 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:48:05 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
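The resource audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" twice, each call taking roughly 0.47 s. Reproducing it is a plain subprocess run; the sketch below needs the same ceph.conf and openstack keyring as the nova_compute container and prints the top-level sections rather than assuming exact field names:

import json
import subprocess

cmd = [
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
report = json.loads(out)

# Typically a cluster-wide stats block plus a per-pool list; print whatever
# this Ceph release actually returns instead of hard-coding key names.
for section, payload in report.items():
    print(section, "->", type(payload).__name__)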
Nov 23 04:48:06 localhost openstack_network_exporter[241732]: ERROR 09:48:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:48:06 localhost openstack_network_exporter[241732]: ERROR 09:48:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:48:06 localhost openstack_network_exporter[241732]: ERROR 09:48:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:48:06 localhost openstack_network_exporter[241732]: ERROR 09:48:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:48:06 localhost openstack_network_exporter[241732]: Nov 23 04:48:06 localhost openstack_network_exporter[241732]: ERROR 09:48:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:48:06 localhost openstack_network_exporter[241732]: Nov 23 04:48:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:48:09.729 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:48:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:48:09.729 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:48:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:48:09.729 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:48:11 localhost podman[286489]: Nov 23 04:48:11 localhost podman[286489]: 2025-11-23 09:48:11.657103379 +0000 UTC m=+0.081604090 container create 299bcafc95ce3ad11159082665d6894c3e94beb5161ed72219dd007dddd4b976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_herschel, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, version=7, architecture=x86_64, distribution-scope=public, release=553, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55) Nov 23 04:48:11 localhost systemd[1]: Started libpod-conmon-299bcafc95ce3ad11159082665d6894c3e94beb5161ed72219dd007dddd4b976.scope. Nov 23 04:48:11 localhost systemd[1]: Started libcrun container. 
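The openstack_network_exporter errors above report that no control socket files were found for ovn-northd and for the ovs db server, i.e. the directories it watches contain no matching sockets for those daemons on this node. A sketch of checking the same directories for unixctl sockets; the paths are the ones the exporter mounts per its config_data, and the "*.ctl" naming is an assumption about how the daemons name their control sockets:

import glob
import os

# Host-side directories mounted into the exporter (see its config_data above).
RUN_DIRS = {
    "openvswitch": "/var/run/openvswitch",
    "ovn": "/var/lib/openvswitch/ovn",
}

for name, path in RUN_DIRS.items():
    sockets = glob.glob(os.path.join(path, "*.ctl"))
    print(f"{name}: {sockets if sockets else 'no control sockets found'}")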
Nov 23 04:48:11 localhost podman[286489]: 2025-11-23 09:48:11.621736431 +0000 UTC m=+0.046237192 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:48:11 localhost podman[286489]: 2025-11-23 09:48:11.731259729 +0000 UTC m=+0.155760440 container init 299bcafc95ce3ad11159082665d6894c3e94beb5161ed72219dd007dddd4b976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_herschel, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, release=553, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container) Nov 23 04:48:11 localhost podman[286489]: 2025-11-23 09:48:11.740519084 +0000 UTC m=+0.165019795 container start 299bcafc95ce3ad11159082665d6894c3e94beb5161ed72219dd007dddd4b976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_herschel, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_BRANCH=main, release=553, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, version=7, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:48:11 localhost podman[286489]: 2025-11-23 09:48:11.740756532 +0000 UTC m=+0.165257243 container attach 299bcafc95ce3ad11159082665d6894c3e94beb5161ed72219dd007dddd4b976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_herschel, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.expose-services=, release=553, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True) Nov 23 04:48:11 localhost inspiring_herschel[286504]: 167 167 Nov 23 04:48:11 localhost systemd[1]: libpod-299bcafc95ce3ad11159082665d6894c3e94beb5161ed72219dd007dddd4b976.scope: Deactivated successfully. Nov 23 04:48:11 localhost podman[286489]: 2025-11-23 09:48:11.74461611 +0000 UTC m=+0.169116831 container died 299bcafc95ce3ad11159082665d6894c3e94beb5161ed72219dd007dddd4b976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_herschel, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, version=7, release=553, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:48:11 localhost podman[286509]: 2025-11-23 09:48:11.842081217 +0000 UTC m=+0.084708925 container remove 299bcafc95ce3ad11159082665d6894c3e94beb5161ed72219dd007dddd4b976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_herschel, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, RELEASE=main, version=7, release=553, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:48:11 localhost systemd[1]: libpod-conmon-299bcafc95ce3ad11159082665d6894c3e94beb5161ed72219dd007dddd4b976.scope: Deactivated successfully. Nov 23 04:48:11 localhost systemd[1]: Reloading. Nov 23 04:48:12 localhost systemd-rc-local-generator[286551]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 23 04:48:12 localhost systemd-sysv-generator[286554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: var-lib-containers-storage-overlay-20be04d971960a8b9fd3df6f6c73a918365b2d7417ac7ed86a7ff94d3072adef-merged.mount: Deactivated successfully. Nov 23 04:48:12 localhost systemd[1]: Reloading. Nov 23 04:48:12 localhost systemd-sysv-generator[286597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:48:12 localhost systemd-rc-local-generator[286593]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:12 localhost systemd[1]: Starting Ceph mgr.np0005532584.naxwxy for 46550e70-79cb-5f55-bf6d-1204b97e083b... Nov 23 04:48:12 localhost podman[286654]: Nov 23 04:48:12 localhost podman[286654]: 2025-11-23 09:48:12.96402629 +0000 UTC m=+0.080364052 container create 14930fe36f37d5ce4aa24ba9b1b0a698ed712475eb78061ffbb53b7e9240fa15 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy, name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2025-09-24T08:57:55, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, release=553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:48:12 localhost systemd[1]: tmp-crun.y7ETUW.mount: Deactivated successfully. 
Nov 23 04:48:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b787e5c58652df86ae7860bdac2991906a3f1d14655818b4aaababf94b2b7634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b787e5c58652df86ae7860bdac2991906a3f1d14655818b4aaababf94b2b7634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b787e5c58652df86ae7860bdac2991906a3f1d14655818b4aaababf94b2b7634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b787e5c58652df86ae7860bdac2991906a3f1d14655818b4aaababf94b2b7634/merged/var/lib/ceph/mgr/ceph-np0005532584.naxwxy supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:13 localhost podman[286654]: 2025-11-23 09:48:12.932737837 +0000 UTC m=+0.049075619 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:48:13 localhost podman[286654]: 2025-11-23 09:48:13.032032551 +0000 UTC m=+0.148370333 container init 14930fe36f37d5ce4aa24ba9b1b0a698ed712475eb78061ffbb53b7e9240fa15 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, version=7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-09-24T08:57:55, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:48:13 localhost podman[286654]: 2025-11-23 09:48:13.040025427 +0000 UTC m=+0.156363139 container start 14930fe36f37d5ce4aa24ba9b1b0a698ed712475eb78061ffbb53b7e9240fa15 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy, io.buildah.version=1.33.12, RELEASE=main, version=7, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, 
maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, release=553, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:48:13 localhost bash[286654]: 14930fe36f37d5ce4aa24ba9b1b0a698ed712475eb78061ffbb53b7e9240fa15 Nov 23 04:48:13 localhost systemd[1]: Started Ceph mgr.np0005532584.naxwxy for 46550e70-79cb-5f55-bf6d-1204b97e083b. Nov 23 04:48:13 localhost ceph-mgr[286671]: set uid:gid to 167:167 (ceph:ceph) Nov 23 04:48:13 localhost ceph-mgr[286671]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Nov 23 04:48:13 localhost ceph-mgr[286671]: pidfile_write: ignore empty --pid-file Nov 23 04:48:13 localhost ceph-mgr[286671]: mgr[py] Loading python module 'alerts' Nov 23 04:48:13 localhost ceph-mgr[286671]: mgr[py] Module alerts has missing NOTIFY_TYPES member Nov 23 04:48:13 localhost ceph-mgr[286671]: mgr[py] Loading python module 'balancer' Nov 23 04:48:13 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:13.206+0000 7ffa4f7a7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Nov 23 04:48:13 localhost ceph-mgr[286671]: mgr[py] Module balancer has missing NOTIFY_TYPES member Nov 23 04:48:13 localhost ceph-mgr[286671]: mgr[py] Loading python module 'cephadm' Nov 23 04:48:13 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:13.270+0000 7ffa4f7a7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Nov 23 04:48:13 localhost ceph-mgr[286671]: mgr[py] Loading python module 'crash' Nov 23 04:48:13 localhost ceph-mgr[286671]: mgr[py] Module crash has missing NOTIFY_TYPES member Nov 23 04:48:13 localhost ceph-mgr[286671]: mgr[py] Loading python module 'dashboard' Nov 23 04:48:13 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:13.881+0000 7ffa4f7a7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Nov 23 04:48:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'devicehealth' Nov 23 04:48:14 localhost ceph-mgr[286671]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Nov 23 04:48:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'diskprediction_local' Nov 23 04:48:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:14.407+0000 7ffa4f7a7140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Nov 23 04:48:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Nov 23 04:48:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
Nov 23 04:48:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: from numpy import show_config as show_numpy_config Nov 23 04:48:14 localhost ceph-mgr[286671]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Nov 23 04:48:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'influx' Nov 23 04:48:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:14.535+0000 7ffa4f7a7140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Nov 23 04:48:14 localhost ceph-mgr[286671]: mgr[py] Module influx has missing NOTIFY_TYPES member Nov 23 04:48:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'insights' Nov 23 04:48:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:14.592+0000 7ffa4f7a7140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Nov 23 04:48:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'iostat' Nov 23 04:48:14 localhost ceph-mgr[286671]: mgr[py] Module iostat has missing NOTIFY_TYPES member Nov 23 04:48:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'k8sevents' Nov 23 04:48:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:14.702+0000 7ffa4f7a7140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'localpool' Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'mds_autoscaler' Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'mirroring' Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'nfs' Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Module nfs has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:15.443+0000 7ffa4f7a7140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'orchestrator' Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'osd_perf_query' Nov 23 04:48:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:15.588+0000 7ffa4f7a7140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'osd_support' Nov 23 04:48:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:15.655+0000 7ffa4f7a7140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'pg_autoscaler' Nov 23 04:48:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:15.710+0000 7ffa4f7a7140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'progress' Nov 23 04:48:15 localhost 
ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:15.775+0000 7ffa4f7a7140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Module progress has missing NOTIFY_TYPES member Nov 23 04:48:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'prometheus' Nov 23 04:48:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:15.834+0000 7ffa4f7a7140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Nov 23 04:48:16 localhost ceph-mgr[286671]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Nov 23 04:48:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'rbd_support' Nov 23 04:48:16 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:16.129+0000 7ffa4f7a7140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Nov 23 04:48:16 localhost ceph-mgr[286671]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Nov 23 04:48:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'restful' Nov 23 04:48:16 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:16.209+0000 7ffa4f7a7140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Nov 23 04:48:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'rgw' Nov 23 04:48:16 localhost ceph-mgr[286671]: mgr[py] Module rgw has missing NOTIFY_TYPES member Nov 23 04:48:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'rook' Nov 23 04:48:16 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:16.538+0000 7ffa4f7a7140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Nov 23 04:48:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:48:16 localhost systemd[1]: tmp-crun.cQXSMU.mount: Deactivated successfully. Nov 23 04:48:16 localhost podman[286701]: 2025-11-23 09:48:16.90752448 +0000 UTC m=+0.091669340 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 04:48:16 localhost podman[286701]: 2025-11-23 09:48:16.923415179 +0000 UTC m=+0.107560049 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal 
is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, version=9.6) Nov 23 04:48:16 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:48:16 localhost ceph-mgr[286671]: mgr[py] Module rook has missing NOTIFY_TYPES member Nov 23 04:48:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'selftest' Nov 23 04:48:16 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:16.979+0000 7ffa4f7a7140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Module selftest has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'snap_schedule' Nov 23 04:48:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:17.039+0000 7ffa4f7a7140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost podman[239764]: time="2025-11-23T09:48:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'stats' Nov 23 04:48:17 localhost podman[239764]: @ - - [23/Nov/2025:09:48:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150478 "" "Go-http-client/1.1" Nov 23 04:48:17 localhost podman[239764]: @ - - [23/Nov/2025:09:48:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17731 "" "Go-http-client/1.1" Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'status' Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Module status has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'telegraf' Nov 23 04:48:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:17.235+0000 7ffa4f7a7140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'telemetry' Nov 23 04:48:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:17.295+0000 7ffa4f7a7140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: 
mgr[py] Module telemetry has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'test_orchestrator' Nov 23 04:48:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:17.424+0000 7ffa4f7a7140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'volumes' Nov 23 04:48:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:17.568+0000 7ffa4f7a7140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Module volumes has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'zabbix' Nov 23 04:48:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:17.751+0000 7ffa4f7a7140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:48:17.808+0000 7ffa4f7a7140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Nov 23 04:48:17 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55cf28e191e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 23 04:48:17 localhost ceph-mgr[286671]: client.0 ms_handle_reset on v2:172.18.0.103:6800/1698075890 Nov 23 04:48:19 localhost podman[286848]: 2025-11-23 09:48:19.394454659 +0000 UTC m=+0.088940106 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, RELEASE=main, build-date=2025-09-24T08:57:55, release=553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, version=7, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 23 04:48:19 localhost podman[286848]: 2025-11-23 09:48:19.520515366 +0000 UTC m=+0.215000823 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:48:21 localhost podman[286984]: 2025-11-23 09:48:21.170802735 +0000 UTC m=+0.086839262 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 04:48:21 localhost podman[286984]: 2025-11-23 09:48:21.183545637 +0000 UTC 
m=+0.099582144 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, config_id=edpm) Nov 23 04:48:21 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:48:21 localhost systemd[1]: tmp-crun.pjs7wk.mount: Deactivated successfully. 
Nov 23 04:48:21 localhost podman[286985]: 2025-11-23 09:48:21.276675021 +0000 UTC m=+0.191494561 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:48:21 localhost podman[286985]: 2025-11-23 09:48:21.288379621 +0000 UTC m=+0.203199161 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:48:21 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:48:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:48:30 localhost systemd[1]: tmp-crun.5EnxhQ.mount: Deactivated successfully. 
Nov 23 04:48:30 localhost podman[287717]: 2025-11-23 09:48:30.320008047 +0000 UTC m=+0.087483333 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118) Nov 23 04:48:30 localhost podman[287717]: 2025-11-23 09:48:30.350482825 +0000 UTC m=+0.117958081 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:48:30 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:48:34 localhost systemd[1]: tmp-crun.bUSF6J.mount: Deactivated successfully. Nov 23 04:48:34 localhost podman[287735]: 2025-11-23 09:48:34.881319376 +0000 UTC m=+0.076010480 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:48:34 localhost systemd[1]: tmp-crun.QAdCmg.mount: Deactivated successfully. 
Nov 23 04:48:34 localhost podman[287735]: 2025-11-23 09:48:34.898735272 +0000 UTC m=+0.093426336 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true) Nov 23 04:48:34 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 04:48:34 localhost podman[287736]: 2025-11-23 09:48:34.902071905 +0000 UTC m=+0.088995390 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 04:48:34 localhost podman[287736]: 2025-11-23 09:48:34.990461075 +0000 UTC m=+0.177384520 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 04:48:35 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:48:35 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55cf28e191e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 23 04:48:36 localhost openstack_network_exporter[241732]: ERROR 09:48:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:48:36 localhost openstack_network_exporter[241732]: ERROR 09:48:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:48:36 localhost openstack_network_exporter[241732]: ERROR 09:48:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:48:36 localhost openstack_network_exporter[241732]: ERROR 09:48:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:48:36 localhost openstack_network_exporter[241732]: Nov 23 04:48:36 localhost openstack_network_exporter[241732]: ERROR 09:48:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:48:36 localhost openstack_network_exporter[241732]: Nov 23 04:48:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:48:36 localhost podman[287778]: 2025-11-23 09:48:36.919390195 +0000 UTC m=+0.098573565 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:48:36 localhost podman[287778]: 2025-11-23 09:48:36.954376201 +0000 UTC m=+0.133559541 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', 
'--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:48:36 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:48:39 localhost ceph-mds[285431]: mds.beacon.mds.np0005532584.aoxjmw missed beacon ack from the monitors Nov 23 04:48:41 localhost podman[287881]: Nov 23 04:48:41 localhost podman[287881]: 2025-11-23 09:48:41.35549189 +0000 UTC m=+0.076981250 container create 12be8f40282d4338f8027b59bc83363d30665b3e053b072177c24629bb1f7657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_allen, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 04:48:41 localhost systemd[1]: Started libpod-conmon-12be8f40282d4338f8027b59bc83363d30665b3e053b072177c24629bb1f7657.scope. Nov 23 04:48:41 localhost systemd[1]: Started libcrun container. 
Nov 23 04:48:41 localhost podman[287881]: 2025-11-23 09:48:41.323514226 +0000 UTC m=+0.045003636 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:48:41 localhost podman[287881]: 2025-11-23 09:48:41.424802163 +0000 UTC m=+0.146291523 container init 12be8f40282d4338f8027b59bc83363d30665b3e053b072177c24629bb1f7657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_allen, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux , version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, release=553) Nov 23 04:48:41 localhost podman[287881]: 2025-11-23 09:48:41.43445353 +0000 UTC m=+0.155942890 container start 12be8f40282d4338f8027b59bc83363d30665b3e053b072177c24629bb1f7657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_allen, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, release=553, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, distribution-scope=public, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7, vcs-type=git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:48:41 localhost podman[287881]: 2025-11-23 09:48:41.434713559 +0000 UTC m=+0.156203109 container attach 12be8f40282d4338f8027b59bc83363d30665b3e053b072177c24629bb1f7657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_allen, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64) Nov 23 04:48:41 localhost condescending_allen[287896]: 167 167 Nov 23 04:48:41 localhost systemd[1]: libpod-12be8f40282d4338f8027b59bc83363d30665b3e053b072177c24629bb1f7657.scope: Deactivated successfully. Nov 23 04:48:41 localhost podman[287881]: 2025-11-23 09:48:41.436871094 +0000 UTC m=+0.158360484 container died 12be8f40282d4338f8027b59bc83363d30665b3e053b072177c24629bb1f7657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_allen, GIT_BRANCH=main, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553, maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=) Nov 23 04:48:41 localhost podman[287901]: 2025-11-23 09:48:41.534830769 +0000 UTC m=+0.085708238 container remove 12be8f40282d4338f8027b59bc83363d30665b3e053b072177c24629bb1f7657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_allen, GIT_BRANCH=main, release=553, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, RELEASE=main, ceph=True, io.buildah.version=1.33.12) Nov 23 04:48:41 localhost systemd[1]: libpod-conmon-12be8f40282d4338f8027b59bc83363d30665b3e053b072177c24629bb1f7657.scope: Deactivated successfully. 
Nov 23 04:48:41 localhost podman[287917]: Nov 23 04:48:41 localhost podman[287917]: 2025-11-23 09:48:41.652193242 +0000 UTC m=+0.079664443 container create ab50d44df8af30cf71c1e6f9830941098a9707ca4e8df1828d4ea1044d556dbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_nobel, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , version=7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:48:41 localhost systemd[1]: Started libpod-conmon-ab50d44df8af30cf71c1e6f9830941098a9707ca4e8df1828d4ea1044d556dbc.scope. Nov 23 04:48:41 localhost systemd[1]: Started libcrun container. Nov 23 04:48:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/008a9df6be262cf48bb982f9b0a6108efa8dca2f909b31256ec7d075796ef8b9/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/008a9df6be262cf48bb982f9b0a6108efa8dca2f909b31256ec7d075796ef8b9/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/008a9df6be262cf48bb982f9b0a6108efa8dca2f909b31256ec7d075796ef8b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/008a9df6be262cf48bb982f9b0a6108efa8dca2f909b31256ec7d075796ef8b9/merged/var/lib/ceph/mon/ceph-np0005532584 supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:41 localhost podman[287917]: 2025-11-23 09:48:41.713662134 +0000 UTC m=+0.141133335 container init ab50d44df8af30cf71c1e6f9830941098a9707ca4e8df1828d4ea1044d556dbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_nobel, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, name=rhceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=553, ceph=True, io.openshift.expose-services=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64) Nov 23 04:48:41 localhost podman[287917]: 2025-11-23 09:48:41.621243629 +0000 UTC m=+0.048714860 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:48:41 localhost podman[287917]: 2025-11-23 09:48:41.723830367 +0000 UTC m=+0.151301568 container start ab50d44df8af30cf71c1e6f9830941098a9707ca4e8df1828d4ea1044d556dbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_nobel, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, version=7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:48:41 localhost podman[287917]: 2025-11-23 09:48:41.724117596 +0000 UTC m=+0.151588837 container attach ab50d44df8af30cf71c1e6f9830941098a9707ca4e8df1828d4ea1044d556dbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_nobel, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.component=rhceph-container, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_CLEAN=True, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55) Nov 23 04:48:41 localhost systemd[1]: libpod-ab50d44df8af30cf71c1e6f9830941098a9707ca4e8df1828d4ea1044d556dbc.scope: Deactivated successfully. 
Nov 23 04:48:41 localhost podman[287917]: 2025-11-23 09:48:41.82108541 +0000 UTC m=+0.248556621 container died ab50d44df8af30cf71c1e6f9830941098a9707ca4e8df1828d4ea1044d556dbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_nobel, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, version=7, vendor=Red Hat, Inc., release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True) Nov 23 04:48:41 localhost podman[287958]: 2025-11-23 09:48:41.911811563 +0000 UTC m=+0.076504586 container remove ab50d44df8af30cf71c1e6f9830941098a9707ca4e8df1828d4ea1044d556dbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_nobel, release=553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:48:41 localhost systemd[1]: libpod-conmon-ab50d44df8af30cf71c1e6f9830941098a9707ca4e8df1828d4ea1044d556dbc.scope: Deactivated successfully. Nov 23 04:48:41 localhost systemd[1]: Reloading. Nov 23 04:48:42 localhost systemd-rc-local-generator[287999]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:48:42 localhost systemd-sysv-generator[288003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: var-lib-containers-storage-overlay-ad1709a1e30801e15ab0e56db8d66a169788ea8594f00364c4777ada7ebaf053-merged.mount: Deactivated successfully. Nov 23 04:48:42 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55cf28e18f20 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 23 04:48:42 localhost systemd[1]: Reloading. Nov 23 04:48:42 localhost systemd-rc-local-generator[288038]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:48:42 localhost systemd-sysv-generator[288043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:48:42 localhost systemd[1]: Starting Ceph mon.np0005532584 for 46550e70-79cb-5f55-bf6d-1204b97e083b... 
Nov 23 04:48:43 localhost podman[288099]: Nov 23 04:48:43 localhost podman[288099]: 2025-11-23 09:48:43.088741056 +0000 UTC m=+0.076277028 container create 60fd7b9d65da0c313a7bb835eee4a1d62693ac74467900ca0a7541c45edee99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532584, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=553, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:48:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c871173b10c2d0e6d34c830a38baae6078d36a488e0f62a546c298a13dd6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c871173b10c2d0e6d34c830a38baae6078d36a488e0f62a546c298a13dd6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c871173b10c2d0e6d34c830a38baae6078d36a488e0f62a546c298a13dd6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2c871173b10c2d0e6d34c830a38baae6078d36a488e0f62a546c298a13dd6f/merged/var/lib/ceph/mon/ceph-np0005532584 supports timestamps until 2038 (0x7fffffff) Nov 23 04:48:43 localhost podman[288099]: 2025-11-23 09:48:43.14441635 +0000 UTC m=+0.131952312 container init 60fd7b9d65da0c313a7bb835eee4a1d62693ac74467900ca0a7541c45edee99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532584, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12) Nov 23 04:48:43 localhost podman[288099]: 2025-11-23 09:48:43.152794828 +0000 UTC m=+0.140330790 container start 60fd7b9d65da0c313a7bb835eee4a1d62693ac74467900ca0a7541c45edee99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532584, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, release=553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container) Nov 23 04:48:43 localhost bash[288099]: 60fd7b9d65da0c313a7bb835eee4a1d62693ac74467900ca0a7541c45edee99d Nov 23 04:48:43 localhost podman[288099]: 2025-11-23 09:48:43.059790205 +0000 UTC m=+0.047326197 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:48:43 localhost systemd[1]: Started Ceph mon.np0005532584 for 46550e70-79cb-5f55-bf6d-1204b97e083b. 
Nov 23 04:48:43 localhost ceph-mon[288117]: set uid:gid to 167:167 (ceph:ceph) Nov 23 04:48:43 localhost ceph-mon[288117]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Nov 23 04:48:43 localhost ceph-mon[288117]: pidfile_write: ignore empty --pid-file Nov 23 04:48:43 localhost ceph-mon[288117]: load: jerasure load: lrc Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: RocksDB version: 7.9.2 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Git sha 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Compile date 2025-09-23 00:00:00 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: DB SUMMARY Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: DB Session ID: DTL4VO9AR2HPWFJ8W48F Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: CURRENT file: CURRENT Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: IDENTITY file: IDENTITY Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005532584/store.db dir, Total Num: 0, files: Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005532584/store.db: 000004.log size: 886 ; Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.error_if_exists: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.create_if_missing: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.paranoid_checks: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.flush_verify_memtable_count: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.env: 0x55aefadbf9e0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.fs: PosixFileSystem Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.info_log: 0x55aefca8ad20 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_file_opening_threads: 16 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.statistics: (nil) Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.use_fsync: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_log_file_size: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_manifest_file_size: 1073741824 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.log_file_time_to_roll: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.keep_log_file_num: 1000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.recycle_log_file_num: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.allow_fallocate: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.allow_mmap_reads: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.allow_mmap_writes: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.use_direct_reads: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.create_missing_column_families: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.db_log_dir: Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.wal_dir: Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.table_cache_numshardbits: 6 Nov 23 04:48:43 localhost ceph-mon[288117]: 
rocksdb: Options.WAL_ttl_seconds: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.WAL_size_limit_MB: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.manifest_preallocation_size: 4194304 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.is_fd_close_on_exec: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.advise_random_on_open: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.db_write_buffer_size: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.write_buffer_manager: 0x55aefca9b540 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.access_hint_on_compaction_start: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.random_access_max_buffer_size: 1048576 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.use_adaptive_mutex: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.rate_limiter: (nil) Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.wal_recovery_mode: 2 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.enable_thread_tracking: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.enable_pipelined_write: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.unordered_write: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.allow_concurrent_memtable_write: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.write_thread_max_yield_usec: 100 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.write_thread_slow_yield_usec: 3 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.row_cache: None Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.wal_filter: None Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.avoid_flush_during_recovery: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.allow_ingest_behind: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.two_write_queues: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.manual_wal_flush: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.wal_compression: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.atomic_flush: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.persist_stats_to_disk: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.write_dbid_to_manifest: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.log_readahead_size: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.file_checksum_gen_factory: Unknown Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.best_efforts_recovery: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.allow_data_in_errors: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.db_host_id: __hostname__ Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.enforce_single_del_contracts: true Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: 
Options.max_background_jobs: 2 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_background_compactions: -1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_subcompactions: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.avoid_flush_during_shutdown: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.delayed_write_rate : 16777216 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_total_wal_size: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.stats_dump_period_sec: 600 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.stats_persist_period_sec: 600 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.stats_history_buffer_size: 1048576 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_open_files: -1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bytes_per_sync: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.wal_bytes_per_sync: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.strict_bytes_per_sync: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_readahead_size: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_background_flushes: -1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Compression algorithms supported: Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: #011kZSTD supported: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: #011kXpressCompression supported: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: #011kBZip2Compression supported: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: #011kLZ4Compression supported: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: #011kZlibCompression supported: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: #011kLZ4HCCompression supported: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: #011kSnappyCompression supported: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Fast CRC32 supported: Supported on x86 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: DMutex implementation: pthread_mutex_t Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005532584/store.db/MANIFEST-000005 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.merge_operator: Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_filter: None Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_filter_factory: None Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.sst_partitioner_factory: None Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aefca8a980)#012 cache_index_and_filter_blocks: 
1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55aefca87350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.write_buffer_size: 33554432 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_write_buffer_number: 2 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compression: NoCompression Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bottommost_compression: Disabled Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.prefix_extractor: nullptr Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.num_levels: 7 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compression_opts.level: 32767 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: 
Options.compression_opts.use_zstd_dict_trainer: true Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compression_opts.enabled: false Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_bytes_for_level_base: 268435456 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.arena_block_size: 1048576 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.table_properties_collectors: Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.inplace_update_support: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.bloom_locality: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.max_successive_merges: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.force_consistency_checks: 1 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.ttl: 2592000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.enable_blob_files: false Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.min_blob_size: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.blob_file_size: 268435456 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005532584/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b789f4ce-9d2b-45a3-864d-0b3de17929ef Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891323211335, "job": 1, "event": "recovery_started", "wal_files": [4]} Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891323213874, "cf_name": "default", "job": 1, "event": 
"table_file_creation", "file_number": 8, "file_size": 2012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 898, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 776, "raw_average_value_size": 155, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891323, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b789f4ce-9d2b-45a3-864d-0b3de17929ef", "db_session_id": "DTL4VO9AR2HPWFJ8W48F", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891323214086, "job": 1, "event": "recovery_finished"} Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55aefcaaee00 Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: DB pointer 0x55aefcba4000 Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584 does not exist in monmap, will attempt to join an existing cluster Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:48:43 localhost ceph-mon[288117]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.96 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Sum 1/0 1.96 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aefca87350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Nov 23 04:48:43 localhost ceph-mon[288117]: using public_addr v2:172.18.0.106:0/0 -> [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] Nov 23 04:48:43 localhost ceph-mon[288117]: starting mon.np0005532584 rank -1 at public addrs [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] at bind addrs [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005532584 fsid 46550e70-79cb-5f55-bf6d-1204b97e083b Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584@-1(???) 
e0 preinit fsid 46550e70-79cb-5f55-bf6d-1204b97e083b Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584@-1(synchronizing) e5 sync_obtain_latest_monmap Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584@-1(synchronizing) e5 sync_obtain_latest_monmap obtained monmap e5 Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584@-1(synchronizing).mds e16 new map Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T08:00:26.486221+0000#012modified#0112025-11-23T09:47:19.846415+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01179#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26392}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26392 members: 26392#012[mds.mds.np0005532586.mfohsb{0:26392} state up:active seq 12 addr [v2:172.18.0.108:6808/2718449296,v1:172.18.0.108:6809/2718449296] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005532585.jcltnl{-1:17133} state up:standby seq 1 addr [v2:172.18.0.107:6808/563301557,v1:172.18.0.107:6809/563301557] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005532584.aoxjmw{-1:17139} state up:standby seq 1 addr [v2:172.18.0.106:6808/2261302276,v1:172.18.0.106:6809/2261302276] compat {c=[1],r=[1],i=[17ff]}] Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584@-1(synchronizing).osd e81 crush map has features 3314933000852226048, adjusting msgr requires Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584@-1(synchronizing).osd e81 crush map has features 288514051259236352, adjusting msgr requires Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584@-1(synchronizing).osd e81 crush map has features 288514051259236352, adjusting msgr requires Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584@-1(synchronizing).osd e81 crush map has features 288514051259236352, adjusting msgr requires Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth rm", "entity": "mds.mds.np0005532583.nwcrcp"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd='[{"prefix": "auth rm", "entity": "mds.mds.np0005532583.nwcrcp"}]': finished Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost 
ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Removing key for mds.mds.np0005532583.nwcrcp Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label mgr to host np0005532584.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label mgr to host np0005532585.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label mgr to host np0005532586.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Saving service mgr spec with placement label:mgr Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' 
entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Nov 23 04:48:43 localhost ceph-mon[288117]: Deploying daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Nov 23 04:48:43 localhost ceph-mon[288117]: Deploying daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label mon to host np0005532581.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Nov 23 04:48:43 localhost ceph-mon[288117]: Added label _admin to host np0005532581.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: Deploying daemon mgr.np0005532586.thmvqb on np0005532586.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label mon to host np0005532582.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: 
from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label _admin to host np0005532582.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label mon to host np0005532583.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label _admin to host np0005532583.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label mon to host np0005532584.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: Added label _admin to host np0005532584.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:48:43 localhost ceph-mon[288117]: Updating 
np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label mon to host np0005532585.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:48:43 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label _admin to host np0005532585.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:48:43 localhost ceph-mon[288117]: Added label mon to host np0005532586.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Added label _admin to host np0005532586.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:48:43 localhost ceph-mon[288117]: Saving service mon spec with placement label:mon Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:48:43 
localhost ceph-mon[288117]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:48:43 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: Deploying daemon mon.np0005532586 on np0005532586.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: Deploying daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532581 calling monitor election Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532583 calling monitor election Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532582 calling monitor election Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532586 calling monitor election Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532581 is new leader, mons np0005532581,np0005532583,np0005532582,np0005532586 in quorum (ranks 0,1,2,3) Nov 23 04:48:43 localhost ceph-mon[288117]: overall HEALTH_OK Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:43 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:48:43 localhost ceph-mon[288117]: Deploying daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:48:43 localhost ceph-mon[288117]: mon.np0005532584@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3 Nov 23 04:48:47 localhost podman[239764]: time="2025-11-23T09:48:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:48:47 localhost podman[239764]: @ - - [23/Nov/2025:09:48:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:48:47 localhost podman[239764]: @ - - [23/Nov/2025:09:48:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18213 "" "Go-http-client/1.1" Nov 23 04:48:47 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55cf28e19600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 23 04:48:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
Nov 23 04:48:47 localhost podman[288156]: 2025-11-23 09:48:47.891704072 +0000 UTC m=+0.075369849 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, version=9.6, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=) Nov 23 04:48:47 localhost podman[288156]: 2025-11-23 09:48:47.903611589 +0000 UTC m=+0.087277376 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.33.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 23 04:48:47 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:48:49 localhost ceph-mon[288117]: mon.np0005532584@-1(probing) e6 my rank is now 5 (was -1) Nov 23 04:48:49 localhost ceph-mon[288117]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:48:49 localhost ceph-mon[288117]: paxos.5).electionLogic(0) init, first boot, initializing epoch at 1 Nov 23 04:48:49 localhost ceph-mon[288117]: mon.np0005532584@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:48:51 localhost ceph-mds[285431]: mds.beacon.mds.np0005532584.aoxjmw missed beacon ack from the monitors Nov 23 04:48:51 localhost podman[288178]: 2025-11-23 09:48:51.893803209 +0000 UTC m=+0.076656270 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:48:51 localhost podman[288178]: 2025-11-23 09:48:51.930629152 +0000 UTC m=+0.113482193 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:48:51 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:48:51 localhost systemd[1]: tmp-crun.SyvTp9.mount: Deactivated successfully. 
Nov 23 04:48:51 localhost podman[288177]: 2025-11-23 09:48:51.961150882 +0000 UTC m=+0.146752777 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm) Nov 23 04:48:52 localhost podman[288177]: 2025-11-23 09:48:52.000499423 +0000 UTC m=+0.186101308 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Nov 23 04:48:52 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532584@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532584@5(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532584@5(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532581 calling monitor election Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532582 calling monitor election Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532583 calling monitor election Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532586 calling monitor election Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532585 calling monitor election Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532581 is new leader, mons np0005532581,np0005532583,np0005532582,np0005532586,np0005532585 in quorum (ranks 0,1,2,3,4) Nov 23 04:48:52 localhost ceph-mon[288117]: overall HEALTH_OK Nov 23 04:48:52 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:52 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532584@5(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:48:52 localhost ceph-mon[288117]: mgrc update_daemon_metadata mon.np0005532584 metadata {addrs=[v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005532584.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005532584.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532582 calling monitor election Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532581 calling monitor election Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532583 calling monitor election Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532586 calling monitor election Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532585 calling monitor election 
Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532584 calling monitor election Nov 23 04:48:52 localhost ceph-mon[288117]: mon.np0005532581 is new leader, mons np0005532581,np0005532583,np0005532582,np0005532586,np0005532585,np0005532584 in quorum (ranks 0,1,2,3,4,5) Nov 23 04:48:52 localhost ceph-mon[288117]: overall HEALTH_OK Nov 23 04:48:52 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:53 localhost ceph-mon[288117]: mon.np0005532584@5(peon) e6 handle_auth_request failed to assign global_id Nov 23 04:48:54 localhost podman[288345]: 2025-11-23 09:48:54.077002564 +0000 UTC m=+0.097910305 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7) Nov 23 04:48:54 localhost podman[288345]: 2025-11-23 09:48:54.207560992 +0000 UTC m=+0.228468673 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, vendor=Red Hat, Inc., name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-09-24T08:57:55, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:48:55 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:55 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:55 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' 
entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:55 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:55 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:55 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:55 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:48:56 localhost ceph-mon[288117]: Updating np0005532581.localdomain:/etc/ceph/ceph.conf Nov 23 04:48:56 localhost ceph-mon[288117]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf Nov 23 04:48:56 localhost ceph-mon[288117]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:48:56 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:48:56 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:48:56 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:48:57 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:48:57 localhost ceph-mon[288117]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:48:57 localhost ceph-mon[288117]: Updating np0005532581.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:48:57 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:48:57 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:48:57 localhost ceph-mon[288117]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 
172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:57 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:48:58 localhost ceph-mon[288117]: Reconfiguring mon.np0005532581 (monmap changed)... Nov 23 04:48:58 localhost ceph-mon[288117]: Reconfiguring daemon mon.np0005532581 on np0005532581.localdomain Nov 23 04:48:58 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:58 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:58 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532581.sxlgsx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:48:59 localhost nova_compute[280939]: 2025-11-23 09:48:59.618 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:48:59 localhost nova_compute[280939]: 2025-11-23 09:48:59.642 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:48:59 localhost ceph-mon[288117]: Reconfiguring mgr.np0005532581.sxlgsx (monmap changed)... Nov 23 04:48:59 localhost ceph-mon[288117]: Reconfiguring daemon mgr.np0005532581.sxlgsx on np0005532581.localdomain Nov 23 04:48:59 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:59 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:48:59 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532581.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:00 localhost nova_compute[280939]: 2025-11-23 09:49:00.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:49:00 localhost ceph-mon[288117]: mon.np0005532584@5(peon) e6 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 04:49:00 localhost ceph-mon[288117]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3125926817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 04:49:00 localhost ceph-mon[288117]: mon.np0005532584@5(peon) e6 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 04:49:00 localhost ceph-mon[288117]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3125926817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 04:49:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:49:00 localhost systemd[1]: tmp-crun.FmPJ3N.mount: Deactivated successfully. Nov 23 04:49:00 localhost podman[288875]: 2025-11-23 09:49:00.907229767 +0000 UTC m=+0.091485267 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:49:00 localhost podman[288875]: 2025-11-23 09:49:00.941557153 +0000 UTC m=+0.125812633 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:49:00 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:49:00 localhost ceph-mon[288117]: Reconfiguring crash.np0005532581 (monmap changed)... Nov 23 04:49:00 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532581 on np0005532581.localdomain Nov 23 04:49:00 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:00 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:00 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:01 localhost nova_compute[280939]: 2025-11-23 09:49:01.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:49:01 localhost nova_compute[280939]: 2025-11-23 09:49:01.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:49:01 localhost ceph-mon[288117]: Reconfiguring crash.np0005532582 (monmap changed)... 
Nov 23 04:49:01 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain
Nov 23 04:49:01 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx'
Nov 23 04:49:01 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx'
Nov 23 04:49:01 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.150 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.151 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.151 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.152 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.173 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.174 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:49:02 localhost nova_compute[280939]:
2025-11-23 09:49:02.174 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.174 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.175 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.642 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.848 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.850 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12370MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.850 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.851 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.938 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.938 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:49:02 localhost nova_compute[280939]: 2025-11-23 09:49:02.964 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:49:02 localhost ceph-mon[288117]: Reconfiguring mon.np0005532582 (monmap changed)... Nov 23 04:49:02 localhost ceph-mon[288117]: Reconfiguring daemon mon.np0005532582 on np0005532582.localdomain Nov 23 04:49:02 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:02 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:02 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532582.gilwrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:49:03 localhost ceph-mon[288117]: mon.np0005532584@5(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:49:03 localhost ceph-mon[288117]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1331658662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:49:03 localhost nova_compute[280939]: 2025-11-23 09:49:03.369 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:49:03 localhost nova_compute[280939]: 2025-11-23 09:49:03.375 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:49:03 localhost nova_compute[280939]: 2025-11-23 09:49:03.393 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:49:03 localhost nova_compute[280939]: 2025-11-23 09:49:03.395 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:49:03 localhost nova_compute[280939]: 2025-11-23 09:49:03.396 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:49:04 localhost ceph-mon[288117]: Reconfiguring mgr.np0005532582.gilwrz (monmap changed)... 
Nov 23 04:49:04 localhost ceph-mon[288117]: Reconfiguring daemon mgr.np0005532582.gilwrz on np0005532582.localdomain Nov 23 04:49:04 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:04 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:04 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:49:04 localhost nova_compute[280939]: 2025-11-23 09:49:04.392 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:49:04 localhost nova_compute[280939]: 2025-11-23 09:49:04.393 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:49:05 localhost ceph-mon[288117]: Reconfiguring mon.np0005532583 (monmap changed)... Nov 23 04:49:05 localhost ceph-mon[288117]: Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain Nov 23 04:49:05 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:05 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:05 localhost ceph-mon[288117]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)... Nov 23 04:49:05 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:49:05 localhost ceph-mon[288117]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain Nov 23 04:49:05 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
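The mon audit entries above show the mgr replaying "auth get-or-create" for each daemon it reconfigures. A sketch of issuing the same monitor command through the python-rados binding, assuming /etc/ceph/ceph.conf and a client.admin keyring are available on the node:

    import json
    import rados  # python3-rados; assumes the binding is installed

    # The cmd dict is copied verbatim from the audit line above.
    cmd = {
        "prefix": "auth get-or-create",
        "entity": "mgr.np0005532583.orhywt",
        "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"],
    }
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        print(ret, outbuf.decode(), outs)
    finally:
        cluster.shutdown()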
Nov 23 04:49:05 localhost podman[288938]: 2025-11-23 09:49:05.924155268 +0000 UTC m=+0.088646469 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS) Nov 23 04:49:05 localhost systemd[1]: tmp-crun.KsamKr.mount: Deactivated successfully. Nov 23 04:49:05 localhost podman[288937]: 2025-11-23 09:49:05.981852254 +0000 UTC m=+0.148796910 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:49:05 localhost podman[288937]: 2025-11-23 09:49:05.99536382 +0000 UTC m=+0.162308386 container exec_died 
7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible) Nov 23 04:49:06 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:49:06 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:06 localhost ceph-mon[288117]: Reconfiguring crash.np0005532583 (monmap changed)... Nov 23 04:49:06 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:06 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain Nov 23 04:49:06 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:06 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:06 localhost ceph-mon[288117]: Reconfiguring crash.np0005532584 (monmap changed)... 
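The ovn_controller and multipathd entries above are periodic health checks: systemd starts a transient "podman healthcheck run" unit, podman records health_status=healthy, and the unit deactivates. The same check can be run by hand; a sketch using one of the container IDs from the log:

    import subprocess

    # "podman healthcheck run" exits 0 when the configured test (here the
    # /openstack/healthcheck script mounted into the container) passes.
    cid = "900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291"  # ovn_controller
    result = subprocess.run(["podman", "healthcheck", "run", cid])
    print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")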
Nov 23 04:49:06 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:06 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:49:06 localhost podman[288938]: 2025-11-23 09:49:06.050893919 +0000 UTC m=+0.215385161 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:49:06 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:49:06 localhost podman[289033]: Nov 23 04:49:06 localhost podman[289033]: 2025-11-23 09:49:06.475677853 +0000 UTC m=+0.080456877 container create 702cd5bd021b55d730ad1850d8dc168a563f2699f27da060f7ba6bc08df1977f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_archimedes, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-09-24T08:57:55, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=553) Nov 23 04:49:06 localhost systemd[1]: Started libpod-conmon-702cd5bd021b55d730ad1850d8dc168a563f2699f27da060f7ba6bc08df1977f.scope. Nov 23 04:49:06 localhost systemd[1]: Started libcrun container. 
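The metadata podman prints for the rhceph container events (GIT_COMMIT, release=553, version=7, build-date, and so on) is carried as image labels. A sketch that reads those labels straight from the pulled image; the label names are taken from the log lines above:

    import json
    import subprocess

    image = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"
    raw = subprocess.run(["podman", "image", "inspect", image],
                         capture_output=True, text=True, check=True).stdout
    info = json.loads(raw)[0]
    # Labels may appear at the top level or under Config depending on version.
    labels = info.get("Labels") or info.get("Config", {}).get("Labels", {})
    for key in ("version", "release", "GIT_COMMIT", "build-date"):
        print(key, "=", labels.get(key))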
Nov 23 04:49:06 localhost podman[289033]: 2025-11-23 09:49:06.442393248 +0000 UTC m=+0.047172302 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:06 localhost podman[289033]: 2025-11-23 09:49:06.54803491 +0000 UTC m=+0.152813944 container init 702cd5bd021b55d730ad1850d8dc168a563f2699f27da060f7ba6bc08df1977f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_archimedes, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, description=Red Hat Ceph Storage 7) Nov 23 04:49:06 localhost podman[289033]: 2025-11-23 09:49:06.558929736 +0000 UTC m=+0.163708750 container start 702cd5bd021b55d730ad1850d8dc168a563f2699f27da060f7ba6bc08df1977f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_archimedes, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, name=rhceph, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:49:06 localhost podman[289033]: 2025-11-23 09:49:06.559217734 +0000 UTC m=+0.163996818 container attach 702cd5bd021b55d730ad1850d8dc168a563f2699f27da060f7ba6bc08df1977f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_archimedes, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , release=553, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc.) Nov 23 04:49:06 localhost inspiring_archimedes[289048]: 167 167 Nov 23 04:49:06 localhost systemd[1]: libpod-702cd5bd021b55d730ad1850d8dc168a563f2699f27da060f7ba6bc08df1977f.scope: Deactivated successfully. Nov 23 04:49:06 localhost podman[289033]: 2025-11-23 09:49:06.563506677 +0000 UTC m=+0.168285701 container died 702cd5bd021b55d730ad1850d8dc168a563f2699f27da060f7ba6bc08df1977f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_archimedes, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64) Nov 23 04:49:06 localhost podman[289053]: 2025-11-23 09:49:06.65915165 +0000 UTC m=+0.086912656 container remove 702cd5bd021b55d730ad1850d8dc168a563f2699f27da060f7ba6bc08df1977f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_archimedes, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , version=7, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, name=rhceph, description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:49:06 localhost systemd[1]: libpod-conmon-702cd5bd021b55d730ad1850d8dc168a563f2699f27da060f7ba6bc08df1977f.scope: Deactivated successfully. 
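The short-lived rhceph containers above (inspiring_archimedes and, later, silly_feynman and cool_jang) each print "167 167" and exit immediately. The assumption here is that this is cephadm probing the uid and gid of the ceph account inside the image before deploying daemons; a sketch of an equivalent probe, with the stat invocation being an assumption rather than something shown in the log:

    import subprocess

    image = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    uid, gid = (int(x) for x in out.split())
    print(uid, gid)  # the log above shows 167 167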
Nov 23 04:49:06 localhost openstack_network_exporter[241732]: ERROR 09:49:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:49:06 localhost openstack_network_exporter[241732]: ERROR 09:49:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:49:06 localhost openstack_network_exporter[241732]: ERROR 09:49:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:49:06 localhost openstack_network_exporter[241732]: ERROR 09:49:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:49:06 localhost openstack_network_exporter[241732]: Nov 23 04:49:06 localhost openstack_network_exporter[241732]: ERROR 09:49:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:49:06 localhost openstack_network_exporter[241732]: Nov 23 04:49:06 localhost systemd[1]: var-lib-containers-storage-overlay-397b0cc9d3151dba2b883024dd6cd5394fac312158acc2392fa2d341bddc7f1d-merged.mount: Deactivated successfully. Nov 23 04:49:07 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:07 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' Nov 23 04:49:07 localhost ceph-mon[288117]: Reconfiguring osd.2 (monmap changed)... Nov 23 04:49:07 localhost ceph-mon[288117]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:49:07 localhost ceph-mon[288117]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:49:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:49:07 localhost podman[289128]: Nov 23 04:49:07 localhost systemd[1]: tmp-crun.bdXwit.mount: Deactivated successfully. 
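The openstack_network_exporter errors above mean it found no appctl control sockets for ovn-northd or the OVS database server, and no userspace (dpif-netdev) datapath to query; on a compute node running only ovn-controller with kernel OVS that is expected. A quick check for the sockets, with the paths being typical defaults and therefore an assumption:

    import glob

    patterns = {
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    }
    for name, pattern in patterns.items():
        matches = glob.glob(pattern)
        print(name, "->", matches if matches else "no control socket found")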
Nov 23 04:49:07 localhost podman[289120]: 2025-11-23 09:49:07.388960432 +0000 UTC m=+0.099810612 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:49:07 localhost podman[289128]: 2025-11-23 09:49:07.392316596 +0000 UTC m=+0.080376685 container create 28693f39ace4a8b52d98f7e8d17213ceb1e3e64bfcaa7f4c42cbe17d2f0d2c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_feynman, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, vcs-type=git, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , release=553, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Nov 23 04:49:07 localhost systemd[1]: Started libpod-conmon-28693f39ace4a8b52d98f7e8d17213ceb1e3e64bfcaa7f4c42cbe17d2f0d2c27.scope. Nov 23 04:49:07 localhost systemd[1]: Started libcrun container. 
Nov 23 04:49:07 localhost podman[289128]: 2025-11-23 09:49:07.459366909 +0000 UTC m=+0.147426998 container init 28693f39ace4a8b52d98f7e8d17213ceb1e3e64bfcaa7f4c42cbe17d2f0d2c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_feynman, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=553, RELEASE=main) Nov 23 04:49:07 localhost podman[289128]: 2025-11-23 09:49:07.359937629 +0000 UTC m=+0.047997778 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:07 localhost podman[289128]: 2025-11-23 09:49:07.468672726 +0000 UTC m=+0.156732825 container start 28693f39ace4a8b52d98f7e8d17213ceb1e3e64bfcaa7f4c42cbe17d2f0d2c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_feynman, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, release=553, version=7, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.expose-services=) Nov 23 04:49:07 localhost podman[289128]: 2025-11-23 09:49:07.468945984 +0000 UTC m=+0.157006133 container attach 28693f39ace4a8b52d98f7e8d17213ceb1e3e64bfcaa7f4c42cbe17d2f0d2c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_feynman, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=553, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:49:07 localhost silly_feynman[289159]: 167 167 Nov 23 04:49:07 localhost systemd[1]: libpod-28693f39ace4a8b52d98f7e8d17213ceb1e3e64bfcaa7f4c42cbe17d2f0d2c27.scope: Deactivated successfully. Nov 23 04:49:07 localhost podman[289128]: 2025-11-23 09:49:07.47272299 +0000 UTC m=+0.160783119 container died 28693f39ace4a8b52d98f7e8d17213ceb1e3e64bfcaa7f4c42cbe17d2f0d2c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_feynman, distribution-scope=public, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, release=553, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, version=7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:49:07 localhost podman[289120]: 2025-11-23 09:49:07.519062607 +0000 UTC m=+0.229912837 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:49:07 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
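The node_exporter container above publishes port 9100 on the host ('ports': ['9100:9100'] in its config_data) with most collectors disabled apart from systemd and a few defaults. A sketch that scrapes the endpoint and counts the exported node_* samples, assuming the service is reachable on localhost:

    import urllib.request

    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        lines = resp.read().decode().splitlines()
    samples = [line for line in lines if line.startswith("node_")]
    print(f"{len(samples)} node_* samples scraped")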
Nov 23 04:49:07 localhost podman[289164]: 2025-11-23 09:49:07.580928301 +0000 UTC m=+0.091442766 container remove 28693f39ace4a8b52d98f7e8d17213ceb1e3e64bfcaa7f4c42cbe17d2f0d2c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_feynman, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.expose-services=) Nov 23 04:49:07 localhost systemd[1]: libpod-conmon-28693f39ace4a8b52d98f7e8d17213ceb1e3e64bfcaa7f4c42cbe17d2f0d2c27.scope: Deactivated successfully. Nov 23 04:49:07 localhost ceph-mon[288117]: mon.np0005532584@5(peon).osd e81 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Nov 23 04:49:07 localhost ceph-mon[288117]: mon.np0005532584@5(peon).osd e81 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Nov 23 04:49:07 localhost ceph-mon[288117]: mon.np0005532584@5(peon).osd e82 e82: 6 total, 6 up, 6 in Nov 23 04:49:07 localhost systemd[1]: session-23.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd[1]: session-26.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd[1]: session-26.scope: Consumed 3min 36.435s CPU time. Nov 23 04:49:07 localhost systemd[1]: session-24.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd[1]: session-25.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd[1]: session-17.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd[1]: session-16.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd-logind[760]: Session 16 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd-logind[760]: Session 17 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd-logind[760]: Session 24 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd-logind[760]: Session 23 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd-logind[760]: Session 25 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd-logind[760]: Session 26 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd[1]: session-14.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 23. Nov 23 04:49:07 localhost systemd[1]: session-18.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd[1]: session-20.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd[1]: session-19.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd-logind[760]: Session 14 logged out. Waiting for processes to exit. 
Nov 23 04:49:07 localhost systemd[1]: session-21.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd[1]: session-22.scope: Deactivated successfully. Nov 23 04:49:07 localhost systemd-logind[760]: Session 19 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd-logind[760]: Session 18 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd-logind[760]: Session 20 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd-logind[760]: Session 22 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd-logind[760]: Session 21 logged out. Waiting for processes to exit. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 26. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 24. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 25. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 17. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 16. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 14. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 18. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 20. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 19. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 21. Nov 23 04:49:07 localhost systemd-logind[760]: Removed session 22. Nov 23 04:49:07 localhost systemd[1]: var-lib-containers-storage-overlay-84b495ad01b058f2711ad1a9b50c7dc39652883e269cf53cda3d20f6f7685ab9-merged.mount: Deactivated successfully. Nov 23 04:49:08 localhost sshd[289186]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:49:08 localhost ceph-mon[288117]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:49:08 localhost ceph-mon[288117]: Activating manager daemon np0005532583.orhywt Nov 23 04:49:08 localhost ceph-mon[288117]: from='client.? 172.18.0.103:0/443540260' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:49:08 localhost ceph-mon[288117]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 23 04:49:08 localhost ceph-mon[288117]: Manager daemon np0005532583.orhywt is now available Nov 23 04:49:08 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532583.orhywt/mirror_snapshot_schedule"} : dispatch Nov 23 04:49:08 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532583.orhywt/mirror_snapshot_schedule"} : dispatch Nov 23 04:49:08 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532583.orhywt/trash_purge_schedule"} : dispatch Nov 23 04:49:08 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532583.orhywt/trash_purge_schedule"} : dispatch Nov 23 04:49:08 localhost systemd-logind[760]: New session 64 of user ceph-admin. Nov 23 04:49:08 localhost systemd[1]: Started Session 64 of User ceph-admin. 
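The audit entries above record an admin-issued "mgr fail" with no target, after which the standby manager np0005532583.orhywt is activated and immediately clears its predecessor's rbd_support schedule keys. The same failover can be requested from any host with an admin keyring; a sketch via the ceph CLI:

    import subprocess

    # With no daemon name, "ceph mgr fail" fails the currently active mgr so
    # that a standby takes over, matching the sequence in the log above.
    subprocess.run(["ceph", "mgr", "fail"], check=True)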
Nov 23 04:49:08 localhost ceph-mon[288117]: mon.np0005532584@5(peon).osd e82 _set_new_cache_sizes cache_size:1019653913 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:49:09 localhost podman[289298]: 2025-11-23 09:49:09.197597059 +0000 UTC m=+0.091804617 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, distribution-scope=public, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True) Nov 23 04:49:09 localhost podman[289298]: 2025-11-23 09:49:09.330452817 +0000 UTC m=+0.224660355 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, RELEASE=main, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=553, name=rhceph, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7) Nov 23 04:49:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:49:09.730 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:49:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:49:09.731 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:49:09 localhost ovn_metadata_agent[159410]: 2025-11-23 
09:49:09.731 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:49:09 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:09 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:10 localhost ceph-mon[288117]: [23/Nov/2025:09:49:09] ENGINE Bus STARTING Nov 23 04:49:10 localhost ceph-mon[288117]: [23/Nov/2025:09:49:09] ENGINE Serving on http://172.18.0.105:8765 Nov 23 04:49:10 localhost ceph-mon[288117]: [23/Nov/2025:09:49:09] ENGINE Serving on https://172.18.0.105:7150 Nov 23 04:49:10 localhost ceph-mon[288117]: [23/Nov/2025:09:49:09] ENGINE Bus STARTED Nov 23 04:49:10 localhost ceph-mon[288117]: [23/Nov/2025:09:49:09] ENGINE Client ('172.18.0.105', 46326) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 23 04:49:10 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:10 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:10 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:10 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:10 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:10 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:10 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:10 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:10 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:10 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532581", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532581", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' 
entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532582", 
"name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532582", "name": "osd_memory_target"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 04:49:12 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:49:12 localhost ceph-mon[288117]: Updating np0005532581.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:12 localhost ceph-mon[288117]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:12 localhost ceph-mon[288117]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:12 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:12 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:12 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost 
ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:49:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:49:13 localhost ceph-mon[288117]: mon.np0005532584@5(peon).osd e82 _set_new_cache_sizes cache_size:1020045605 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:49:13 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:13 localhost ceph-mon[288117]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:13 localhost ceph-mon[288117]: Updating np0005532581.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:13 localhost ceph-mon[288117]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:13 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:13 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:13 localhost ceph-mon[288117]: Updating np0005532581.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:49:13 localhost ceph-mon[288117]: Updating np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:49:13 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:49:14 localhost ceph-mon[288117]: Updating np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:49:14 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:49:14 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:49:14 localhost ceph-mon[288117]: Updating np0005532581.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:49:14 localhost ceph-mon[288117]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:49:14 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:49:14 localhost ceph-mon[288117]: Updating 
np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:49:14 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:49:14 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:14 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:15 localhost podman[290258]: Nov 23 04:49:15 localhost podman[290258]: 2025-11-23 09:49:15.482573629 +0000 UTC m=+0.072475821 container create 33d2a202efe94f0afcdf676a9fc0ceb690a1888a9048dfbd31e7da5014e2d992 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_jang, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, release=553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, RELEASE=main, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, distribution-scope=public) Nov 23 04:49:15 localhost systemd[1]: Started libpod-conmon-33d2a202efe94f0afcdf676a9fc0ceb690a1888a9048dfbd31e7da5014e2d992.scope. Nov 23 04:49:15 localhost systemd[1]: Started libcrun container. 
Nov 23 04:49:15 localhost podman[290258]: 2025-11-23 09:49:15.545383643 +0000 UTC m=+0.135285825 container init 33d2a202efe94f0afcdf676a9fc0ceb690a1888a9048dfbd31e7da5014e2d992 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_jang, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-type=git, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:49:15 localhost podman[290258]: 2025-11-23 09:49:15.452007369 +0000 UTC m=+0.041909581 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:15 localhost podman[290258]: 2025-11-23 09:49:15.556464434 +0000 UTC m=+0.146366616 container start 33d2a202efe94f0afcdf676a9fc0ceb690a1888a9048dfbd31e7da5014e2d992 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_jang, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=) Nov 23 04:49:15 localhost cool_jang[290273]: 167 167 Nov 23 04:49:15 localhost podman[290258]: 2025-11-23 09:49:15.557042591 +0000 UTC m=+0.146944833 container attach 33d2a202efe94f0afcdf676a9fc0ceb690a1888a9048dfbd31e7da5014e2d992 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_jang, architecture=x86_64, ceph=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, release=553, name=rhceph, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux ) Nov 23 04:49:15 localhost systemd[1]: libpod-33d2a202efe94f0afcdf676a9fc0ceb690a1888a9048dfbd31e7da5014e2d992.scope: Deactivated successfully. Nov 23 04:49:15 localhost podman[290258]: 2025-11-23 09:49:15.559467946 +0000 UTC m=+0.149370198 container died 33d2a202efe94f0afcdf676a9fc0ceb690a1888a9048dfbd31e7da5014e2d992 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_jang, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, name=rhceph, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:49:15 localhost ceph-mon[288117]: Reconfiguring osd.2 (monmap changed)... 
Nov 23 04:49:15 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:49:15 localhost ceph-mon[288117]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:49:15 localhost podman[290278]: 2025-11-23 09:49:15.650222009 +0000 UTC m=+0.082745337 container remove 33d2a202efe94f0afcdf676a9fc0ceb690a1888a9048dfbd31e7da5014e2d992 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_jang, CEPH_POINT_RELEASE=, ceph=True, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, distribution-scope=public, release=553, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 23 04:49:15 localhost systemd[1]: libpod-conmon-33d2a202efe94f0afcdf676a9fc0ceb690a1888a9048dfbd31e7da5014e2d992.scope: Deactivated successfully. Nov 23 04:49:16 localhost systemd[1]: var-lib-containers-storage-overlay-45c012a8b1ad47875a729eb0be8c321d69fb0076586024880bcd4effff28d439-merged.mount: Deactivated successfully. Nov 23 04:49:16 localhost podman[290356]: Nov 23 04:49:16 localhost podman[290356]: 2025-11-23 09:49:16.510831988 +0000 UTC m=+0.055283183 container create db86cbc44c4b3c1908bd90970fca7ef61919e022f49cea176ba7a05c8a16f327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_allen, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=553, io.openshift.tags=rhceph ceph, RELEASE=main, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, GIT_BRANCH=main) Nov 23 04:49:16 localhost systemd[1]: Started libpod-conmon-db86cbc44c4b3c1908bd90970fca7ef61919e022f49cea176ba7a05c8a16f327.scope. Nov 23 04:49:16 localhost systemd[1]: Started libcrun container. 
Nov 23 04:49:16 localhost podman[290356]: 2025-11-23 09:49:16.575036013 +0000 UTC m=+0.119487208 container init db86cbc44c4b3c1908bd90970fca7ef61919e022f49cea176ba7a05c8a16f327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_allen, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, vcs-type=git, RELEASE=main) Nov 23 04:49:16 localhost podman[290356]: 2025-11-23 09:49:16.481325349 +0000 UTC m=+0.025776604 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:16 localhost podman[290356]: 2025-11-23 09:49:16.584425282 +0000 UTC m=+0.128876487 container start db86cbc44c4b3c1908bd90970fca7ef61919e022f49cea176ba7a05c8a16f327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_allen, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, architecture=x86_64, version=7, maintainer=Guillaume Abrioux , name=rhceph, release=553) Nov 23 04:49:16 localhost podman[290356]: 2025-11-23 09:49:16.58467325 +0000 UTC m=+0.129124455 container attach db86cbc44c4b3c1908bd90970fca7ef61919e022f49cea176ba7a05c8a16f327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_allen, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=553, vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, 
io.buildah.version=1.33.12, ceph=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main) Nov 23 04:49:16 localhost vibrant_allen[290371]: 167 167 Nov 23 04:49:16 localhost systemd[1]: libpod-db86cbc44c4b3c1908bd90970fca7ef61919e022f49cea176ba7a05c8a16f327.scope: Deactivated successfully. Nov 23 04:49:16 localhost podman[290356]: 2025-11-23 09:49:16.58857616 +0000 UTC m=+0.133027415 container died db86cbc44c4b3c1908bd90970fca7ef61919e022f49cea176ba7a05c8a16f327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_allen, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, version=7, GIT_CLEAN=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:49:16 localhost podman[290376]: 2025-11-23 09:49:16.680338804 +0000 UTC m=+0.080156798 container remove db86cbc44c4b3c1908bd90970fca7ef61919e022f49cea176ba7a05c8a16f327 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_allen, vcs-type=git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, architecture=x86_64, name=rhceph, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:49:16 localhost systemd[1]: libpod-conmon-db86cbc44c4b3c1908bd90970fca7ef61919e022f49cea176ba7a05c8a16f327.scope: Deactivated successfully. 
Nov 23 04:49:16 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:16 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:16 localhost ceph-mon[288117]: Reconfiguring osd.5 (monmap changed)... Nov 23 04:49:16 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:49:17 localhost podman[239764]: time="2025-11-23T09:49:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:49:17 localhost podman[239764]: @ - - [23/Nov/2025:09:49:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:49:17 localhost podman[239764]: @ - - [23/Nov/2025:09:49:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18205 "" "Go-http-client/1.1" Nov 23 04:49:17 localhost systemd[1]: var-lib-containers-storage-overlay-f7052aa7e6661fecaf36502e1e5bf79f0f91b07735aeb6ee57ab368a671b4cf7-merged.mount: Deactivated successfully. Nov 23 04:49:17 localhost podman[290452]: Nov 23 04:49:17 localhost podman[290452]: 2025-11-23 09:49:17.540111816 +0000 UTC m=+0.076686300 container create 228c13f610b87ab3c91104f1d7f91c62303a65fb5764639cdb6cc641d8bea9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_greider, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=553, version=7, GIT_CLEAN=True, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55) Nov 23 04:49:17 localhost systemd[1]: Started libpod-conmon-228c13f610b87ab3c91104f1d7f91c62303a65fb5764639cdb6cc641d8bea9c5.scope. Nov 23 04:49:17 localhost systemd[1]: Started libcrun container. 
Nov 23 04:49:17 localhost podman[290452]: 2025-11-23 09:49:17.599491644 +0000 UTC m=+0.136066128 container init 228c13f610b87ab3c91104f1d7f91c62303a65fb5764639cdb6cc641d8bea9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_greider, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, release=553, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main) Nov 23 04:49:17 localhost podman[290452]: 2025-11-23 09:49:17.608837311 +0000 UTC m=+0.145411795 container start 228c13f610b87ab3c91104f1d7f91c62303a65fb5764639cdb6cc641d8bea9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_greider, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc.) 
Nov 23 04:49:17 localhost podman[290452]: 2025-11-23 09:49:17.609049049 +0000 UTC m=+0.145623533 container attach 228c13f610b87ab3c91104f1d7f91c62303a65fb5764639cdb6cc641d8bea9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_greider, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, release=553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, GIT_BRANCH=main) Nov 23 04:49:17 localhost podman[290452]: 2025-11-23 09:49:17.510956249 +0000 UTC m=+0.047530764 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:17 localhost unruffled_greider[290467]: 167 167 Nov 23 04:49:17 localhost systemd[1]: libpod-228c13f610b87ab3c91104f1d7f91c62303a65fb5764639cdb6cc641d8bea9c5.scope: Deactivated successfully. Nov 23 04:49:17 localhost podman[290452]: 2025-11-23 09:49:17.612054911 +0000 UTC m=+0.148629395 container died 228c13f610b87ab3c91104f1d7f91c62303a65fb5764639cdb6cc641d8bea9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_greider, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, RELEASE=main, io.openshift.expose-services=, ceph=True, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, release=553, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, name=rhceph, version=7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 23 04:49:17 localhost podman[290472]: 2025-11-23 09:49:17.705793816 +0000 UTC m=+0.081854310 container remove 228c13f610b87ab3c91104f1d7f91c62303a65fb5764639cdb6cc641d8bea9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_greider, maintainer=Guillaume Abrioux , io.openshift.expose-services=, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, release=553, ceph=True, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, architecture=x86_64, io.buildah.version=1.33.12) Nov 23 04:49:17 localhost systemd[1]: libpod-conmon-228c13f610b87ab3c91104f1d7f91c62303a65fb5764639cdb6cc641d8bea9c5.scope: Deactivated successfully. Nov 23 04:49:17 localhost ceph-mon[288117]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:49:17 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:17 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:17 localhost ceph-mon[288117]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... Nov 23 04:49:17 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:49:17 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:49:17 localhost ceph-mon[288117]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:49:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:49:18 localhost podman[290507]: 2025-11-23 09:49:18.216444892 +0000 UTC m=+0.088518595 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 04:49:18 localhost podman[290507]: 2025-11-23 09:49:18.23002609 +0000 UTC m=+0.102099793 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is 
a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350) Nov 23 04:49:18 localhost ceph-mon[288117]: mon.np0005532584@5(peon).osd e82 _set_new_cache_sizes cache_size:1020054514 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:49:18 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:49:18 localhost systemd[1]: var-lib-containers-storage-overlay-18c6b6eeeb09280b8d48c6f0a5ece4e000c26ee65f2a8378541a266739cd19fc-merged.mount: Deactivated successfully. Nov 23 04:49:18 localhost podman[290563]: Nov 23 04:49:18 localhost podman[290563]: 2025-11-23 09:49:18.659278082 +0000 UTC m=+0.085232324 container create 4896428f074fcdf739f487a23ce8c844aa9736b0de872349172d664611f360b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_wilson, release=553, name=rhceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=7, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main) Nov 23 04:49:18 localhost systemd[1]: Started libpod-conmon-4896428f074fcdf739f487a23ce8c844aa9736b0de872349172d664611f360b2.scope. Nov 23 04:49:18 localhost systemd[1]: Started libcrun container. 
Nov 23 04:49:18 localhost podman[290563]: 2025-11-23 09:49:18.626301008 +0000 UTC m=+0.052255270 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:18 localhost podman[290563]: 2025-11-23 09:49:18.733707223 +0000 UTC m=+0.159661455 container init 4896428f074fcdf739f487a23ce8c844aa9736b0de872349172d664611f360b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_wilson, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, distribution-scope=public, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, release=553) Nov 23 04:49:18 localhost podman[290563]: 2025-11-23 09:49:18.742667189 +0000 UTC m=+0.168621451 container start 4896428f074fcdf739f487a23ce8c844aa9736b0de872349172d664611f360b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_wilson, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., architecture=x86_64, release=553, name=rhceph, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.tags=rhceph ceph) Nov 23 04:49:18 localhost podman[290563]: 2025-11-23 09:49:18.743588708 +0000 UTC m=+0.169542960 container attach 4896428f074fcdf739f487a23ce8c844aa9736b0de872349172d664611f360b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_wilson, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, distribution-scope=public, 
build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:49:18 localhost focused_wilson[290578]: 167 167 Nov 23 04:49:18 localhost systemd[1]: libpod-4896428f074fcdf739f487a23ce8c844aa9736b0de872349172d664611f360b2.scope: Deactivated successfully. Nov 23 04:49:18 localhost podman[290563]: 2025-11-23 09:49:18.750220231 +0000 UTC m=+0.176174503 container died 4896428f074fcdf739f487a23ce8c844aa9736b0de872349172d664611f360b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_wilson, distribution-scope=public, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:49:18 localhost podman[290583]: 2025-11-23 09:49:18.859162564 +0000 UTC m=+0.096421179 container remove 4896428f074fcdf739f487a23ce8c844aa9736b0de872349172d664611f360b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_wilson, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:49:18 localhost systemd[1]: libpod-conmon-4896428f074fcdf739f487a23ce8c844aa9736b0de872349172d664611f360b2.scope: Deactivated successfully. 
Nov 23 04:49:18 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:18 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:18 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:18 localhost ceph-mon[288117]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... Nov 23 04:49:18 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:49:18 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:49:18 localhost ceph-mon[288117]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:49:18 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:18 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:18 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:49:19 localhost systemd[1]: var-lib-containers-storage-overlay-6b8f572412952e81e426183aef64d6ba3660e99cc3809c60b1df6a209c090021-merged.mount: Deactivated successfully. Nov 23 04:49:19 localhost podman[290654]: Nov 23 04:49:19 localhost podman[290654]: 2025-11-23 09:49:19.567399303 +0000 UTC m=+0.078144987 container create 59f75b3f9c2281c73f62d9002f8277e95b1e1e89f766f7f856d5fd5495c51d93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatterjee, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, description=Red Hat Ceph Storage 7, version=7, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.expose-services=) Nov 23 04:49:19 localhost systemd[1]: Started libpod-conmon-59f75b3f9c2281c73f62d9002f8277e95b1e1e89f766f7f856d5fd5495c51d93.scope. Nov 23 04:49:19 localhost systemd[1]: Started libcrun container. 
Nov 23 04:49:19 localhost podman[290654]: 2025-11-23 09:49:19.633665332 +0000 UTC m=+0.144411016 container init 59f75b3f9c2281c73f62d9002f8277e95b1e1e89f766f7f856d5fd5495c51d93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatterjee, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, release=553, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., version=7, RELEASE=main, io.openshift.expose-services=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 04:49:19 localhost podman[290654]: 2025-11-23 09:49:19.537659467 +0000 UTC m=+0.048405181 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:19 localhost podman[290654]: 2025-11-23 09:49:19.642504243 +0000 UTC m=+0.153249927 container start 59f75b3f9c2281c73f62d9002f8277e95b1e1e89f766f7f856d5fd5495c51d93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatterjee, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, release=553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, name=rhceph, GIT_CLEAN=True, vcs-type=git, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:49:19 localhost podman[290654]: 2025-11-23 09:49:19.64273221 +0000 UTC m=+0.153477904 container attach 59f75b3f9c2281c73f62d9002f8277e95b1e1e89f766f7f856d5fd5495c51d93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatterjee, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, vendor=Red Hat, Inc., ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, 
CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=) Nov 23 04:49:19 localhost friendly_chatterjee[290670]: 167 167 Nov 23 04:49:19 localhost systemd[1]: libpod-59f75b3f9c2281c73f62d9002f8277e95b1e1e89f766f7f856d5fd5495c51d93.scope: Deactivated successfully. Nov 23 04:49:19 localhost podman[290654]: 2025-11-23 09:49:19.646418445 +0000 UTC m=+0.157164139 container died 59f75b3f9c2281c73f62d9002f8277e95b1e1e89f766f7f856d5fd5495c51d93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatterjee, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, release=553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=) Nov 23 04:49:19 localhost podman[290675]: 2025-11-23 09:49:19.739954723 +0000 UTC m=+0.084330926 container remove 59f75b3f9c2281c73f62d9002f8277e95b1e1e89f766f7f856d5fd5495c51d93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatterjee, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, RELEASE=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public) Nov 23 04:49:19 localhost systemd[1]: libpod-conmon-59f75b3f9c2281c73f62d9002f8277e95b1e1e89f766f7f856d5fd5495c51d93.scope: Deactivated successfully. Nov 23 04:49:20 localhost systemd[1]: var-lib-containers-storage-overlay-d9222880c74e48b43382871334282d20aa9fd2dae47525f318f608e6f23c90e1-merged.mount: Deactivated successfully. 
Nov 23 04:49:20 localhost ceph-mon[288117]: Reconfiguring mon.np0005532584 (monmap changed)... Nov 23 04:49:20 localhost ceph-mon[288117]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:49:20 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:20 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:20 localhost ceph-mon[288117]: Reconfiguring crash.np0005532585 (monmap changed)... Nov 23 04:49:20 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:20 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:20 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:49:20 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:20 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:20 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 23 04:49:21 localhost ceph-mon[288117]: Reconfiguring osd.0 (monmap changed)... Nov 23 04:49:21 localhost ceph-mon[288117]: Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:49:21 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:21 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:21 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 23 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:49:22 localhost ceph-mon[288117]: Reconfiguring osd.3 (monmap changed)... 
Nov 23 04:49:22 localhost ceph-mon[288117]: Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:49:22 localhost podman[290692]: 2025-11-23 09:49:22.896281329 +0000 UTC m=+0.067724775 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:49:22 localhost podman[290692]: 2025-11-23 09:49:22.906186693 +0000 UTC m=+0.077630159 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:49:22 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:49:22 localhost podman[290691]: 2025-11-23 09:49:22.96650263 +0000 UTC m=+0.141122674 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:49:23 localhost podman[290691]: 2025-11-23 09:49:23.006503241 +0000 UTC m=+0.181123345 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 04:49:23 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:49:23 localhost ceph-mon[288117]: mon.np0005532584@5(peon).osd e82 _set_new_cache_sizes cache_size:1020054726 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:49:23 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:23 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:23 localhost ceph-mon[288117]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... Nov 23 04:49:23 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:49:23 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:49:23 localhost ceph-mon[288117]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:49:23 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:23 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' Nov 23 04:49:23 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:49:23 localhost ceph-mon[288117]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:49:24 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55cf28e19600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 23 04:49:24 localhost ceph-mgr[286671]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Nov 23 04:49:24 localhost ceph-mgr[286671]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Nov 23 04:49:24 localhost ceph-mon[288117]: mon.np0005532584@5(peon) e7 my rank is now 4 (was 5) Nov 23 04:49:24 localhost ceph-mgr[286671]: client.0 ms_handle_reset on v2:172.18.0.106:3300/0 Nov 23 04:49:24 localhost ceph-mgr[286671]: client.0 ms_handle_reset on v2:172.18.0.106:3300/0 Nov 23 04:49:24 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55cf28e191e0 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Nov 23 04:49:24 localhost ceph-mon[288117]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:49:24 localhost ceph-mon[288117]: paxos.4).electionLogic(26) init, last seen epoch 26 Nov 23 04:49:24 localhost ceph-mon[288117]: mon.np0005532584@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:49:24 localhost ceph-mon[288117]: mon.np0005532584@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:49:24 localhost ceph-mon[288117]: 
mon.np0005532584@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:49:24 localhost ceph-mon[288117]: mon.np0005532584@4(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:49:24 localhost ceph-mon[288117]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... Nov 23 04:49:24 localhost ceph-mon[288117]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:49:24 localhost ceph-mon[288117]: Remove daemons mon.np0005532581 Nov 23 04:49:24 localhost ceph-mon[288117]: Safe to remove mon.np0005532581: new quorum should be ['np0005532583', 'np0005532582', 'np0005532586', 'np0005532585', 'np0005532584'] (from ['np0005532583', 'np0005532582', 'np0005532586', 'np0005532585', 'np0005532584']) Nov 23 04:49:24 localhost ceph-mon[288117]: Removing monitor np0005532581 from monmap... Nov 23 04:49:24 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "mon rm", "name": "np0005532581"} : dispatch Nov 23 04:49:24 localhost ceph-mon[288117]: Removing daemon mon.np0005532581 from np0005532581.localdomain -- ports [] Nov 23 04:49:24 localhost ceph-mon[288117]: mon.np0005532586 calling monitor election Nov 23 04:49:24 localhost ceph-mon[288117]: mon.np0005532582 calling monitor election Nov 23 04:49:24 localhost ceph-mon[288117]: mon.np0005532583 calling monitor election Nov 23 04:49:24 localhost ceph-mon[288117]: mon.np0005532585 calling monitor election Nov 23 04:49:24 localhost ceph-mon[288117]: mon.np0005532584 calling monitor election Nov 23 04:49:24 localhost ceph-mon[288117]: mon.np0005532583 is new leader, mons np0005532583,np0005532582,np0005532586,np0005532585,np0005532584 in quorum (ranks 0,1,2,3,4) Nov 23 04:49:24 localhost ceph-mon[288117]: overall HEALTH_OK Nov 23 04:49:24 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:24 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:24 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:49:25 localhost ceph-mon[288117]: Reconfiguring mon.np0005532585 (monmap changed)... Nov 23 04:49:25 localhost ceph-mon[288117]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:49:25 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:25 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:25 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:26 localhost ceph-mon[288117]: Reconfiguring crash.np0005532586 (monmap changed)... 
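The monmap change above (mon.np0005532581 removed, followed by a fresh election and a five-monitor quorum) is driven by the cephadm mgr; a rough CLI equivalent of the same steps, shown only as a sketch since cephadm normally performs this itself when the mon placement or host labels change:

    ceph quorum_status -f json-pretty   # confirm the expected new quorum beforehand
    ceph mon rm np0005532581            # same command the mgr dispatches ({"prefix": "mon rm"})
    ceph -s                             # expect HEALTH_OK with mons np0005532583,82,86,85,84 as logged above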
Nov 23 04:49:26 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain Nov 23 04:49:26 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:26 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:26 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 23 04:49:27 localhost ceph-mon[288117]: Reconfiguring osd.1 (monmap changed)... Nov 23 04:49:27 localhost ceph-mon[288117]: Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:49:27 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:27 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:27 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 23 04:49:27 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:28 localhost ceph-mon[288117]: mon.np0005532584@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:49:28 localhost ceph-mon[288117]: Reconfiguring osd.4 (monmap changed)... Nov 23 04:49:28 localhost ceph-mon[288117]: Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:49:28 localhost ceph-mon[288117]: Removed label mon from host np0005532581.localdomain Nov 23 04:49:28 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:28 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:28 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:49:29 localhost ceph-mon[288117]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)... Nov 23 04:49:29 localhost ceph-mon[288117]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain Nov 23 04:49:29 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:29 localhost ceph-mon[288117]: Removed label mgr from host np0005532581.localdomain Nov 23 04:49:29 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:29 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:29 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:49:30 localhost ceph-mon[288117]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)... 
Nov 23 04:49:30 localhost ceph-mon[288117]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain Nov 23 04:49:30 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:30 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:30 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:49:30 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:49:31 localhost ceph-mon[288117]: Reconfiguring mon.np0005532586 (monmap changed)... Nov 23 04:49:31 localhost ceph-mon[288117]: Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain Nov 23 04:49:31 localhost ceph-mon[288117]: Removed label _admin from host np0005532581.localdomain Nov 23 04:49:31 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:31 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:31 localhost systemd[1]: tmp-crun.ywZduc.mount: Deactivated successfully. Nov 23 04:49:31 localhost podman[290735]: 2025-11-23 09:49:31.910486871 +0000 UTC m=+0.092717165 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 04:49:31 localhost podman[290735]: 2025-11-23 09:49:31.921274852 +0000 UTC m=+0.103505186 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:49:31 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:49:32 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:32 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:32 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:49:32 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:32 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: mon.np0005532584@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. 
Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:33.934183) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891373934258, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11881, "num_deletes": 513, "total_data_size": 17522758, "memory_usage": 18007296, "flush_reason": "Manual Compaction"} Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Nov 23 04:49:33 localhost ceph-mon[288117]: Removing np0005532581.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:33 localhost ceph-mon[288117]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:33 localhost ceph-mon[288117]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:33 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:33 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:33 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:49:33 localhost ceph-mon[288117]: Removing np0005532581.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:49:33 localhost ceph-mon[288117]: Removing np0005532581.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:49:33 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891373987978, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12135464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11886, "table_properties": {"data_size": 12079081, "index_size": 30101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25221, "raw_key_size": 263943, "raw_average_key_size": 26, 
"raw_value_size": 11907722, "raw_average_value_size": 1181, "num_data_blocks": 1146, "num_entries": 10082, "num_filter_entries": 10082, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891323, "oldest_key_time": 1763891323, "file_creation_time": 1763891373, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b789f4ce-9d2b-45a3-864d-0b3de17929ef", "db_session_id": "DTL4VO9AR2HPWFJ8W48F", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 53919 microseconds, and 26019 cpu microseconds. Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:33.988098) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12135464 bytes OK Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:33.988126) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:33.989941) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:33.989962) EVENT_LOG_v1 {"time_micros": 1763891373989957, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:33.989978) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 17444478, prev total WAL file size 17469189, number of live WAL files 2. Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:33.992568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. 
'7061786F73003130353433' seq:0, type:0; will stop at (end) Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(2012B)] Nov 23 04:49:33 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891373992705, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12137476, "oldest_snapshot_seqno": -1} Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9573 keys, 12127639 bytes, temperature: kUnknown Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891374052626, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12127639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12072584, "index_size": 30058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23941, "raw_key_size": 255527, "raw_average_key_size": 26, "raw_value_size": 11907780, "raw_average_value_size": 1243, "num_data_blocks": 1144, "num_entries": 9573, "num_filter_entries": 9573, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891323, "oldest_key_time": 0, "file_creation_time": 1763891373, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b789f4ce-9d2b-45a3-864d-0b3de17929ef", "db_session_id": "DTL4VO9AR2HPWFJ8W48F", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:34.052974) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12127639 bytes Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:34.054567) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.3 rd, 202.1 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.6, 0.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10087, records dropped: 514 output_compression: NoCompression Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:34.054596) EVENT_LOG_v1 {"time_micros": 1763891374054583, "job": 4, "event": "compaction_finished", "compaction_time_micros": 60003, "compaction_time_cpu_micros": 35238, "output_level": 6, "num_output_files": 1, "total_output_size": 12127639, "num_input_records": 10087, "num_output_records": 9573, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891374056364, "job": 4, "event": "table_file_deletion", "file_number": 14} Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891374056408, "job": 4, "event": "table_file_deletion", "file_number": 8} Nov 23 04:49:34 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:33.992443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:49:34 localhost ceph-mon[288117]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:34 localhost ceph-mon[288117]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:34 localhost ceph-mon[288117]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:34 localhost ceph-mon[288117]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:34 localhost ceph-mon[288117]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:49:34 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:34 localhost ceph-mon[288117]: Removing daemon mgr.np0005532581.sxlgsx from np0005532581.localdomain -- ports [9283, 8765] Nov 23 04:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 04:49:36 localhost systemd[1]: tmp-crun.il6F2E.mount: Deactivated successfully. Nov 23 04:49:36 localhost podman[291091]: 2025-11-23 09:49:36.665186961 +0000 UTC m=+0.089011601 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 23 04:49:36 localhost podman[291092]: 2025-11-23 09:49:36.706901745 +0000 UTC m=+0.129003811 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:49:36 localhost openstack_network_exporter[241732]: ERROR 09:49:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket 
files found for ovn-northd Nov 23 04:49:36 localhost openstack_network_exporter[241732]: ERROR 09:49:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:49:36 localhost openstack_network_exporter[241732]: ERROR 09:49:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:49:36 localhost openstack_network_exporter[241732]: ERROR 09:49:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:49:36 localhost openstack_network_exporter[241732]: Nov 23 04:49:36 localhost openstack_network_exporter[241732]: ERROR 09:49:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:49:36 localhost openstack_network_exporter[241732]: Nov 23 04:49:36 localhost podman[291091]: 2025-11-23 09:49:36.732593116 +0000 UTC m=+0.156417806 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:49:36 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
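Each "Started /usr/bin/podman healthcheck run <id>" line above is a transient systemd unit firing the container's configured healthcheck; the same check can be run by hand, as a sketch (container name taken from the log, and the ps filter assumed to be supported by this podman version):

    podman healthcheck run multipathd && echo healthy      # exit 0 == healthy, matching health_status=healthy above
    podman ps --filter health=healthy --format '{{.Names}}'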
Nov 23 04:49:36 localhost podman[291092]: 2025-11-23 09:49:36.774733233 +0000 UTC m=+0.196835289 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller) Nov 23 04:49:36 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:49:37 localhost ceph-mon[288117]: Removing key for mgr.np0005532581.sxlgsx Nov 23 04:49:37 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth rm", "entity": "mgr.np0005532581.sxlgsx"} : dispatch Nov 23 04:49:37 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005532581.sxlgsx"}]': finished Nov 23 04:49:37 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:37 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
Nov 23 04:49:37 localhost podman[291135]: 2025-11-23 09:49:37.918919658 +0000 UTC m=+0.105686823 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:49:37 localhost podman[291135]: 2025-11-23 09:49:37.926562164 +0000 UTC m=+0.113329329 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:49:37 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 04:49:38 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:38 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:38 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:38 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:49:38 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:38 localhost ceph-mon[288117]: mon.np0005532584@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:49:39 localhost ceph-mon[288117]: Reconfiguring crash.np0005532581 (monmap changed)... Nov 23 04:49:39 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532581.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:39 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532581 on np0005532581.localdomain Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.790088) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379790147, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 462, "num_deletes": 256, "total_data_size": 303488, "memory_usage": 313272, "flush_reason": "Manual Compaction"} Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379799399, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 194587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11891, "largest_seqno": 12348, "table_properties": {"data_size": 191999, "index_size": 571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7023, "raw_average_key_size": 19, "raw_value_size": 186389, "raw_average_value_size": 514, "num_data_blocks": 26, "num_entries": 362, "num_filter_entries": 362, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; 
use_zstd_dict_trainer=1; ", "creation_time": 1763891373, "oldest_key_time": 1763891373, "file_creation_time": 1763891379, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b789f4ce-9d2b-45a3-864d-0b3de17929ef", "db_session_id": "DTL4VO9AR2HPWFJ8W48F", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 9351 microseconds, and 1455 cpu microseconds. Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.799442) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 194587 bytes OK Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.799464) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.801660) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.801679) EVENT_LOG_v1 {"time_micros": 1763891379801673, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.801700) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 300514, prev total WAL file size 300838, number of live WAL files 2. Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.802412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353231' seq:72057594037927935, type:22 .. 
'6C6F676D0033373734' seq:0, type:0; will stop at (end) Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(190KB)], [15(11MB)] Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379802478, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12322226, "oldest_snapshot_seqno": -1} Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 9401 keys, 12213973 bytes, temperature: kUnknown Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379864730, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 12213973, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12159233, "index_size": 30127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23557, "raw_key_size": 253102, "raw_average_key_size": 26, "raw_value_size": 11996726, "raw_average_value_size": 1276, "num_data_blocks": 1145, "num_entries": 9401, "num_filter_entries": 9401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891323, "oldest_key_time": 0, "file_creation_time": 1763891379, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b789f4ce-9d2b-45a3-864d-0b3de17929ef", "db_session_id": "DTL4VO9AR2HPWFJ8W48F", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
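The rocksdb "Manual Compaction" flush/compaction cycles above are the monitor compacting its own store (/var/lib/ceph/mon/ceph-np0005532584/store.db); if one needed to trigger or check this by hand, a sketch, assuming admin access to the cluster and noting that on cephadm deployments the logged path is the one inside the mon container:

    ceph tell mon.np0005532584 compact                     # request a manual store compaction
    du -sh /var/lib/ceph/mon/ceph-np0005532584/store.db    # store size, path as logged above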
Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.865089) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12213973 bytes Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.867154) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.7 rd, 196.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.6 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(126.1) write-amplify(62.8) OK, records in: 9935, records dropped: 534 output_compression: NoCompression Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.867190) EVENT_LOG_v1 {"time_micros": 1763891379867174, "job": 6, "event": "compaction_finished", "compaction_time_micros": 62321, "compaction_time_cpu_micros": 35390, "output_level": 6, "num_output_files": 1, "total_output_size": 12213973, "num_input_records": 9935, "num_output_records": 9401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379867360, "job": 6, "event": "table_file_deletion", "file_number": 17} Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379869537, "job": 6, "event": "table_file_deletion", "file_number": 15} Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.802280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.869588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.869596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.869599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.869602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:49:39 localhost ceph-mon[288117]: rocksdb: (Original Log Time 2025/11/23-09:49:39.869605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:49:40 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:40 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:40 localhost ceph-mon[288117]: Reconfiguring crash.np0005532582 (monmap changed)... 
Nov 23 04:49:40 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:40 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain Nov 23 04:49:40 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:40 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:41 localhost ceph-mon[288117]: Reconfiguring mon.np0005532582 (monmap changed)... Nov 23 04:49:41 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:49:41 localhost ceph-mon[288117]: Reconfiguring daemon mon.np0005532582 on np0005532582.localdomain Nov 23 04:49:41 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:41 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:41 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532582.gilwrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:49:42 localhost ceph-mon[288117]: Reconfiguring mgr.np0005532582.gilwrz (monmap changed)... Nov 23 04:49:42 localhost ceph-mon[288117]: Reconfiguring daemon mgr.np0005532582.gilwrz on np0005532582.localdomain Nov 23 04:49:42 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:42 localhost ceph-mon[288117]: Added label _no_schedule to host np0005532581.localdomain Nov 23 04:49:42 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:42 localhost ceph-mon[288117]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005532581.localdomain Nov 23 04:49:42 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:42 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:42 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:49:43 localhost ceph-mon[288117]: mon.np0005532584@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:49:43 localhost ceph-mon[288117]: Reconfiguring mon.np0005532583 (monmap changed)... 
Nov 23 04:49:43 localhost ceph-mon[288117]: Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain
Nov 23 04:49:43 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:49:43 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:49:43 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:49:43 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:44 localhost ceph-mon[288117]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)...
Nov 23 04:49:44 localhost ceph-mon[288117]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain
Nov 23 04:49:44 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:49:44 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:49:44 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:44 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:49:44 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain"} : dispatch
Nov 23 04:49:44 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain"}]': finished
Nov 23 04:49:45 localhost podman[291230]:
Nov 23 04:49:45 localhost podman[291230]: 2025-11-23 09:49:45.460130973 +0000 UTC m=+0.077923349 container create a7b1653514bcb817573a171163bfba6284a607c8212900d600820f7874364a68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_noether, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, release=553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 04:49:45 localhost systemd[1]: Started libpod-conmon-a7b1653514bcb817573a171163bfba6284a607c8212900d600820f7874364a68.scope.
Nov 23 04:49:45 localhost systemd[1]: Started libcrun container.
Nov 23 04:49:45 localhost podman[291230]: 2025-11-23 09:49:45.425156577 +0000 UTC m=+0.042948963 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:45 localhost podman[291230]: 2025-11-23 09:49:45.53052695 +0000 UTC m=+0.148319336 container init a7b1653514bcb817573a171163bfba6284a607c8212900d600820f7874364a68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_noether, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, com.redhat.component=rhceph-container, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:49:45 localhost podman[291230]: 2025-11-23 09:49:45.540243889 +0000 UTC m=+0.158036275 container start a7b1653514bcb817573a171163bfba6284a607c8212900d600820f7874364a68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_noether, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:49:45 localhost podman[291230]: 2025-11-23 09:49:45.540875488 +0000 UTC m=+0.158667904 container attach a7b1653514bcb817573a171163bfba6284a607c8212900d600820f7874364a68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_noether, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_CLEAN=True, version=7, RELEASE=main, com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red 
Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:49:45 localhost objective_noether[291245]: 167 167 Nov 23 04:49:45 localhost systemd[1]: libpod-a7b1653514bcb817573a171163bfba6284a607c8212900d600820f7874364a68.scope: Deactivated successfully. Nov 23 04:49:45 localhost podman[291230]: 2025-11-23 09:49:45.544152959 +0000 UTC m=+0.161945355 container died a7b1653514bcb817573a171163bfba6284a607c8212900d600820f7874364a68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_noether, vcs-type=git, release=553, GIT_CLEAN=True, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 04:49:45 localhost podman[291250]: 2025-11-23 09:49:45.633906121 +0000 UTC m=+0.081261402 container remove a7b1653514bcb817573a171163bfba6284a607c8212900d600820f7874364a68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_noether, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:49:45 localhost systemd[1]: libpod-conmon-a7b1653514bcb817573a171163bfba6284a607c8212900d600820f7874364a68.scope: Deactivated successfully. Nov 23 04:49:45 localhost ceph-mon[288117]: Reconfiguring crash.np0005532583 (monmap changed)... 
Nov 23 04:49:45 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain Nov 23 04:49:45 localhost ceph-mon[288117]: Removed host np0005532581.localdomain Nov 23 04:49:45 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:45 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:45 localhost ceph-mon[288117]: Reconfiguring crash.np0005532584 (monmap changed)... Nov 23 04:49:45 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:45 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:49:45 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:45 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:45 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:49:46 localhost sshd[291315]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:49:46 localhost podman[291321]: Nov 23 04:49:46 localhost podman[291321]: 2025-11-23 09:49:46.32685658 +0000 UTC m=+0.080032535 container create ac8c46704e8fb51ed89013f1d99fa3a8ad2ac9ed376295ca54952221a8977266 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_diffie, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:49:46 localhost systemd-logind[760]: New session 65 of user tripleo-admin. Nov 23 04:49:46 localhost systemd[1]: Created slice User Slice of UID 1003. Nov 23 04:49:46 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Nov 23 04:49:46 localhost systemd[1]: Started libpod-conmon-ac8c46704e8fb51ed89013f1d99fa3a8ad2ac9ed376295ca54952221a8977266.scope. Nov 23 04:49:46 localhost systemd[1]: Started libcrun container. Nov 23 04:49:46 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Nov 23 04:49:46 localhost systemd[1]: Starting User Manager for UID 1003... 
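The entries above also complete the removal of host np0005532581.localdomain: the _no_schedule and SpecialHostLabels.DRAIN_CONF_KEYRING labels were added a few seconds earlier, its mgr/cephadm/host.np0005532581.localdomain config-key was deleted, and "Removed host" marks it leaving the inventory. The operator-facing equivalent is the orchestrator's drain-then-remove flow; the snippet below is only an inferred illustration of that flow (the exact commands are not shown in this journal), with a hypothetical ceph() helper:

import subprocess

HOST = "np0005532581.localdomain"  # the host being drained and removed above

def ceph(*args: str) -> str:
    # Hypothetical helper: shell out to the ceph CLI and return stdout.
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

ceph("orch", "host", "drain", HOST)   # applies the _no_schedule label seen in the log
print(ceph("orch", "ps", HOST))       # re-run until no daemons remain on the host
ceph("orch", "host", "rm", HOST)      # final removal, as recorded by the mon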
Nov 23 04:49:46 localhost podman[291321]: 2025-11-23 09:49:46.380698807 +0000 UTC m=+0.133874782 container init ac8c46704e8fb51ed89013f1d99fa3a8ad2ac9ed376295ca54952221a8977266 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_diffie, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=553) Nov 23 04:49:46 localhost podman[291321]: 2025-11-23 09:49:46.390267161 +0000 UTC m=+0.143443106 container start ac8c46704e8fb51ed89013f1d99fa3a8ad2ac9ed376295ca54952221a8977266 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_diffie, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=) Nov 23 04:49:46 localhost podman[291321]: 2025-11-23 09:49:46.39054823 +0000 UTC m=+0.143724205 container attach ac8c46704e8fb51ed89013f1d99fa3a8ad2ac9ed376295ca54952221a8977266 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_diffie, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, description=Red Hat Ceph 
Storage 7, version=7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, com.redhat.component=rhceph-container) Nov 23 04:49:46 localhost podman[291321]: 2025-11-23 09:49:46.2927549 +0000 UTC m=+0.045930905 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:46 localhost interesting_diffie[291338]: 167 167 Nov 23 04:49:46 localhost systemd[1]: libpod-ac8c46704e8fb51ed89013f1d99fa3a8ad2ac9ed376295ca54952221a8977266.scope: Deactivated successfully. Nov 23 04:49:46 localhost podman[291321]: 2025-11-23 09:49:46.393518282 +0000 UTC m=+0.146694247 container died ac8c46704e8fb51ed89013f1d99fa3a8ad2ac9ed376295ca54952221a8977266 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_diffie, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git) Nov 23 04:49:46 localhost systemd[1]: var-lib-containers-storage-overlay-afdbc25f89e00d7bdd853fa3a96c993fea211cf3fd1d74a30b69eb26f5f7fbe7-merged.mount: Deactivated successfully. Nov 23 04:49:46 localhost systemd[1]: var-lib-containers-storage-overlay-06ecbf6ab735983521aa428d152ff535439f680c5b190a51ed6f56be3654a684-merged.mount: Deactivated successfully. 
Nov 23 04:49:46 localhost podman[291345]: 2025-11-23 09:49:46.496758988 +0000 UTC m=+0.090492936 container remove ac8c46704e8fb51ed89013f1d99fa3a8ad2ac9ed376295ca54952221a8977266 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_diffie, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, GIT_CLEAN=True, io.buildah.version=1.33.12, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=) Nov 23 04:49:46 localhost systemd[1]: libpod-conmon-ac8c46704e8fb51ed89013f1d99fa3a8ad2ac9ed376295ca54952221a8977266.scope: Deactivated successfully. Nov 23 04:49:46 localhost systemd[291341]: Queued start job for default target Main User Target. Nov 23 04:49:46 localhost systemd[291341]: Created slice User Application Slice. Nov 23 04:49:46 localhost systemd[291341]: Started Mark boot as successful after the user session has run 2 minutes. Nov 23 04:49:46 localhost systemd[291341]: Started Daily Cleanup of User's Temporary Directories. Nov 23 04:49:46 localhost systemd[291341]: Reached target Paths. Nov 23 04:49:46 localhost systemd[291341]: Reached target Timers. Nov 23 04:49:46 localhost systemd[291341]: Starting D-Bus User Message Bus Socket... Nov 23 04:49:46 localhost systemd[291341]: Starting Create User's Volatile Files and Directories... Nov 23 04:49:46 localhost systemd[291341]: Listening on D-Bus User Message Bus Socket. Nov 23 04:49:46 localhost systemd[291341]: Reached target Sockets. Nov 23 04:49:46 localhost systemd[291341]: Finished Create User's Volatile Files and Directories. Nov 23 04:49:46 localhost systemd[291341]: Reached target Basic System. Nov 23 04:49:46 localhost systemd[291341]: Reached target Main User Target. Nov 23 04:49:46 localhost systemd[291341]: Startup finished in 162ms. Nov 23 04:49:46 localhost systemd[1]: Started User Manager for UID 1003. Nov 23 04:49:46 localhost systemd[1]: Started Session 65 of User tripleo-admin. Nov 23 04:49:46 localhost ceph-mon[288117]: Reconfiguring osd.2 (monmap changed)... 
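Each reconfiguration above is bracketed by a short-lived rhceph-7-rhel9 helper container: podman logs create, init, start and attach, the container prints a single "167 167" line, and within a few hundred milliseconds it dies and is removed. A small sketch for grouping these journal entries by container ID to confirm that every helper ends with a remove event; the regular expression is an assumption based only on the "container <event> <id>" wording visible in these lines:

import re
from collections import defaultdict

# 'container <event> <64-hex id>' as it appears in the podman journal lines above.
EVENT_RE = re.compile(r'container (?P<event>\w+) (?P<cid>[0-9a-f]{64})')

def lifecycle(journal_lines):
    """Group podman container events (create, init, start, died, remove, ...) by ID."""
    events = defaultdict(list)
    for entry in journal_lines:
        match = EVENT_RE.search(entry)
        if match:
            events[match.group("cid")].append(match.group("event"))
    return events

# Hypothetical usage against an exported journal file:
# for cid, evs in lifecycle(open("ceph-adopt.log")).items():
#     print(cid[:12], "->", " ".join(evs))  # a cleaned-up helper ends with 'remove'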
Nov 23 04:49:46 localhost ceph-mon[288117]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:49:46 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:46 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:46 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:49:47 localhost podman[239764]: time="2025-11-23T09:49:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:49:47 localhost podman[239764]: @ - - [23/Nov/2025:09:49:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:49:47 localhost podman[239764]: @ - - [23/Nov/2025:09:49:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18209 "" "Go-http-client/1.1" Nov 23 04:49:47 localhost python3[291550]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.103/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 23 04:49:47 localhost podman[291561]: Nov 23 04:49:47 localhost podman[291561]: 2025-11-23 09:49:47.353402284 +0000 UTC m=+0.083552242 container create b4ccf03dfc1428cd45819fb98fc31d2e65a2c9d7fcc343cf8e60108b0e4a0411 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_meninsky, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, release=553, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7) Nov 23 04:49:47 localhost systemd[1]: Started libpod-conmon-b4ccf03dfc1428cd45819fb98fc31d2e65a2c9d7fcc343cf8e60108b0e4a0411.scope. Nov 23 04:49:47 localhost systemd[1]: Started libcrun container. 
Nov 23 04:49:47 localhost podman[291561]: 2025-11-23 09:49:47.320619936 +0000 UTC m=+0.050769914 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:47 localhost podman[291561]: 2025-11-23 09:49:47.426938887 +0000 UTC m=+0.157088845 container init b4ccf03dfc1428cd45819fb98fc31d2e65a2c9d7fcc343cf8e60108b0e4a0411 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_meninsky, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., distribution-scope=public, release=553, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, com.redhat.component=rhceph-container, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhceph) Nov 23 04:49:47 localhost podman[291561]: 2025-11-23 09:49:47.439301858 +0000 UTC m=+0.169451816 container start b4ccf03dfc1428cd45819fb98fc31d2e65a2c9d7fcc343cf8e60108b0e4a0411 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_meninsky, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , name=rhceph, version=7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, ceph=True, io.buildah.version=1.33.12, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 04:49:47 localhost podman[291561]: 2025-11-23 09:49:47.439581437 +0000 UTC m=+0.169731445 container attach b4ccf03dfc1428cd45819fb98fc31d2e65a2c9d7fcc343cf8e60108b0e4a0411 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_meninsky, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, 
CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, release=553, maintainer=Guillaume Abrioux , ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.33.12) Nov 23 04:49:47 localhost youthful_meninsky[291594]: 167 167 Nov 23 04:49:47 localhost systemd[1]: libpod-b4ccf03dfc1428cd45819fb98fc31d2e65a2c9d7fcc343cf8e60108b0e4a0411.scope: Deactivated successfully. Nov 23 04:49:47 localhost podman[291561]: 2025-11-23 09:49:47.441779605 +0000 UTC m=+0.171929603 container died b4ccf03dfc1428cd45819fb98fc31d2e65a2c9d7fcc343cf8e60108b0e4a0411 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_meninsky, release=553, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:49:47 localhost systemd[1]: var-lib-containers-storage-overlay-313bcaad9a8ba4f5ee0b60f0379499ad7b86c137bf06349a965af46ccdb5a2b2-merged.mount: Deactivated successfully. Nov 23 04:49:47 localhost podman[291599]: 2025-11-23 09:49:47.537658116 +0000 UTC m=+0.084790031 container remove b4ccf03dfc1428cd45819fb98fc31d2e65a2c9d7fcc343cf8e60108b0e4a0411 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_meninsky, vcs-type=git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_CLEAN=True, name=rhceph, version=7, ceph=True, build-date=2025-09-24T08:57:55, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:49:47 localhost systemd[1]: libpod-conmon-b4ccf03dfc1428cd45819fb98fc31d2e65a2c9d7fcc343cf8e60108b0e4a0411.scope: Deactivated successfully. 
Nov 23 04:49:47 localhost ceph-mon[288117]: Reconfiguring osd.5 (monmap changed)... Nov 23 04:49:47 localhost ceph-mon[288117]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:49:47 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:47 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:47 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:49:48 localhost python3[291785]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.103/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:49:48 localhost ceph-mon[288117]: mon.np0005532584@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:49:48 localhost podman[291821]: Nov 23 04:49:48 localhost podman[291821]: 2025-11-23 09:49:48.326113792 +0000 UTC m=+0.073886994 container create 8fadd73045fea9506b00ada5873c47db5e87871178cfa25973f332182bbb5253 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_snyder, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:49:48 localhost systemd[1]: Started libpod-conmon-8fadd73045fea9506b00ada5873c47db5e87871178cfa25973f332182bbb5253.scope. Nov 23 04:49:48 localhost systemd[1]: Started libcrun container. 
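In parallel with the Ceph reconfiguration, the ansible tasks above persist "- ip_netmask: 172.18.0.103/24" into /etc/os-net-config/tripleo_config.yaml and then plumb the address live with "ip a add 172.18.0.103/24 dev vlan21"; a "ping -W1 -c 3 172.18.0.103" check follows a few entries later. A minimal sketch of that add-then-verify step, assuming only the interface and address shown above (the non-fatal error handling is illustrative):

import subprocess

ADDR, DEV = "172.18.0.103/24", "vlan21"  # values taken from the ansible tasks above

# Add the address (tolerating 'already assigned'), then verify it answers,
# mirroring the 'ip a add' and 'ping -W1 -c 3' commands in the journal.
subprocess.run(["ip", "a", "add", ADDR, "dev", DEV], check=False)
result = subprocess.run(["ping", "-W1", "-c", "3", ADDR.split("/")[0]], check=False)
print("address reachable" if result.returncode == 0 else "ping check failed")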
Nov 23 04:49:48 localhost podman[291821]: 2025-11-23 09:49:48.390666409 +0000 UTC m=+0.138439611 container init 8fadd73045fea9506b00ada5873c47db5e87871178cfa25973f332182bbb5253 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_snyder, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , release=553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Nov 23 04:49:48 localhost podman[291821]: 2025-11-23 09:49:48.295888073 +0000 UTC m=+0.043661305 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:48 localhost adoring_snyder[291853]: 167 167 Nov 23 04:49:48 localhost podman[291821]: 2025-11-23 09:49:48.407497887 +0000 UTC m=+0.155271109 container start 8fadd73045fea9506b00ada5873c47db5e87871178cfa25973f332182bbb5253 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_snyder, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=553, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, version=7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True) Nov 23 04:49:48 localhost systemd[1]: libpod-8fadd73045fea9506b00ada5873c47db5e87871178cfa25973f332182bbb5253.scope: Deactivated successfully. 
Nov 23 04:49:48 localhost podman[291821]: 2025-11-23 09:49:48.409039445 +0000 UTC m=+0.156812647 container attach 8fadd73045fea9506b00ada5873c47db5e87871178cfa25973f332182bbb5253 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_snyder, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, release=553, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, ceph=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7) Nov 23 04:49:48 localhost podman[291821]: 2025-11-23 09:49:48.41114962 +0000 UTC m=+0.158922832 container died 8fadd73045fea9506b00ada5873c47db5e87871178cfa25973f332182bbb5253 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_snyder, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_CLEAN=True, release=553, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux ) Nov 23 04:49:48 localhost systemd[1]: tmp-crun.dRF3dl.mount: Deactivated successfully. 
Nov 23 04:49:48 localhost podman[291852]: 2025-11-23 09:49:48.500284704 +0000 UTC m=+0.133070897 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7) Nov 23 04:49:48 localhost podman[291852]: 2025-11-23 09:49:48.513897092 +0000 UTC m=+0.146683315 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6) Nov 23 04:49:48 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:49:48 localhost systemd[1]: var-lib-containers-storage-overlay-75a8287a88dd48fdb82dbc90ac808331c7181a78f07bcf02def1cc7d1509700b-merged.mount: Deactivated successfully. 
Nov 23 04:49:48 localhost podman[291884]: 2025-11-23 09:49:48.607530884 +0000 UTC m=+0.186647246 container remove 8fadd73045fea9506b00ada5873c47db5e87871178cfa25973f332182bbb5253 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_snyder, GIT_CLEAN=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, RELEASE=main) Nov 23 04:49:48 localhost systemd[1]: libpod-conmon-8fadd73045fea9506b00ada5873c47db5e87871178cfa25973f332182bbb5253.scope: Deactivated successfully. Nov 23 04:49:48 localhost ceph-mon[288117]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... Nov 23 04:49:48 localhost ceph-mon[288117]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:49:48 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:48 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:48 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:49:48 localhost python3[292037]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.103 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 04:49:49 localhost podman[292056]: Nov 23 04:49:49 localhost podman[292056]: 2025-11-23 09:49:49.298830532 +0000 UTC m=+0.078325702 container create 921b4a10b6fb485f40dcd3284a1442a5454c18b8299d1bda2d5a48c60b351ef7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_saha, build-date=2025-09-24T08:57:55, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, 
com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, maintainer=Guillaume Abrioux ) Nov 23 04:49:49 localhost systemd[1]: Started libpod-conmon-921b4a10b6fb485f40dcd3284a1442a5454c18b8299d1bda2d5a48c60b351ef7.scope. Nov 23 04:49:49 localhost systemd[1]: Started libcrun container. Nov 23 04:49:49 localhost podman[292056]: 2025-11-23 09:49:49.361357126 +0000 UTC m=+0.140852306 container init 921b4a10b6fb485f40dcd3284a1442a5454c18b8299d1bda2d5a48c60b351ef7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_saha, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, name=rhceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main) Nov 23 04:49:49 localhost podman[292056]: 2025-11-23 09:49:49.263911636 +0000 UTC m=+0.043406836 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:49 localhost podman[292056]: 2025-11-23 09:49:49.370637741 +0000 UTC m=+0.150132911 container start 921b4a10b6fb485f40dcd3284a1442a5454c18b8299d1bda2d5a48c60b351ef7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_saha, architecture=x86_64, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, release=553, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:49:49 localhost podman[292056]: 2025-11-23 09:49:49.371148877 +0000 UTC m=+0.150644057 container attach 921b4a10b6fb485f40dcd3284a1442a5454c18b8299d1bda2d5a48c60b351ef7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_saha, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:49:49 localhost gracious_saha[292071]: 167 167 Nov 23 04:49:49 localhost systemd[1]: libpod-921b4a10b6fb485f40dcd3284a1442a5454c18b8299d1bda2d5a48c60b351ef7.scope: Deactivated successfully. Nov 23 04:49:49 localhost podman[292056]: 2025-11-23 09:49:49.37415111 +0000 UTC m=+0.153646330 container died 921b4a10b6fb485f40dcd3284a1442a5454c18b8299d1bda2d5a48c60b351ef7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_saha, com.redhat.component=rhceph-container, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, version=7, maintainer=Guillaume Abrioux , name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git) Nov 23 04:49:49 localhost podman[292076]: 2025-11-23 09:49:49.467152382 +0000 UTC m=+0.080860990 container remove 921b4a10b6fb485f40dcd3284a1442a5454c18b8299d1bda2d5a48c60b351ef7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_saha, GIT_CLEAN=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, 
distribution-scope=public) Nov 23 04:49:49 localhost systemd[1]: libpod-conmon-921b4a10b6fb485f40dcd3284a1442a5454c18b8299d1bda2d5a48c60b351ef7.scope: Deactivated successfully. Nov 23 04:49:49 localhost ceph-mon[288117]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... Nov 23 04:49:49 localhost ceph-mon[288117]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:49:49 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:49 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:49 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:49:50 localhost podman[292146]: Nov 23 04:49:50 localhost podman[292146]: 2025-11-23 09:49:50.149592216 +0000 UTC m=+0.083263183 container create fae833961326a65c0edf344abbee6575ee1de8571eb55dc50f2c9fa4b6838ee2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_wright, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7) Nov 23 04:49:50 localhost systemd[1]: Started libpod-conmon-fae833961326a65c0edf344abbee6575ee1de8571eb55dc50f2c9fa4b6838ee2.scope. Nov 23 04:49:50 localhost systemd[1]: Started libcrun container. 
Nov 23 04:49:50 localhost podman[292146]: 2025-11-23 09:49:50.111023029 +0000 UTC m=+0.044694026 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:49:50 localhost podman[292146]: 2025-11-23 09:49:50.215203146 +0000 UTC m=+0.148874113 container init fae833961326a65c0edf344abbee6575ee1de8571eb55dc50f2c9fa4b6838ee2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_wright, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:49:50 localhost podman[292146]: 2025-11-23 09:49:50.225193853 +0000 UTC m=+0.158864810 container start fae833961326a65c0edf344abbee6575ee1de8571eb55dc50f2c9fa4b6838ee2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_wright, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, release=553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64) Nov 23 04:49:50 localhost podman[292146]: 2025-11-23 09:49:50.225396149 +0000 UTC m=+0.159067116 container attach fae833961326a65c0edf344abbee6575ee1de8571eb55dc50f2c9fa4b6838ee2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_wright, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, 
name=rhceph, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:49:50 localhost compassionate_wright[292162]: 167 167 Nov 23 04:49:50 localhost systemd[1]: libpod-fae833961326a65c0edf344abbee6575ee1de8571eb55dc50f2c9fa4b6838ee2.scope: Deactivated successfully. Nov 23 04:49:50 localhost podman[292146]: 2025-11-23 09:49:50.229505206 +0000 UTC m=+0.163176213 container died fae833961326a65c0edf344abbee6575ee1de8571eb55dc50f2c9fa4b6838ee2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_wright, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:49:50 localhost podman[292167]: 2025-11-23 09:49:50.321510188 +0000 UTC m=+0.078152377 container remove fae833961326a65c0edf344abbee6575ee1de8571eb55dc50f2c9fa4b6838ee2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_wright, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, version=7, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=) Nov 23 04:49:50 localhost systemd[1]: libpod-conmon-fae833961326a65c0edf344abbee6575ee1de8571eb55dc50f2c9fa4b6838ee2.scope: Deactivated successfully. Nov 23 04:49:50 localhost systemd[1]: var-lib-containers-storage-overlay-2dbe3ce05e0b1b8b59b3007cc165cf1104a116429a50ef1bc114800ed6ac8ed8-merged.mount: Deactivated successfully. 
Nov 23 04:49:50 localhost ceph-mon[288117]: Reconfiguring mon.np0005532584 (monmap changed)... Nov 23 04:49:50 localhost ceph-mon[288117]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:49:50 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:50 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:50 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:49:51 localhost ceph-mon[288117]: Reconfiguring crash.np0005532585 (monmap changed)... Nov 23 04:49:51 localhost ceph-mon[288117]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:49:51 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:51 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:51 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 23 04:49:52 localhost ceph-mon[288117]: Reconfiguring osd.0 (monmap changed)... Nov 23 04:49:52 localhost ceph-mon[288117]: Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:49:52 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:52 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:52 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 23 04:49:52 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:53 localhost ceph-mon[288117]: mon.np0005532584@4(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:49:53 localhost systemd[1]: tmp-crun.G5eRsM.mount: Deactivated successfully. 
Nov 23 04:49:53 localhost podman[292200]: 2025-11-23 09:49:53.914444441 +0000 UTC m=+0.100050120 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 23 04:49:53 localhost podman[292200]: 2025-11-23 09:49:53.9534095 +0000 UTC m=+0.139015169 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 
Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 23 04:49:53 localhost ceph-mon[288117]: Reconfiguring osd.3 (monmap changed)... Nov 23 04:49:53 localhost ceph-mon[288117]: Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:49:53 localhost ceph-mon[288117]: Saving service mon spec with placement label:mon Nov 23 04:49:53 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:53 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:53 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:49:53 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:49:54 localhost podman[292201]: 2025-11-23 09:49:53.954869516 +0000 UTC m=+0.140052352 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:49:54 localhost podman[292201]: 2025-11-23 09:49:54.037462017 +0000 UTC m=+0.222644853 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:49:54 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:49:54 localhost ceph-mon[288117]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... 
Nov 23 04:49:54 localhost ceph-mon[288117]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:49:54 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:54 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:54 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:49:54 localhost ceph-mon[288117]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:49:55 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55cf28e18f20 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Nov 23 04:49:55 localhost ceph-mon[288117]: mon.np0005532584@4(peon) e8 removed from monmap, suicide. Nov 23 04:49:55 localhost podman[292249]: 2025-11-23 09:49:55.190019501 +0000 UTC m=+0.054898021 container died 60fd7b9d65da0c313a7bb835eee4a1d62693ac74467900ca0a7541c45edee99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532584, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , name=rhceph, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64) Nov 23 04:49:55 localhost systemd[1]: var-lib-containers-storage-overlay-4c2c871173b10c2d0e6d34c830a38baae6078d36a488e0f62a546c298a13dd6f-merged.mount: Deactivated successfully. 
Nov 23 04:49:55 localhost podman[292249]: 2025-11-23 09:49:55.231164957 +0000 UTC m=+0.096043427 container remove 60fd7b9d65da0c313a7bb835eee4a1d62693ac74467900ca0a7541c45edee99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532584, release=553, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:49:56 localhost systemd[1]: ceph-46550e70-79cb-5f55-bf6d-1204b97e083b@mon.np0005532584.service: Deactivated successfully. Nov 23 04:49:56 localhost systemd[1]: Stopped Ceph mon.np0005532584 for 46550e70-79cb-5f55-bf6d-1204b97e083b. Nov 23 04:49:56 localhost systemd[1]: ceph-46550e70-79cb-5f55-bf6d-1204b97e083b@mon.np0005532584.service: Consumed 4.091s CPU time. Nov 23 04:49:56 localhost systemd[1]: Reloading. Nov 23 04:49:56 localhost systemd-sysv-generator[292437]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:49:56 localhost systemd-rc-local-generator[292433]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:49:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:49:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:49:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:49:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:49:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 23 04:49:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:49:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:49:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:49:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:49:59 localhost nova_compute[280939]: 2025-11-23 09:49:59.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:49:59 localhost ceph-mds[285431]: mds.beacon.mds.np0005532584.aoxjmw missed beacon ack from the monitors Nov 23 04:50:01 localhost nova_compute[280939]: 2025-11-23 09:50:01.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:50:01 localhost nova_compute[280939]: 2025-11-23 09:50:01.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:50:01 localhost nova_compute[280939]: 2025-11-23 09:50:01.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:50:02 localhost nova_compute[280939]: 2025-11-23 09:50:02.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:50:02 localhost nova_compute[280939]: 2025-11-23 09:50:02.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:50:02 localhost nova_compute[280939]: 2025-11-23 09:50:02.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:50:02 localhost nova_compute[280939]: 2025-11-23 09:50:02.168 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:50:02 localhost nova_compute[280939]: 2025-11-23 09:50:02.169 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:50:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:50:02 localhost podman[292446]: 2025-11-23 09:50:02.900797294 +0000 UTC m=+0.082411636 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Nov 23 04:50:02 localhost podman[292446]: 2025-11-23 09:50:02.911362989 +0000 UTC m=+0.092977321 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:50:02 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:50:03 localhost nova_compute[280939]: 2025-11-23 09:50:03.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.151 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.151 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.152 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.152 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.152 280943 DEBUG 
oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:50:04 localhost podman[292540]: Nov 23 04:50:04 localhost podman[292540]: 2025-11-23 09:50:04.489948525 +0000 UTC m=+0.084282285 container create 7884d599f1bb4a9c9b5874e67bbab029c4cd21b470fe3accbae54ef36635b19e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poitras, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, release=553, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, architecture=x86_64, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:50:04 localhost systemd[1]: Started libpod-conmon-7884d599f1bb4a9c9b5874e67bbab029c4cd21b470fe3accbae54ef36635b19e.scope. Nov 23 04:50:04 localhost systemd[1]: Started libcrun container. Nov 23 04:50:04 localhost podman[292540]: 2025-11-23 09:50:04.556757931 +0000 UTC m=+0.151091691 container init 7884d599f1bb4a9c9b5874e67bbab029c4cd21b470fe3accbae54ef36635b19e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poitras, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, release=553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7) Nov 23 04:50:04 localhost podman[292540]: 2025-11-23 09:50:04.457991842 +0000 UTC m=+0.052325692 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:04 localhost podman[292540]: 2025-11-23 09:50:04.568314677 +0000 UTC m=+0.162648457 container start 7884d599f1bb4a9c9b5874e67bbab029c4cd21b470fe3accbae54ef36635b19e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poitras, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a 
fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, name=rhceph, GIT_CLEAN=True) Nov 23 04:50:04 localhost podman[292540]: 2025-11-23 09:50:04.568888805 +0000 UTC m=+0.163222585 container attach 7884d599f1bb4a9c9b5874e67bbab029c4cd21b470fe3accbae54ef36635b19e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poitras, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, release=553, ceph=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, RELEASE=main, build-date=2025-09-24T08:57:55, version=7, GIT_CLEAN=True) Nov 23 04:50:04 localhost tender_poitras[292555]: 167 167 Nov 23 04:50:04 localhost podman[292540]: 2025-11-23 09:50:04.57231294 +0000 UTC m=+0.166646730 container died 7884d599f1bb4a9c9b5874e67bbab029c4cd21b470fe3accbae54ef36635b19e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poitras, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, name=rhceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , ceph=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:50:04 localhost 
systemd[1]: libpod-7884d599f1bb4a9c9b5874e67bbab029c4cd21b470fe3accbae54ef36635b19e.scope: Deactivated successfully. Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.634 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:50:04 localhost podman[292560]: 2025-11-23 09:50:04.675728793 +0000 UTC m=+0.093789218 container remove 7884d599f1bb4a9c9b5874e67bbab029c4cd21b470fe3accbae54ef36635b19e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_poitras, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., version=7, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:50:04 localhost systemd[1]: libpod-conmon-7884d599f1bb4a9c9b5874e67bbab029c4cd21b470fe3accbae54ef36635b19e.scope: Deactivated successfully. Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.805 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.806 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12333MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.806 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.806 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.889 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.890 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:50:04 localhost nova_compute[280939]: 2025-11-23 09:50:04.918 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:50:05 localhost podman[292651]: Nov 23 04:50:05 localhost podman[292651]: 2025-11-23 09:50:05.356984741 +0000 UTC m=+0.070735128 container create c8e1ee3597c5ccdd77a6ed4ef5ff9f240c80f374e3a2641badac7c97924195ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_clarke, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553) Nov 23 04:50:05 localhost nova_compute[280939]: 2025-11-23 09:50:05.371 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:50:05 localhost nova_compute[280939]: 2025-11-23 09:50:05.377 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:50:05 localhost systemd[1]: Started libpod-conmon-c8e1ee3597c5ccdd77a6ed4ef5ff9f240c80f374e3a2641badac7c97924195ef.scope. 
Nov 23 04:50:05 localhost nova_compute[280939]: 2025-11-23 09:50:05.400 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:50:05 localhost nova_compute[280939]: 2025-11-23 09:50:05.401 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:50:05 localhost nova_compute[280939]: 2025-11-23 09:50:05.402 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:50:05 localhost systemd[1]: Started libcrun container. Nov 23 04:50:05 localhost podman[292651]: 2025-11-23 09:50:05.416628496 +0000 UTC m=+0.130378863 container init c8e1ee3597c5ccdd77a6ed4ef5ff9f240c80f374e3a2641badac7c97924195ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_clarke, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public) Nov 23 04:50:05 localhost podman[292651]: 2025-11-23 09:50:05.426094408 +0000 UTC m=+0.139844775 container start c8e1ee3597c5ccdd77a6ed4ef5ff9f240c80f374e3a2641badac7c97924195ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_clarke, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:50:05 localhost podman[292651]: 2025-11-23 09:50:05.426649335 +0000 UTC m=+0.140399702 container attach c8e1ee3597c5ccdd77a6ed4ef5ff9f240c80f374e3a2641badac7c97924195ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_clarke, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, release=553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph) Nov 23 04:50:05 localhost loving_clarke[292668]: 167 167 Nov 23 04:50:05 localhost podman[292651]: 2025-11-23 09:50:05.328942528 +0000 UTC m=+0.042692925 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:05 localhost systemd[1]: libpod-c8e1ee3597c5ccdd77a6ed4ef5ff9f240c80f374e3a2641badac7c97924195ef.scope: Deactivated successfully. 
Nov 23 04:50:05 localhost podman[292651]: 2025-11-23 09:50:05.430330978 +0000 UTC m=+0.144081345 container died c8e1ee3597c5ccdd77a6ed4ef5ff9f240c80f374e3a2641badac7c97924195ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_clarke, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, name=rhceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:50:05 localhost systemd[1]: var-lib-containers-storage-overlay-a2365820eb6ef4b43f0f3a36b2ae0aad19c64298f3098955375a96e5c90b3a4e-merged.mount: Deactivated successfully. Nov 23 04:50:05 localhost systemd[1]: var-lib-containers-storage-overlay-7347eb0360e3a4382f54885ce4a4ff3296b1fb74904b10fff964649150e34c14-merged.mount: Deactivated successfully. Nov 23 04:50:05 localhost podman[292673]: 2025-11-23 09:50:05.537382683 +0000 UTC m=+0.092507558 container remove c8e1ee3597c5ccdd77a6ed4ef5ff9f240c80f374e3a2641badac7c97924195ef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_clarke, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vcs-type=git, name=rhceph, ceph=True, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:50:05 localhost systemd[1]: libpod-conmon-c8e1ee3597c5ccdd77a6ed4ef5ff9f240c80f374e3a2641badac7c97924195ef.scope: Deactivated successfully. 
Nov 23 04:50:06 localhost podman[292751]: Nov 23 04:50:06 localhost podman[292751]: 2025-11-23 09:50:06.422433953 +0000 UTC m=+0.079093755 container create f2be808d86787c7e685895b71d21ddf177d77e74b81c1796cf82cf8244209330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_faraday, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True) Nov 23 04:50:06 localhost systemd[1]: Started libpod-conmon-f2be808d86787c7e685895b71d21ddf177d77e74b81c1796cf82cf8244209330.scope. Nov 23 04:50:06 localhost systemd[1]: Started libcrun container. Nov 23 04:50:06 localhost podman[292751]: 2025-11-23 09:50:06.387749606 +0000 UTC m=+0.044409438 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:06 localhost podman[292751]: 2025-11-23 09:50:06.4931702 +0000 UTC m=+0.149830022 container init f2be808d86787c7e685895b71d21ddf177d77e74b81c1796cf82cf8244209330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_faraday, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, release=553, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.expose-services=, name=rhceph) Nov 23 04:50:06 localhost podman[292751]: 2025-11-23 09:50:06.504289332 +0000 UTC m=+0.160949134 container start f2be808d86787c7e685895b71d21ddf177d77e74b81c1796cf82cf8244209330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_faraday, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, 
description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=) Nov 23 04:50:06 localhost podman[292751]: 2025-11-23 09:50:06.505059126 +0000 UTC m=+0.161718968 container attach f2be808d86787c7e685895b71d21ddf177d77e74b81c1796cf82cf8244209330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_faraday, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git) Nov 23 04:50:06 localhost objective_faraday[292766]: 167 167 Nov 23 04:50:06 localhost systemd[1]: libpod-f2be808d86787c7e685895b71d21ddf177d77e74b81c1796cf82cf8244209330.scope: Deactivated successfully. 
Nov 23 04:50:06 localhost podman[292751]: 2025-11-23 09:50:06.508477151 +0000 UTC m=+0.165136983 container died f2be808d86787c7e685895b71d21ddf177d77e74b81c1796cf82cf8244209330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_faraday, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7) Nov 23 04:50:06 localhost systemd[1]: var-lib-containers-storage-overlay-b38e00d67f01398ba4e9d8d0f98ec8ed136711f87bdafab99df6fd84619e81f9-merged.mount: Deactivated successfully. Nov 23 04:50:06 localhost podman[292771]: 2025-11-23 09:50:06.604101004 +0000 UTC m=+0.082776519 container remove f2be808d86787c7e685895b71d21ddf177d77e74b81c1796cf82cf8244209330 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_faraday, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.openshift.expose-services=, ceph=True, release=553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:50:06 localhost systemd[1]: libpod-conmon-f2be808d86787c7e685895b71d21ddf177d77e74b81c1796cf82cf8244209330.scope: Deactivated successfully. 
Nov 23 04:50:06 localhost openstack_network_exporter[241732]: ERROR 09:50:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:50:06 localhost openstack_network_exporter[241732]: ERROR 09:50:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:50:06 localhost openstack_network_exporter[241732]: Nov 23 04:50:06 localhost openstack_network_exporter[241732]: ERROR 09:50:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:50:06 localhost openstack_network_exporter[241732]: ERROR 09:50:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:50:06 localhost openstack_network_exporter[241732]: ERROR 09:50:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:50:06 localhost openstack_network_exporter[241732]: Nov 23 04:50:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:50:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:50:06 localhost podman[292795]: 2025-11-23 09:50:06.892384597 +0000 UTC m=+0.075778063 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:50:06 localhost podman[292795]: 2025-11-23 09:50:06.930118738 +0000 UTC m=+0.113512254 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 
'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:50:06 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:50:06 localhost podman[292794]: 2025-11-23 09:50:06.952574519 +0000 UTC m=+0.135922944 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true) Nov 23 04:50:06 localhost podman[292794]: 2025-11-23 09:50:06.989490406 +0000 UTC m=+0.172838821 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd) Nov 23 04:50:07 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:50:07 localhost nova_compute[280939]: 2025-11-23 09:50:07.403 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:50:07 localhost podman[292890]: Nov 23 04:50:07 localhost podman[292890]: 2025-11-23 09:50:07.441041724 +0000 UTC m=+0.077710443 container create 8f747afd8b01c30ac4d906342bb70a2b83b67b671e8f2e6bf3859b0cb22ea616 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_murdock, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, build-date=2025-09-24T08:57:55, distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, version=7, release=553, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:50:07 localhost systemd[1]: Started libpod-conmon-8f747afd8b01c30ac4d906342bb70a2b83b67b671e8f2e6bf3859b0cb22ea616.scope. Nov 23 04:50:07 localhost systemd[1]: Started libcrun container. 
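[editor's note] The ovn_controller and multipathd healthcheck entries above embed their Kolla-style container definition as a Python-literal dict after "config_data=". A small sketch (same journal.txt assumption as before; extract_config_data is a hypothetical helper) that pulls those dicts out with a brace-balancing scan and parses them with ast.literal_eval:

```python
# Sketch only: extract the config_data={...} payloads embedded in the podman
# health_status entries above and parse them as Python literals.
import ast

def extract_config_data(text):
    """Yield each config_data dict found in the journal text."""
    marker = "config_data="
    start = text.find(marker)
    while start != -1:
        i = start + len(marker)
        if i < len(text) and text[i] == "{":
            depth, j, in_str, quote = 0, i, False, ""
            while j < len(text):
                ch = text[j]
                if in_str:
                    if ch == quote and text[j - 1] != "\\":
                        in_str = False
                elif ch in ("'", '"'):
                    in_str, quote = True, ch
                elif ch == "{":
                    depth += 1
                elif ch == "}":
                    depth -= 1
                    if depth == 0:
                        yield ast.literal_eval(text[i : j + 1])
                        break
                j += 1
        start = text.find(marker, start + 1)

if __name__ == "__main__":
    text = open("journal.txt", encoding="utf-8", errors="replace").read()
    for cfg in extract_config_data(text):
        print(cfg.get("image"), cfg.get("healthcheck", {}).get("test"))
```

Run against the entries above this prints the ovn_controller and multipathd images together with their '/openstack/healthcheck' test commands.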
Nov 23 04:50:07 localhost podman[292890]: 2025-11-23 09:50:07.407725549 +0000 UTC m=+0.044394298 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:07 localhost podman[292890]: 2025-11-23 09:50:07.512632147 +0000 UTC m=+0.149300866 container init 8f747afd8b01c30ac4d906342bb70a2b83b67b671e8f2e6bf3859b0cb22ea616 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_murdock, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, build-date=2025-09-24T08:57:55, RELEASE=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_CLEAN=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=) Nov 23 04:50:07 localhost bold_murdock[292905]: 167 167 Nov 23 04:50:07 localhost podman[292890]: 2025-11-23 09:50:07.524495123 +0000 UTC m=+0.161163842 container start 8f747afd8b01c30ac4d906342bb70a2b83b67b671e8f2e6bf3859b0cb22ea616 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_murdock, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, release=553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.expose-services=) Nov 23 04:50:07 localhost podman[292890]: 2025-11-23 09:50:07.52702311 +0000 UTC m=+0.163691879 container attach 8f747afd8b01c30ac4d906342bb70a2b83b67b671e8f2e6bf3859b0cb22ea616 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_murdock, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, 
io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, release=553, distribution-scope=public) Nov 23 04:50:07 localhost systemd[1]: libpod-8f747afd8b01c30ac4d906342bb70a2b83b67b671e8f2e6bf3859b0cb22ea616.scope: Deactivated successfully. Nov 23 04:50:07 localhost podman[292890]: 2025-11-23 09:50:07.531168578 +0000 UTC m=+0.167837347 container died 8f747afd8b01c30ac4d906342bb70a2b83b67b671e8f2e6bf3859b0cb22ea616 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_murdock, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vcs-type=git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-09-24T08:57:55, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main) Nov 23 04:50:07 localhost systemd[1]: var-lib-containers-storage-overlay-ef79132c97b83a5391f4720d31a17e60f4e078c908e59e38028d494b4482585e-merged.mount: Deactivated successfully. Nov 23 04:50:07 localhost podman[292911]: 2025-11-23 09:50:07.625668397 +0000 UTC m=+0.087851586 container remove 8f747afd8b01c30ac4d906342bb70a2b83b67b671e8f2e6bf3859b0cb22ea616 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_murdock, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, release=553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, RELEASE=main) Nov 23 04:50:07 localhost systemd[1]: libpod-conmon-8f747afd8b01c30ac4d906342bb70a2b83b67b671e8f2e6bf3859b0cb22ea616.scope: Deactivated successfully. 
Nov 23 04:50:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:50:08 localhost podman[292987]: Nov 23 04:50:08 localhost podman[292987]: 2025-11-23 09:50:08.368741947 +0000 UTC m=+0.074372461 container create 0f0298c5125ef2c4a7813cb5cc112a260b5b34a8c52b5e034192faf3c0a1c59c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bose, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, release=553, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True) Nov 23 04:50:08 localhost systemd[1]: Started libpod-conmon-0f0298c5125ef2c4a7813cb5cc112a260b5b34a8c52b5e034192faf3c0a1c59c.scope. Nov 23 04:50:08 localhost systemd[1]: Started libcrun container. Nov 23 04:50:08 localhost podman[292975]: 2025-11-23 09:50:08.373145033 +0000 UTC m=+0.097119671 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:50:08 localhost podman[292987]: 2025-11-23 09:50:08.329092747 +0000 UTC m=+0.034723321 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:08 localhost podman[292987]: 2025-11-23 09:50:08.430431045 +0000 UTC m=+0.136061579 container init 0f0298c5125ef2c4a7813cb5cc112a260b5b34a8c52b5e034192faf3c0a1c59c 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bose, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.buildah.version=1.33.12, RELEASE=main, vendor=Red Hat, Inc., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux ) Nov 23 04:50:08 localhost podman[292987]: 2025-11-23 09:50:08.438595717 +0000 UTC m=+0.144226231 container start 0f0298c5125ef2c4a7813cb5cc112a260b5b34a8c52b5e034192faf3c0a1c59c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bose, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, release=553, io.openshift.expose-services=) Nov 23 04:50:08 localhost podman[292987]: 2025-11-23 09:50:08.439050441 +0000 UTC m=+0.144680985 container attach 0f0298c5125ef2c4a7813cb5cc112a260b5b34a8c52b5e034192faf3c0a1c59c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bose, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=553, name=rhceph, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red 
Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:50:08 localhost jolly_bose[293049]: 167 167 Nov 23 04:50:08 localhost systemd[1]: libpod-0f0298c5125ef2c4a7813cb5cc112a260b5b34a8c52b5e034192faf3c0a1c59c.scope: Deactivated successfully. Nov 23 04:50:08 localhost podman[292987]: 2025-11-23 09:50:08.441377632 +0000 UTC m=+0.147008166 container died 0f0298c5125ef2c4a7813cb5cc112a260b5b34a8c52b5e034192faf3c0a1c59c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bose, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, name=rhceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., ceph=True, vcs-type=git, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:50:08 localhost podman[292975]: 2025-11-23 09:50:08.457577781 +0000 UTC m=+0.181552399 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:50:08 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:50:08 localhost systemd[1]: var-lib-containers-storage-overlay-0ea74ed053ebc9b81dbda1e3b800b91f703d8492dd072f3abc4a2e11fd29d78c-merged.mount: Deactivated successfully. 
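[editor's note] Each healthcheck above follows the same transient pattern: systemd starts "/usr/bin/podman healthcheck run <id>", podman logs a health_status event, then exec_died, then the per-container .service deactivates. A short sketch (same journal.txt assumption; regex field names taken from the entries above) that tallies the reported health_status per container name, e.g. ovn_controller, multipathd and node_exporter all reporting "healthy" here:

```python
# Sketch only: count health_status values per container_name from the podman
# healthcheck entries shown above.
import re
from collections import Counter, defaultdict

ENTRY_RE = re.compile(
    r"container health_status .*?name=([\w.-]+), health_status=(\w+)"
)

def tally(path="journal.txt"):
    counts = defaultdict(Counter)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for name, status in ENTRY_RE.findall(line):
                counts[name][status] += 1
    return counts

if __name__ == "__main__":
    for name, statuses in tally().items():
        print(name, dict(statuses))
```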
Nov 23 04:50:08 localhost podman[293055]: 2025-11-23 09:50:08.558154507 +0000 UTC m=+0.109840883 container remove 0f0298c5125ef2c4a7813cb5cc112a260b5b34a8c52b5e034192faf3c0a1c59c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_bose, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., release=553, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:50:08 localhost systemd[1]: libpod-conmon-0f0298c5125ef2c4a7813cb5cc112a260b5b34a8c52b5e034192faf3c0a1c59c.scope: Deactivated successfully. Nov 23 04:50:09 localhost podman[293111]: Nov 23 04:50:09 localhost podman[293111]: 2025-11-23 09:50:09.058416373 +0000 UTC m=+0.086917246 container create 449cc771bae3aa9e17c58b3bdb22ab472efcbc45b36ea82da9fdf207be62a708 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_kowalevski, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc.) Nov 23 04:50:09 localhost systemd[1]: Started libpod-conmon-449cc771bae3aa9e17c58b3bdb22ab472efcbc45b36ea82da9fdf207be62a708.scope. Nov 23 04:50:09 localhost systemd[1]: Started libcrun container. 
Nov 23 04:50:09 localhost podman[293111]: 2025-11-23 09:50:09.026038847 +0000 UTC m=+0.054539720 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:09 localhost podman[293111]: 2025-11-23 09:50:09.132662259 +0000 UTC m=+0.161163132 container init 449cc771bae3aa9e17c58b3bdb22ab472efcbc45b36ea82da9fdf207be62a708 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_kowalevski, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, GIT_BRANCH=main) Nov 23 04:50:09 localhost podman[293111]: 2025-11-23 09:50:09.141844781 +0000 UTC m=+0.170345674 container start 449cc771bae3aa9e17c58b3bdb22ab472efcbc45b36ea82da9fdf207be62a708 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_kowalevski, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, release=553, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, io.buildah.version=1.33.12, GIT_BRANCH=main, ceph=True, vcs-type=git, architecture=x86_64, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:50:09 localhost podman[293111]: 2025-11-23 09:50:09.142327046 +0000 UTC m=+0.170827939 container attach 449cc771bae3aa9e17c58b3bdb22ab472efcbc45b36ea82da9fdf207be62a708 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_kowalevski, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, 
build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, RELEASE=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main) Nov 23 04:50:09 localhost angry_kowalevski[293127]: 167 167 Nov 23 04:50:09 localhost systemd[1]: libpod-449cc771bae3aa9e17c58b3bdb22ab472efcbc45b36ea82da9fdf207be62a708.scope: Deactivated successfully. Nov 23 04:50:09 localhost podman[293111]: 2025-11-23 09:50:09.145844314 +0000 UTC m=+0.174345217 container died 449cc771bae3aa9e17c58b3bdb22ab472efcbc45b36ea82da9fdf207be62a708 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_kowalevski, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, build-date=2025-09-24T08:57:55, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12) Nov 23 04:50:09 localhost podman[293132]: 2025-11-23 09:50:09.237146625 +0000 UTC m=+0.081863101 container remove 449cc771bae3aa9e17c58b3bdb22ab472efcbc45b36ea82da9fdf207be62a708 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_kowalevski, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:50:09 localhost systemd[1]: libpod-conmon-449cc771bae3aa9e17c58b3bdb22ab472efcbc45b36ea82da9fdf207be62a708.scope: Deactivated successfully. 
Nov 23 04:50:09 localhost podman[293148]: Nov 23 04:50:09 localhost podman[293148]: 2025-11-23 09:50:09.350162873 +0000 UTC m=+0.079861339 container create 319a0c1841ef2895544ba781ad926d1e76374dc884bfdde0f7f2a3154884f13c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_greider, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, release=553, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc.) Nov 23 04:50:09 localhost systemd[1]: Started libpod-conmon-319a0c1841ef2895544ba781ad926d1e76374dc884bfdde0f7f2a3154884f13c.scope. Nov 23 04:50:09 localhost systemd[1]: Started libcrun container. Nov 23 04:50:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7051359aa850e9adeb17eeb9352970267c8842580b35eefb1a1f080ef10a72/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Nov 23 04:50:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7051359aa850e9adeb17eeb9352970267c8842580b35eefb1a1f080ef10a72/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Nov 23 04:50:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7051359aa850e9adeb17eeb9352970267c8842580b35eefb1a1f080ef10a72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 04:50:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7051359aa850e9adeb17eeb9352970267c8842580b35eefb1a1f080ef10a72/merged/var/lib/ceph/mon/ceph-np0005532584 supports timestamps until 2038 (0x7fffffff) Nov 23 04:50:09 localhost podman[293148]: 2025-11-23 09:50:09.315477336 +0000 UTC m=+0.045175842 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:09 localhost podman[293148]: 2025-11-23 09:50:09.419078554 +0000 UTC m=+0.148777030 container init 319a0c1841ef2895544ba781ad926d1e76374dc884bfdde0f7f2a3154884f13c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_greider, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, release=553, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, 
CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-09-24T08:57:55) Nov 23 04:50:09 localhost podman[293148]: 2025-11-23 09:50:09.42869202 +0000 UTC m=+0.158390486 container start 319a0c1841ef2895544ba781ad926d1e76374dc884bfdde0f7f2a3154884f13c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_greider, vcs-type=git, distribution-scope=public, RELEASE=main, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:50:09 localhost podman[293148]: 2025-11-23 09:50:09.428925597 +0000 UTC m=+0.158624083 container attach 319a0c1841ef2895544ba781ad926d1e76374dc884bfdde0f7f2a3154884f13c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_greider, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12) Nov 23 04:50:09 localhost systemd[1]: var-lib-containers-storage-overlay-568a56421261312ccf3379b36aa18374844d5747355795a584b024416c82be8b-merged.mount: Deactivated successfully. Nov 23 04:50:09 localhost systemd[1]: libpod-319a0c1841ef2895544ba781ad926d1e76374dc884bfdde0f7f2a3154884f13c.scope: Deactivated successfully. 
Nov 23 04:50:09 localhost podman[293148]: 2025-11-23 09:50:09.529222544 +0000 UTC m=+0.258921030 container died 319a0c1841ef2895544ba781ad926d1e76374dc884bfdde0f7f2a3154884f13c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_greider, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, name=rhceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_BRANCH=main) Nov 23 04:50:09 localhost systemd[1]: var-lib-containers-storage-overlay-1d7051359aa850e9adeb17eeb9352970267c8842580b35eefb1a1f080ef10a72-merged.mount: Deactivated successfully. Nov 23 04:50:09 localhost podman[293189]: 2025-11-23 09:50:09.637094945 +0000 UTC m=+0.096951866 container remove 319a0c1841ef2895544ba781ad926d1e76374dc884bfdde0f7f2a3154884f13c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_greider, ceph=True, name=rhceph, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git) Nov 23 04:50:09 localhost systemd[1]: libpod-conmon-319a0c1841ef2895544ba781ad926d1e76374dc884bfdde0f7f2a3154884f13c.scope: Deactivated successfully. Nov 23 04:50:09 localhost systemd[1]: Reloading. 
Nov 23 04:50:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:50:09.731 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:50:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:50:09.733 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:50:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:50:09.733 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:50:09 localhost systemd-rc-local-generator[293224]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:50:09 localhost systemd-sysv-generator[293231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 23 04:50:09 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:09 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:09 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:09 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:50:09 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:09 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:09 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:09 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:10 localhost systemd[1]: Reloading. Nov 23 04:50:10 localhost systemd-rc-local-generator[293270]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 23 04:50:10 localhost systemd-sysv-generator[293275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
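[editor's note] During each "Reloading." pass, systemd warns about unit files it cannot fully parse: "Failed to parse service type, ignoring: notify-reload" for the virt*d units and a MemoryLimit= deprecation for insights-client-boot.service. A minimal sketch (directive strings and the /usr/lib/systemd/system path come from the messages themselves; flag_units is an illustrative name) that lists which unit files contain either directive, so the source of the reload noise can be identified:

```python
# Sketch only: scan the unit directory named in the warnings above for the
# two directives systemd complains about during reload.
from pathlib import Path

UNIT_DIR = Path("/usr/lib/systemd/system")
PATTERNS = ("Type=notify-reload", "MemoryLimit=")

def flag_units(unit_dir=UNIT_DIR):
    hits = []
    for unit in sorted(unit_dir.glob("*.service")):
        try:
            text = unit.read_text(encoding="utf-8", errors="replace")
        except OSError:
            continue  # unreadable unit files are simply skipped
        for pattern in PATTERNS:
            if pattern in text:
                hits.append((unit.name, pattern))
    return hits

if __name__ == "__main__":
    for name, pattern in flag_units():
        print(f"{name}: contains {pattern}")
```

On this host the expected output matches the warnings above: the virt*d services for Type=notify-reload and insights-client-boot.service for MemoryLimit=.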
Nov 23 04:50:10 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:10 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:10 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:10 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 23 04:50:10 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:10 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:10 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:10 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 23 04:50:10 localhost systemd[1]: Starting Ceph mon.np0005532584 for 46550e70-79cb-5f55-bf6d-1204b97e083b... Nov 23 04:50:10 localhost podman[293335]: Nov 23 04:50:11 localhost podman[293335]: 2025-11-23 09:50:10.772136689 +0000 UTC m=+0.050286259 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:11 localhost podman[293335]: 2025-11-23 09:50:11.934542406 +0000 UTC m=+1.212691916 container create 5281414736f4165d40c1ac16d5643e3ad411bc7bfa36d1d678d1e5a1d1ba8aa8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532584, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True) Nov 23 04:50:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c06fbd8b097e71d73fc501e276bea8b9f170aba5c0ab672cfbeff0f7ee32b80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 23 04:50:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c06fbd8b097e71d73fc501e276bea8b9f170aba5c0ab672cfbeff0f7ee32b80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 23 04:50:11 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/2c06fbd8b097e71d73fc501e276bea8b9f170aba5c0ab672cfbeff0f7ee32b80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 23 04:50:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c06fbd8b097e71d73fc501e276bea8b9f170aba5c0ab672cfbeff0f7ee32b80/merged/var/lib/ceph/mon/ceph-np0005532584 supports timestamps until 2038 (0x7fffffff) Nov 23 04:50:11 localhost podman[293335]: 2025-11-23 09:50:11.997184594 +0000 UTC m=+1.275334104 container init 5281414736f4165d40c1ac16d5643e3ad411bc7bfa36d1d678d1e5a1d1ba8aa8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532584, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, release=553, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:50:12 localhost podman[293335]: 2025-11-23 09:50:12.003534619 +0000 UTC m=+1.281684129 container start 5281414736f4165d40c1ac16d5643e3ad411bc7bfa36d1d678d1e5a1d1ba8aa8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532584, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vcs-type=git, release=553, description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:50:12 localhost bash[293335]: 5281414736f4165d40c1ac16d5643e3ad411bc7bfa36d1d678d1e5a1d1ba8aa8 Nov 23 04:50:12 localhost systemd[1]: Started Ceph mon.np0005532584 for 46550e70-79cb-5f55-bf6d-1204b97e083b. 
Nov 23 04:50:12 localhost ceph-mon[293353]: set uid:gid to 167:167 (ceph:ceph) Nov 23 04:50:12 localhost ceph-mon[293353]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Nov 23 04:50:12 localhost ceph-mon[293353]: pidfile_write: ignore empty --pid-file Nov 23 04:50:12 localhost ceph-mon[293353]: load: jerasure load: lrc Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: RocksDB version: 7.9.2 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Git sha 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Compile date 2025-09-23 00:00:00 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: DB SUMMARY Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: DB Session ID: G7XPCJTAARWJ01GM2KVT Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: CURRENT file: CURRENT Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: IDENTITY file: IDENTITY Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005532584/store.db dir, Total Num: 0, files: Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005532584/store.db: 000004.log size: 886 ; Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.error_if_exists: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.create_if_missing: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.paranoid_checks: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.flush_verify_memtable_count: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.env: 0x5560524a49e0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.fs: PosixFileSystem Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.info_log: 0x556054b3ed20 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_file_opening_threads: 16 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.statistics: (nil) Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.use_fsync: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_log_file_size: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_manifest_file_size: 1073741824 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.log_file_time_to_roll: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.keep_log_file_num: 1000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.recycle_log_file_num: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.allow_fallocate: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.allow_mmap_reads: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.allow_mmap_writes: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.use_direct_reads: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.create_missing_column_families: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.db_log_dir: Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.wal_dir: Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.table_cache_numshardbits: 6 Nov 23 04:50:12 localhost ceph-mon[293353]: 
rocksdb: Options.WAL_ttl_seconds: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.WAL_size_limit_MB: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.manifest_preallocation_size: 4194304 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.is_fd_close_on_exec: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.advise_random_on_open: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.db_write_buffer_size: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.write_buffer_manager: 0x556054b4f540 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.access_hint_on_compaction_start: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.random_access_max_buffer_size: 1048576 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.use_adaptive_mutex: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.rate_limiter: (nil) Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.wal_recovery_mode: 2 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.enable_thread_tracking: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.enable_pipelined_write: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.unordered_write: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.allow_concurrent_memtable_write: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.write_thread_max_yield_usec: 100 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.write_thread_slow_yield_usec: 3 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.row_cache: None Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.wal_filter: None Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.avoid_flush_during_recovery: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.allow_ingest_behind: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.two_write_queues: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.manual_wal_flush: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.wal_compression: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.atomic_flush: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.persist_stats_to_disk: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.write_dbid_to_manifest: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.log_readahead_size: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.file_checksum_gen_factory: Unknown Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.best_efforts_recovery: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.allow_data_in_errors: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.db_host_id: __hostname__ Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.enforce_single_del_contracts: true Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: 
Options.max_background_jobs: 2 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_background_compactions: -1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_subcompactions: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.avoid_flush_during_shutdown: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.delayed_write_rate : 16777216 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_total_wal_size: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.stats_dump_period_sec: 600 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.stats_persist_period_sec: 600 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.stats_history_buffer_size: 1048576 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_open_files: -1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bytes_per_sync: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.wal_bytes_per_sync: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.strict_bytes_per_sync: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_readahead_size: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_background_flushes: -1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Compression algorithms supported: Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: #011kZSTD supported: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: #011kXpressCompression supported: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: #011kBZip2Compression supported: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: #011kLZ4Compression supported: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: #011kZlibCompression supported: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: #011kLZ4HCCompression supported: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: #011kSnappyCompression supported: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Fast CRC32 supported: Supported on x86 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: DMutex implementation: pthread_mutex_t Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005532584/store.db/MANIFEST-000005 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.merge_operator: Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_filter: None Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_filter_factory: None Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.sst_partitioner_factory: None Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.memtable_factory: SkipListFactory Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.table_factory: BlockBasedTable Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556054b3e980)#012 cache_index_and_filter_blocks: 
1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x556054b3b350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.write_buffer_size: 33554432 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_write_buffer_number: 2 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compression: NoCompression Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bottommost_compression: Disabled Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.prefix_extractor: nullptr Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.num_levels: 7 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compression_opts.window_bits: -14 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compression_opts.level: 32767 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compression_opts.strategy: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: 
Options.compression_opts.use_zstd_dict_trainer: true Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compression_opts.enabled: false Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.target_file_size_base: 67108864 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.target_file_size_multiplier: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_bytes_for_level_base: 268435456 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.arena_block_size: 1048576 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.disable_auto_compactions: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.table_properties_collectors: Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.inplace_update_support: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.memtable_huge_page_size: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.bloom_locality: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.max_successive_merges: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.paranoid_file_checks: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.force_consistency_checks: 1 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.report_bg_io_stats: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.ttl: 2592000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.enable_blob_files: false Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.min_blob_size: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.blob_file_size: 268435456 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.blob_compression_type: NoCompression Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.enable_blob_garbage_collection: false Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.blob_file_starting_level: 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005532584/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9a7d578c-21aa-41c0-97ef-37d912c42473 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891412063588, "job": 1, "event": "recovery_started", "wal_files": [4]} Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891412066687, "cf_name": "default", "job": 1, "event": 
"table_file_creation", "file_number": 8, "file_size": 2012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 898, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 776, "raw_average_value_size": 155, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891412066962, "job": 1, "event": "recovery_finished"} Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556054b62e00 Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: DB pointer 0x556054c58000 Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584 does not exist in monmap, will attempt to join an existing cluster Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:50:12 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.96 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Sum 1/0 1.96 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556054b3b350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Nov 23 04:50:12 localhost ceph-mon[293353]: using public_addr v2:172.18.0.103:0/0 -> [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] Nov 23 04:50:12 localhost ceph-mon[293353]: starting mon.np0005532584 rank -1 at public addrs [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] at bind addrs [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005532584 fsid 46550e70-79cb-5f55-bf6d-1204b97e083b Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584@-1(???) 
e0 preinit fsid 46550e70-79cb-5f55-bf6d-1204b97e083b Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584@-1(synchronizing) e8 sync_obtain_latest_monmap Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584@-1(synchronizing) e8 sync_obtain_latest_monmap obtained monmap e8 Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584@-1(synchronizing).mds e16 new map Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T08:00:26.486221+0000#012modified#0112025-11-23T09:47:19.846415+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01179#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26392}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26392 members: 26392#012[mds.mds.np0005532586.mfohsb{0:26392} state up:active seq 12 addr [v2:172.18.0.108:6808/2718449296,v1:172.18.0.108:6809/2718449296] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005532585.jcltnl{-1:17133} state up:standby seq 1 addr [v2:172.18.0.107:6808/563301557,v1:172.18.0.107:6809/563301557] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005532584.aoxjmw{-1:17139} state up:standby seq 1 addr [v2:172.18.0.106:6808/2261302276,v1:172.18.0.106:6809/2261302276] compat {c=[1],r=[1],i=[17ff]}] Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584@-1(synchronizing).osd e82 crush map has features 3314933000852226048, adjusting msgr requires Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584@-1(synchronizing).osd e82 crush map has features 288514051259236352, adjusting msgr requires Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584@-1(synchronizing).osd e82 crush map has features 288514051259236352, adjusting msgr requires Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584@-1(synchronizing).osd e82 crush map has features 288514051259236352, adjusting msgr requires Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring mon.np0005532584 (monmap changed)... 
Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring crash.np0005532585 (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring osd.0 (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring osd.3 (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: Saving service mon spec with placement label:mon Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... 
Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: Remove daemons mon.np0005532584 Nov 23 04:50:12 localhost ceph-mon[293353]: Safe to remove mon.np0005532584: new quorum should be ['np0005532583', 'np0005532582', 'np0005532586', 'np0005532585'] (from ['np0005532583', 'np0005532582', 'np0005532586', 'np0005532585']) Nov 23 04:50:12 localhost ceph-mon[293353]: Removing monitor np0005532584 from monmap... Nov 23 04:50:12 localhost ceph-mon[293353]: Removing daemon mon.np0005532584 from np0005532584.localdomain -- ports [] Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532583 calling monitor election Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532582 calling monitor election Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532585 calling monitor election Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring crash.np0005532582 (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532583 is new leader, mons np0005532583,np0005532586,np0005532585 in quorum (ranks 0,2,3) Nov 23 04:50:12 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532583 calling monitor election Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532583 is new leader, mons np0005532583,np0005532582,np0005532586,np0005532585 in quorum (ranks 0,1,2,3) Nov 23 04:50:12 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532582.gilwrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532582.gilwrz (monmap changed)... 
Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532582.gilwrz on np0005532582.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring crash.np0005532583 (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring crash.np0005532584 (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring osd.2 (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring osd.5 (monmap changed)... 
Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Deploying daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring crash.np0005532585 (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring osd.0 (monmap changed)... 
Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring osd.3 (monmap changed)... Nov 23 04:50:12 localhost ceph-mon[293353]: Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:12 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:50:12 localhost ceph-mon[293353]: mon.np0005532584@-1(synchronizing).paxosservice(auth 1..36) refresh upgraded, format 0 -> 3 Nov 23 04:50:12 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55cf28e191e0 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Nov 23 04:50:14 localhost ceph-mon[293353]: mon.np0005532584@-1(probing) e9 my rank is now 4 (was -1) Nov 23 04:50:14 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:50:14 localhost ceph-mon[293353]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1 Nov 23 04:50:14 localhost ceph-mon[293353]: mon.np0005532584@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:50:17 localhost podman[239764]: time="2025-11-23T09:50:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:50:17 localhost podman[239764]: @ - - [23/Nov/2025:09:50:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:50:17 localhost podman[239764]: @ - - [23/Nov/2025:09:50:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18212 "" "Go-http-client/1.1" Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532584@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532584@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532584@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532584@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532584@4(peon) e9 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Nov 23 04:50:17 
localhost ceph-mon[293353]: mon.np0005532584@4(peon) e9 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532584@4(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:50:17 localhost ceph-mon[293353]: mgrc update_daemon_metadata mon.np0005532584 metadata {addrs=[v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005532584.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005532584.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Nov 23 04:50:17 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... Nov 23 04:50:17 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532583 calling monitor election Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532582 calling monitor election Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532585 calling monitor election Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532584 calling monitor election Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532583 is new leader, mons np0005532583,np0005532582,np0005532586 in quorum (ranks 0,1,2) Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:50:17 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532583 calling monitor election Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532582 calling monitor election Nov 23 04:50:17 localhost ceph-mon[293353]: mon.np0005532583 is new leader, mons np0005532583,np0005532582,np0005532586,np0005532585,np0005532584 in quorum (ranks 0,1,2,3,4) Nov 23 04:50:17 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:50:17 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:17 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:17 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:50:18 localhost ceph-mon[293353]: Reconfiguring crash.np0005532586 (monmap changed)... 
Nov 23 04:50:18 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain Nov 23 04:50:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:50:18 localhost systemd[1]: tmp-crun.MOytlu.mount: Deactivated successfully. Nov 23 04:50:18 localhost podman[293392]: 2025-11-23 09:50:18.9038771 +0000 UTC m=+0.089737443 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vendor=Red Hat, Inc.) Nov 23 04:50:18 localhost podman[293392]: 2025-11-23 09:50:18.921690028 +0000 UTC m=+0.107550381 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, version=9.6, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers) Nov 23 04:50:18 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:50:19 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:19 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:19 localhost ceph-mon[293353]: Reconfiguring osd.1 (monmap changed)... Nov 23 04:50:19 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 23 04:50:19 localhost ceph-mon[293353]: Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:50:19 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:19 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:20 localhost ceph-mon[293353]: Reconfiguring osd.4 (monmap changed)... 
Nov 23 04:50:20 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:50:20 localhost ceph-mon[293353]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:50:21 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:21 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:21 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)...
Nov 23 04:50:21 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:50:21 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain
Nov 23 04:50:21 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:21 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:21 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:22 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)...
Nov 23 04:50:22 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain
Nov 23 04:50:22 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:22 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:23 localhost systemd[1]: tmp-crun.S6qanY.mount: Deactivated successfully.
Nov 23 04:50:23 localhost podman[293520]: 2025-11-23 09:50:23.654504995 +0000 UTC m=+0.102251518 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, name=rhceph, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55) Nov 23 04:50:23 localhost podman[293520]: 2025-11-23 09:50:23.78103121 +0000 UTC m=+0.228777693 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, version=7, ceph=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux , release=553, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:50:24 localhost podman[293586]: 2025-11-23 09:50:24.158049123 +0000 UTC m=+0.092226410 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Nov 23 04:50:24 localhost podman[293586]: 2025-11-23 09:50:24.168149884 +0000 UTC m=+0.102327201 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:50:24 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:50:24 localhost podman[293587]: 2025-11-23 09:50:24.205463052 +0000 UTC m=+0.136190572 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:50:24 localhost podman[293587]: 2025-11-23 09:50:24.214036616 +0000 UTC m=+0.144764126 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:50:24 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:50:24 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:24 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: Reconfig service osd.default_drive_group Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' Nov 23 04:50:25 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:50:26 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e82 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Nov 23 04:50:26 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e82 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Nov 23 04:50:26 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e83 e83: 6 total, 6 up, 6 in Nov 23 04:50:26 localhost systemd[1]: session-64.scope: Deactivated successfully. Nov 23 04:50:26 localhost systemd[1]: session-64.scope: Consumed 25.813s CPU time. Nov 23 04:50:26 localhost systemd-logind[760]: Session 64 logged out. Waiting for processes to exit. Nov 23 04:50:26 localhost systemd-logind[760]: Removed session 64. 
Nov 23 04:50:26 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[293353]: from='client.? 172.18.0.200:0/3357125401' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[293353]: Activating manager daemon np0005532582.gilwrz
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt'
Nov 23 04:50:26 localhost ceph-mon[293353]: Manager daemon np0005532582.gilwrz is now available
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"}]': finished
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"}]': finished
Nov 23 04:50:27 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e83 _set_new_cache_sizes cache_size:1019475335 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:27 localhost sshd[294066]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:50:27 localhost systemd-logind[760]: New session 67 of user ceph-admin.
Nov 23 04:50:27 localhost systemd[1]: Started Session 67 of User ceph-admin.
Nov 23 04:50:27 localhost ceph-mon[293353]: removing stray HostCache host record np0005532581.localdomain.devices.0
Nov 23 04:50:27 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532582.gilwrz/mirror_snapshot_schedule"} : dispatch
Nov 23 04:50:27 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532582.gilwrz/mirror_snapshot_schedule"} : dispatch
Nov 23 04:50:27 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532582.gilwrz/trash_purge_schedule"} : dispatch
Nov 23 04:50:27 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532582.gilwrz/trash_purge_schedule"} : dispatch
Nov 23 04:50:28 localhost systemd[1]: tmp-crun.tnuhIt.mount: Deactivated successfully.
Nov 23 04:50:28 localhost podman[294181]: 2025-11-23 09:50:28.333135695 +0000 UTC m=+0.132216091 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-type=git, RELEASE=main, io.openshift.expose-services=, ceph=True, version=7, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:50:28 localhost podman[294181]: 2025-11-23 09:50:28.444530623 +0000 UTC m=+0.243611019 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, RELEASE=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_BRANCH=main) Nov 23 04:50:28 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:28 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:28 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:28 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:28 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:29 localhost ceph-mon[293353]: [23/Nov/2025:09:50:28] ENGINE Bus STARTING Nov 23 04:50:29 localhost ceph-mon[293353]: [23/Nov/2025:09:50:28] ENGINE Serving on https://172.18.0.104:7150 Nov 23 04:50:29 localhost ceph-mon[293353]: [23/Nov/2025:09:50:28] ENGINE Client ('172.18.0.104', 33588) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 23 
04:50:29 localhost ceph-mon[293353]: [23/Nov/2025:09:50:28] ENGINE Serving on http://172.18.0.104:8765 Nov 23 04:50:29 localhost ceph-mon[293353]: [23/Nov/2025:09:50:28] ENGINE Bus STARTED Nov 23 04:50:29 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:29 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:29 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:29 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:29 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 04:50:31 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.4", "name": 
"osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd/host:np0005532582", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd/host:np0005532582", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 04:50:31 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:50:31 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:50:31 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:31 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:31 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:31 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:31 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:32 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e83 _set_new_cache_sizes cache_size:1020039196 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:50:32 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:32 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:32 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:32 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:32 localhost 
ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:32 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:50:33 localhost podman[294957]: 2025-11-23 09:50:33.072836164 +0000 UTC m=+0.090206087 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:50:33 localhost podman[294957]: 2025-11-23 09:50:33.105391736 +0000 UTC m=+0.122761669 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true) Nov 23 04:50:33 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:50:33 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:33 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:33 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:33 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:33 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:33 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:33 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:33 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:33 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:33 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:33 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:33 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:34 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:34 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:34 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:34 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:34 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:34 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:34 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:34 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:34 localhost ceph-mon[293353]: Reconfiguring crash.np0005532582 (monmap changed)... 
Nov 23 04:50:34 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:50:34 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:50:34 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:34.869882) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891434869978, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11310, "num_deletes": 257, "total_data_size": 19338129, "memory_usage": 20133552, "flush_reason": "Manual Compaction"} Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891434925572, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 15130555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11315, "table_properties": {"data_size": 15069641, "index_size": 33928, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25925, "raw_key_size": 275268, "raw_average_key_size": 26, "raw_value_size": 14890546, "raw_average_value_size": 1436, "num_data_blocks": 1307, "num_entries": 10366, "num_filter_entries": 10366, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 1763891412, "file_creation_time": 1763891434, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 55749 microseconds, and 29326 cpu microseconds. 
Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:34.925648) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 15130555 bytes OK Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:34.925677) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:34.927605) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:34.927629) EVENT_LOG_v1 {"time_micros": 1763891434927621, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:34.927653) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 19261762, prev total WAL file size 19261762, number of live WAL files 2. Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:34.930931) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end) Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(14MB) 8(2012B)] Nov 23 04:50:34 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891434931056, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 15132567, "oldest_snapshot_seqno": -1} Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 10115 keys, 15127366 bytes, temperature: kUnknown Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891435001226, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 15127366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15067029, "index_size": 33932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 270425, "raw_average_key_size": 26, "raw_value_size": 14891201, "raw_average_value_size": 1472, "num_data_blocks": 1306, "num_entries": 10115, "num_filter_entries": 10115, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": 
"nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891434, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:35.001562) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 15127366 bytes Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:35.003579) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.4 rd, 215.3 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(14.4, 0.0 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10371, records dropped: 256 output_compression: NoCompression Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:35.003610) EVENT_LOG_v1 {"time_micros": 1763891435003596, "job": 4, "event": "compaction_finished", "compaction_time_micros": 70258, "compaction_time_cpu_micros": 40277, "output_level": 6, "num_output_files": 1, "total_output_size": 15127366, "num_input_records": 10371, "num_output_records": 10115, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891435005811, "job": 4, "event": "table_file_deletion", "file_number": 14} Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891435005862, "job": 4, "event": "table_file_deletion", "file_number": 8} Nov 23 04:50:35 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:34.930791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:50:35 localhost ceph-mon[293353]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Nov 23 04:50:35 localhost ceph-mon[293353]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Nov 23 04:50:35 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:35 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:35 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532582.gilwrz (monmap changed)... 
Nov 23 04:50:35 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532582.gilwrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:35 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532582.gilwrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:35 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532582.gilwrz on np0005532582.localdomain
Nov 23 04:50:36 localhost openstack_network_exporter[241732]: ERROR 09:50:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:50:36 localhost openstack_network_exporter[241732]: ERROR 09:50:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:50:36 localhost openstack_network_exporter[241732]:
Nov 23 04:50:36 localhost openstack_network_exporter[241732]: ERROR 09:50:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:50:36 localhost openstack_network_exporter[241732]: ERROR 09:50:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:50:36 localhost openstack_network_exporter[241732]: ERROR 09:50:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:50:36 localhost openstack_network_exporter[241732]:
Nov 23 04:50:36 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:36 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:36 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)...
Nov 23 04:50:36 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:36 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:36 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain
Nov 23 04:50:36 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:36 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:36 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:36 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:37 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e83 _set_new_cache_sizes cache_size:1020054440 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.
Nov 23 04:50:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.
Nov 23 04:50:37 localhost systemd[1]: tmp-crun.8v6dLn.mount: Deactivated successfully.
Nov 23 04:50:37 localhost podman[295118]: 2025-11-23 09:50:37.797543022 +0000 UTC m=+0.097403189 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:50:37 localhost podman[295118]: 2025-11-23 09:50:37.810286635 +0000 UTC m=+0.110146822 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:50:37 localhost podman[295119]: 2025-11-23 09:50:37.823116169 +0000 UTC m=+0.120671375 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Nov 23 04:50:37 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:50:37 localhost podman[295119]: 2025-11-23 09:50:37.925556453 +0000 UTC m=+0.223111619 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118) Nov 23 04:50:37 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:50:38 localhost ceph-mon[293353]: Reconfiguring crash.np0005532583 (monmap changed)... 
Nov 23 04:50:38 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain
Nov 23 04:50:38 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:38 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:38 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:38 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:38 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:38 localhost podman[295197]:
Nov 23 04:50:38 localhost podman[295197]: 2025-11-23 09:50:38.224495673 +0000 UTC m=+0.082670705 container create 90e5d18063123c6d7ace21a5979e70dead30ec2e5e96aeedef0e7d485259160d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_noether, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, version=7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, release=553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:50:38 localhost systemd[1]: Started libpod-conmon-90e5d18063123c6d7ace21a5979e70dead30ec2e5e96aeedef0e7d485259160d.scope.
Nov 23 04:50:38 localhost systemd[1]: Started libcrun container.
Nov 23 04:50:38 localhost podman[295197]: 2025-11-23 09:50:38.189329011 +0000 UTC m=+0.047504053 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:38 localhost podman[295197]: 2025-11-23 09:50:38.294056504 +0000 UTC m=+0.152231536 container init 90e5d18063123c6d7ace21a5979e70dead30ec2e5e96aeedef0e7d485259160d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_noether, build-date=2025-09-24T08:57:55, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_BRANCH=main) Nov 23 04:50:38 localhost podman[295197]: 2025-11-23 09:50:38.303868206 +0000 UTC m=+0.162043228 container start 90e5d18063123c6d7ace21a5979e70dead30ec2e5e96aeedef0e7d485259160d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_noether, ceph=True, architecture=x86_64, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, release=553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_CLEAN=True, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12) Nov 23 04:50:38 localhost podman[295197]: 2025-11-23 09:50:38.304157305 +0000 UTC m=+0.162332337 container attach 90e5d18063123c6d7ace21a5979e70dead30ec2e5e96aeedef0e7d485259160d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_noether, version=7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red 
Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, release=553, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-09-24T08:57:55) Nov 23 04:50:38 localhost busy_noether[295211]: 167 167 Nov 23 04:50:38 localhost systemd[1]: libpod-90e5d18063123c6d7ace21a5979e70dead30ec2e5e96aeedef0e7d485259160d.scope: Deactivated successfully. Nov 23 04:50:38 localhost podman[295197]: 2025-11-23 09:50:38.308576601 +0000 UTC m=+0.166751633 container died 90e5d18063123c6d7ace21a5979e70dead30ec2e5e96aeedef0e7d485259160d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_noether, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, ceph=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, version=7, release=553, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph) Nov 23 04:50:38 localhost podman[295216]: 2025-11-23 09:50:38.408769325 +0000 UTC m=+0.086309438 container remove 90e5d18063123c6d7ace21a5979e70dead30ec2e5e96aeedef0e7d485259160d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_noether, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, release=553, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, distribution-scope=public, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.expose-services=, CEPH_POINT_RELEASE=) Nov 23 04:50:38 localhost systemd[1]: libpod-conmon-90e5d18063123c6d7ace21a5979e70dead30ec2e5e96aeedef0e7d485259160d.scope: Deactivated successfully. Nov 23 04:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
Nov 23 04:50:38 localhost podman[295245]: 2025-11-23 09:50:38.662318645 +0000 UTC m=+0.086409117 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:50:38 localhost podman[295245]: 2025-11-23 09:50:38.677470873 +0000 UTC m=+0.101561335 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:50:38 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:50:39 localhost ceph-mon[293353]: Reconfiguring crash.np0005532584 (monmap changed)... 
Nov 23 04:50:39 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:50:39 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:39 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:39 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:50:39 localhost podman[295306]:
Nov 23 04:50:39 localhost podman[295306]: 2025-11-23 09:50:39.169709333 +0000 UTC m=+0.076500446 container create 6983d425b5ca366e88bddc1fc61ca43fe920728fd0216c8c2d8928fe9ba8d3c4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_shamir, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=553)
Nov 23 04:50:39 localhost systemd[1]: Started libpod-conmon-6983d425b5ca366e88bddc1fc61ca43fe920728fd0216c8c2d8928fe9ba8d3c4.scope.
Nov 23 04:50:39 localhost systemd[1]: Started libcrun container.
Nov 23 04:50:39 localhost podman[295306]: 2025-11-23 09:50:39.232770413 +0000 UTC m=+0.139561526 container init 6983d425b5ca366e88bddc1fc61ca43fe920728fd0216c8c2d8928fe9ba8d3c4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_shamir, maintainer=Guillaume Abrioux , architecture=x86_64, build-date=2025-09-24T08:57:55, release=553, vcs-type=git, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:50:39 localhost podman[295306]: 2025-11-23 09:50:39.138955872 +0000 UTC m=+0.045747045 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:39 localhost podman[295306]: 2025-11-23 09:50:39.241680109 +0000 UTC m=+0.148471222 container start 6983d425b5ca366e88bddc1fc61ca43fe920728fd0216c8c2d8928fe9ba8d3c4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_shamir, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, release=553, architecture=x86_64) Nov 23 04:50:39 localhost podman[295306]: 2025-11-23 09:50:39.241928766 +0000 UTC m=+0.148719879 container attach 6983d425b5ca366e88bddc1fc61ca43fe920728fd0216c8c2d8928fe9ba8d3c4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_shamir, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=) Nov 23 04:50:39 localhost pedantic_shamir[295321]: 167 167 Nov 23 04:50:39 localhost systemd[1]: libpod-6983d425b5ca366e88bddc1fc61ca43fe920728fd0216c8c2d8928fe9ba8d3c4.scope: Deactivated successfully. Nov 23 04:50:39 localhost podman[295306]: 2025-11-23 09:50:39.245779805 +0000 UTC m=+0.152570928 container died 6983d425b5ca366e88bddc1fc61ca43fe920728fd0216c8c2d8928fe9ba8d3c4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_shamir, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, ceph=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, name=rhceph, release=553, io.openshift.tags=rhceph ceph, version=7) Nov 23 04:50:39 localhost podman[295326]: 2025-11-23 09:50:39.343313511 +0000 UTC m=+0.084443372 container remove 6983d425b5ca366e88bddc1fc61ca43fe920728fd0216c8c2d8928fe9ba8d3c4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_shamir, ceph=True, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=) Nov 23 04:50:39 localhost systemd[1]: libpod-conmon-6983d425b5ca366e88bddc1fc61ca43fe920728fd0216c8c2d8928fe9ba8d3c4.scope: Deactivated successfully. Nov 23 04:50:39 localhost systemd[1]: var-lib-containers-storage-overlay-d47cba38045fabfe67e43221d4a4819780818e91412212ef2a570be31ffd0160-merged.mount: Deactivated successfully. 
Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.853720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439853782, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 459, "num_deletes": 254, "total_data_size": 475331, "memory_usage": 485432, "flush_reason": "Manual Compaction"}
Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439858965, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 290260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11320, "largest_seqno": 11774, "table_properties": {"data_size": 287465, "index_size": 842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6400, "raw_average_key_size": 18, "raw_value_size": 281734, "raw_average_value_size": 795, "num_data_blocks": 33, "num_entries": 354, "num_filter_entries": 354, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891435, "oldest_key_time": 1763891435, "file_creation_time": 1763891439, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 5327 microseconds, and 1999 cpu microseconds.
Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.859052) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 290260 bytes OK Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.859074) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.860868) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.860891) EVENT_LOG_v1 {"time_micros": 1763891439860885, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.860912) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 472392, prev total WAL file size 472392, number of live WAL files 2. Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.861761) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303130' seq:72057594037927935, type:22 .. '6B760031323635' seq:0, type:0; will stop at (end) Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(283KB)], [15(14MB)] Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439861813, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 15417626, "oldest_snapshot_seqno": -1} Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 9940 keys, 14425958 bytes, temperature: kUnknown Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439929731, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 14425958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14367805, "index_size": 32196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24901, "raw_key_size": 268394, "raw_average_key_size": 27, "raw_value_size": 14195929, "raw_average_value_size": 1428, "num_data_blocks": 1215, "num_entries": 9940, "num_filter_entries": 9940, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891439, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.930862) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 14425958 bytes Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.933177) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.8 rd, 212.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.4 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(102.8) write-amplify(49.7) OK, records in: 10469, records dropped: 529 output_compression: NoCompression Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.933289) EVENT_LOG_v1 {"time_micros": 1763891439933198, "job": 6, "event": "compaction_finished", "compaction_time_micros": 67988, "compaction_time_cpu_micros": 40765, "output_level": 6, "num_output_files": 1, "total_output_size": 14425958, "num_input_records": 10469, "num_output_records": 9940, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439933894, "job": 6, "event": "table_file_deletion", "file_number": 17} Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439936588, "job": 6, "event": "table_file_deletion", "file_number": 15} Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.861665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.936784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.936792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.936795) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.936798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:50:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:50:39.936800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:50:40 localhost ceph-mon[293353]: Reconfiguring osd.2 (monmap changed)... Nov 23 04:50:40 localhost ceph-mon[293353]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:50:40 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:40 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:40 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:40 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:40 localhost ceph-mon[293353]: Reconfiguring osd.5 (monmap changed)... Nov 23 04:50:40 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:50:40 localhost ceph-mon[293353]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:50:40 localhost podman[295402]: Nov 23 04:50:40 localhost podman[295402]: 2025-11-23 09:50:40.2035376 +0000 UTC m=+0.074577487 container create 4255cf285159d7784c61a875e295a1bd5d500eb3afcfaed4a68fb72c997b6d54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_lederberg, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, name=rhceph, architecture=x86_64, GIT_CLEAN=True) Nov 23 04:50:40 localhost systemd[1]: Started libpod-conmon-4255cf285159d7784c61a875e295a1bd5d500eb3afcfaed4a68fb72c997b6d54.scope. Nov 23 04:50:40 localhost systemd[1]: Started libcrun container. 
Nov 23 04:50:40 localhost podman[295402]: 2025-11-23 09:50:40.170847579 +0000 UTC m=+0.041887546 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:40 localhost podman[295402]: 2025-11-23 09:50:40.281740688 +0000 UTC m=+0.152780575 container init 4255cf285159d7784c61a875e295a1bd5d500eb3afcfaed4a68fb72c997b6d54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_lederberg, build-date=2025-09-24T08:57:55, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:50:40 localhost objective_lederberg[295417]: 167 167 Nov 23 04:50:40 localhost systemd[1]: libpod-4255cf285159d7784c61a875e295a1bd5d500eb3afcfaed4a68fb72c997b6d54.scope: Deactivated successfully. Nov 23 04:50:40 localhost podman[295402]: 2025-11-23 09:50:40.292178551 +0000 UTC m=+0.163218438 container start 4255cf285159d7784c61a875e295a1bd5d500eb3afcfaed4a68fb72c997b6d54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_lederberg, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, RELEASE=main, release=553, GIT_CLEAN=True, io.buildah.version=1.33.12, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux ) Nov 23 04:50:40 localhost podman[295402]: 2025-11-23 09:50:40.29246121 +0000 UTC m=+0.163501137 container attach 4255cf285159d7784c61a875e295a1bd5d500eb3afcfaed4a68fb72c997b6d54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_lederberg, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.33.12, version=7, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, 
maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:50:40 localhost podman[295402]: 2025-11-23 09:50:40.29476775 +0000 UTC m=+0.165807647 container died 4255cf285159d7784c61a875e295a1bd5d500eb3afcfaed4a68fb72c997b6d54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_lederberg, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, maintainer=Guillaume Abrioux , vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553) Nov 23 04:50:40 localhost podman[295422]: 2025-11-23 09:50:40.388952843 +0000 UTC m=+0.081156590 container remove 4255cf285159d7784c61a875e295a1bd5d500eb3afcfaed4a68fb72c997b6d54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_lederberg, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, version=7, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:50:40 localhost systemd[1]: libpod-conmon-4255cf285159d7784c61a875e295a1bd5d500eb3afcfaed4a68fb72c997b6d54.scope: Deactivated successfully. Nov 23 04:50:40 localhost systemd[1]: var-lib-containers-storage-overlay-3b9e33d91bbd7b00be01d09a4fb7faf6db45bfa4b832ce8d7ca30dd0fc138988-merged.mount: Deactivated successfully. 
Nov 23 04:50:41 localhost podman[295500]: Nov 23 04:50:41 localhost podman[295500]: 2025-11-23 09:50:41.213045544 +0000 UTC m=+0.079106407 container create 03184a6648cdd3733fc71fe141d7ba1bc038c792a4f9dec5063292d2a5150b7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_darwin, version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, distribution-scope=public, release=553, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:50:41 localhost systemd[1]: Started libpod-conmon-03184a6648cdd3733fc71fe141d7ba1bc038c792a4f9dec5063292d2a5150b7c.scope. Nov 23 04:50:41 localhost systemd[1]: Started libcrun container. Nov 23 04:50:41 localhost podman[295500]: 2025-11-23 09:50:41.182371826 +0000 UTC m=+0.048432719 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:41 localhost podman[295500]: 2025-11-23 09:50:41.286025401 +0000 UTC m=+0.152086254 container init 03184a6648cdd3733fc71fe141d7ba1bc038c792a4f9dec5063292d2a5150b7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_darwin, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=553, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, distribution-scope=public) Nov 23 04:50:41 localhost podman[295500]: 2025-11-23 09:50:41.297020151 +0000 UTC m=+0.163081004 container start 03184a6648cdd3733fc71fe141d7ba1bc038c792a4f9dec5063292d2a5150b7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_darwin, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 
in a fully featured and supported base image., release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, build-date=2025-09-24T08:57:55) Nov 23 04:50:41 localhost podman[295500]: 2025-11-23 09:50:41.297266568 +0000 UTC m=+0.163327461 container attach 03184a6648cdd3733fc71fe141d7ba1bc038c792a4f9dec5063292d2a5150b7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_darwin, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, version=7, RELEASE=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main) Nov 23 04:50:41 localhost unruffled_darwin[295515]: 167 167 Nov 23 04:50:41 localhost systemd[1]: libpod-03184a6648cdd3733fc71fe141d7ba1bc038c792a4f9dec5063292d2a5150b7c.scope: Deactivated successfully. 
Nov 23 04:50:41 localhost podman[295500]: 2025-11-23 09:50:41.301811969 +0000 UTC m=+0.167872882 container died 03184a6648cdd3733fc71fe141d7ba1bc038c792a4f9dec5063292d2a5150b7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_darwin, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=553, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12) Nov 23 04:50:41 localhost podman[295520]: 2025-11-23 09:50:41.394079572 +0000 UTC m=+0.078448026 container remove 03184a6648cdd3733fc71fe141d7ba1bc038c792a4f9dec5063292d2a5150b7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_darwin, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.33.12, GIT_BRANCH=main, architecture=x86_64, release=553, vcs-type=git, build-date=2025-09-24T08:57:55, version=7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, name=rhceph) Nov 23 04:50:41 localhost systemd[1]: libpod-conmon-03184a6648cdd3733fc71fe141d7ba1bc038c792a4f9dec5063292d2a5150b7c.scope: Deactivated successfully. Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:41 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... 
Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:50:41 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz'
Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:41 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:41 localhost systemd[1]: tmp-crun.HEFhUL.mount: Deactivated successfully.
Nov 23 04:50:41 localhost systemd[1]: var-lib-containers-storage-overlay-b879b9812a26410ac4f11e33b46bf92353d337faecc1167d3962d15a4f0618a4-merged.mount: Deactivated successfully.
Nov 23 04:50:42 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e83 _set_new_cache_sizes cache_size:1020054726 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:42 localhost podman[295588]:
Nov 23 04:50:42 localhost podman[295588]: 2025-11-23 09:50:42.130104941 +0000 UTC m=+0.073902897 container create 92a5fdb932504f38863fbe16ac0c02bf3a469fd253e4e75223139d2795504847 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_colden, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.33.12, release=553, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:50:42 localhost systemd[1]: Started libpod-conmon-92a5fdb932504f38863fbe16ac0c02bf3a469fd253e4e75223139d2795504847.scope.
Nov 23 04:50:42 localhost systemd[1]: Started libcrun container.
Nov 23 04:50:42 localhost podman[295588]: 2025-11-23 09:50:42.192950334 +0000 UTC m=+0.136748290 container init 92a5fdb932504f38863fbe16ac0c02bf3a469fd253e4e75223139d2795504847 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_colden, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., RELEASE=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container) Nov 23 04:50:42 localhost podman[295588]: 2025-11-23 09:50:42.099461223 +0000 UTC m=+0.043259229 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:42 localhost podman[295588]: 2025-11-23 09:50:42.204053547 +0000 UTC m=+0.147851503 container start 92a5fdb932504f38863fbe16ac0c02bf3a469fd253e4e75223139d2795504847 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_colden, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , release=553, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_CLEAN=True) Nov 23 04:50:42 localhost podman[295588]: 2025-11-23 09:50:42.204285334 +0000 UTC m=+0.148083290 container attach 92a5fdb932504f38863fbe16ac0c02bf3a469fd253e4e75223139d2795504847 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_colden, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64) Nov 23 04:50:42 localhost intelligent_colden[295603]: 167 167 Nov 23 04:50:42 localhost systemd[1]: libpod-92a5fdb932504f38863fbe16ac0c02bf3a469fd253e4e75223139d2795504847.scope: Deactivated successfully. Nov 23 04:50:42 localhost podman[295588]: 2025-11-23 09:50:42.207423461 +0000 UTC m=+0.151221427 container died 92a5fdb932504f38863fbe16ac0c02bf3a469fd253e4e75223139d2795504847 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_colden, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, release=553, version=7, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7) Nov 23 04:50:42 localhost podman[295608]: 2025-11-23 09:50:42.30212157 +0000 UTC m=+0.082716070 container remove 92a5fdb932504f38863fbe16ac0c02bf3a469fd253e4e75223139d2795504847 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_colden, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55) Nov 23 04:50:42 localhost systemd[1]: libpod-conmon-92a5fdb932504f38863fbe16ac0c02bf3a469fd253e4e75223139d2795504847.scope: Deactivated successfully. Nov 23 04:50:42 localhost ceph-mon[293353]: Saving service mon spec with placement label:mon Nov 23 04:50:42 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... 
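The podman entries around this point repeat one pattern: a throw-away rhceph-7-rhel9 container is created, initialized, started and attached, prints "167 167" and exits, and is removed, all within a few hundred milliseconds. This is consistent with cephadm probing the image for the ceph user's uid and gid (167:167 on RHEL-based Ceph images). A minimal sketch, assuming a local podman 4.x CLI (flag names are believed current but are not taken from the log), of listing those lifecycle events after the fact:

    # Sketch: list recent lifecycle events for the short-lived cephadm helper containers.
    # Assumes podman 4.x; --since/--stream/--filter are standard 'podman events' flags.
    import subprocess

    proc = subprocess.run(
        ["podman", "events", "--since", "10m", "--stream=false",
         "--filter", "image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest"],
        check=True, capture_output=True, text=True,
    )
    for line in proc.stdout.splitlines():
        # One line per event: create, init, start, attach, died, remove.
        print(line)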
Nov 23 04:50:42 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:50:42 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:42 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:42 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:50:42 localhost systemd[1]: tmp-crun.uBQ2ZW.mount: Deactivated successfully. Nov 23 04:50:42 localhost systemd[1]: var-lib-containers-storage-overlay-1378d908465672ee9446e645c939eb6f8873b1cd61ad56ba0e810f6005e05524-merged.mount: Deactivated successfully. Nov 23 04:50:43 localhost podman[295680]: Nov 23 04:50:43 localhost podman[295680]: 2025-11-23 09:50:43.096822073 +0000 UTC m=+0.081432149 container create 87d78ece19337bb65e2266aeb1278f8841b81e58d7f56ce22187d6b33ff093ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_robinson, RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, version=7, maintainer=Guillaume Abrioux ) Nov 23 04:50:43 localhost systemd[1]: Started libpod-conmon-87d78ece19337bb65e2266aeb1278f8841b81e58d7f56ce22187d6b33ff093ab.scope. Nov 23 04:50:43 localhost systemd[1]: Started libcrun container. 
Nov 23 04:50:43 localhost podman[295680]: 2025-11-23 09:50:43.063202253 +0000 UTC m=+0.047812359 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:50:43 localhost podman[295680]: 2025-11-23 09:50:43.167397685 +0000 UTC m=+0.152007761 container init 87d78ece19337bb65e2266aeb1278f8841b81e58d7f56ce22187d6b33ff093ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_robinson, GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, version=7, vcs-type=git, io.openshift.tags=rhceph ceph) Nov 23 04:50:43 localhost podman[295680]: 2025-11-23 09:50:43.180169519 +0000 UTC m=+0.164779595 container start 87d78ece19337bb65e2266aeb1278f8841b81e58d7f56ce22187d6b33ff093ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_robinson, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, release=553, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-type=git, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55) Nov 23 04:50:43 localhost podman[295680]: 2025-11-23 09:50:43.18050052 +0000 UTC m=+0.165110596 container attach 87d78ece19337bb65e2266aeb1278f8841b81e58d7f56ce22187d6b33ff093ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_robinson, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, GIT_BRANCH=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vcs-type=git, ceph=True, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:50:43 localhost gracious_robinson[295695]: 167 167 Nov 23 04:50:43 localhost systemd[1]: libpod-87d78ece19337bb65e2266aeb1278f8841b81e58d7f56ce22187d6b33ff093ab.scope: Deactivated successfully. Nov 23 04:50:43 localhost podman[295680]: 2025-11-23 09:50:43.184523504 +0000 UTC m=+0.169133580 container died 87d78ece19337bb65e2266aeb1278f8841b81e58d7f56ce22187d6b33ff093ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_robinson, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, distribution-scope=public, release=553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:50:43 localhost podman[295700]: 2025-11-23 09:50:43.278557322 +0000 UTC m=+0.082744160 container remove 87d78ece19337bb65e2266aeb1278f8841b81e58d7f56ce22187d6b33ff093ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_robinson, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64) Nov 23 04:50:43 localhost systemd[1]: libpod-conmon-87d78ece19337bb65e2266aeb1278f8841b81e58d7f56ce22187d6b33ff093ab.scope: Deactivated successfully. Nov 23 04:50:43 localhost ceph-mon[293353]: Reconfiguring mon.np0005532584 (monmap changed)... 
Nov 23 04:50:43 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:50:43 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:43 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:43 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:50:43 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:50:43 localhost systemd[1]: tmp-crun.VVCwJ7.mount: Deactivated successfully. Nov 23 04:50:43 localhost systemd[1]: var-lib-containers-storage-overlay-be16f064323c1fca1d8f53ef7f6c6d96e308da332d77f699cd45c21af6a48922-merged.mount: Deactivated successfully. Nov 23 04:50:44 localhost ceph-mon[293353]: Reconfiguring crash.np0005532585 (monmap changed)... Nov 23 04:50:44 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:50:44 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:44 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:44 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 23 04:50:45 localhost ceph-mon[293353]: Reconfiguring osd.0 (monmap changed)... Nov 23 04:50:45 localhost ceph-mon[293353]: Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:50:45 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:45 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:45 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:45 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:45 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 23 04:50:46 localhost ceph-mon[293353]: Reconfiguring osd.3 (monmap changed)... 
Nov 23 04:50:46 localhost ceph-mon[293353]: Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:50:46 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:46 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:46 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:46 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:46 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:50:46 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:50:47 localhost podman[239764]: time="2025-11-23T09:50:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:50:47 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:50:47 localhost podman[239764]: @ - - [23/Nov/2025:09:50:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:50:47 localhost podman[239764]: @ - - [23/Nov/2025:09:50:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18218 "" "Go-http-client/1.1" Nov 23 04:50:47 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... Nov 23 04:50:47 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:50:47 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:47 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:47 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:50:47 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:50:48 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... Nov 23 04:50:48 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:50:48 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:48 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:48 localhost ceph-mon[293353]: Reconfiguring mon.np0005532585 (monmap changed)... 
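The podman[239764] access-log lines above come from the podman system service answering libpod REST calls over its unix socket (the Go-http-client user agent is most likely the prometheus-podman-exporter container that appears later in this log, which is configured with CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal sketch, assuming that same socket path, of issuing the containers/json query seen in the log from Python:

    # Sketch: mirror the "GET /v4.9.3/libpod/containers/json?all=true" request from the log.
    # The socket path is an assumption taken from the podman_exporter config further below.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path: str):
            super().__init__("localhost")   # host is unused; only the socket path matters
            self._path = path

        def connect(self) -> None:
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(len(containers), "containers")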
Nov 23 04:50:48 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:50:48 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:50:48 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:48 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:48 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 23 04:50:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:50:49 localhost podman[295716]: 2025-11-23 09:50:49.904076128 +0000 UTC m=+0.089036874 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git) Nov 23 04:50:49 localhost podman[295716]: 2025-11-23 09:50:49.924549091 +0000 UTC m=+0.109509867 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41) Nov 23 04:50:49 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:50:49 localhost ceph-mon[293353]: Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:50:50 localhost systemd[1]: session-65.scope: Deactivated successfully. Nov 23 04:50:50 localhost systemd[1]: session-65.scope: Consumed 1.644s CPU time. Nov 23 04:50:50 localhost systemd-logind[760]: Session 65 logged out. Waiting for processes to exit. Nov 23 04:50:50 localhost systemd-logind[760]: Removed session 65. Nov 23 04:50:50 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:50 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:50 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:50 localhost ceph-mon[293353]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' Nov 23 04:50:50 localhost ceph-mon[293353]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 23 04:50:51 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e84 e84: 6 total, 6 up, 6 in Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr handle_mgr_map Activating! Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr handle_mgr_map I am now activating Nov 23 04:50:51 localhost ceph-mgr[286671]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: balancer Nov 23 04:50:51 localhost ceph-mgr[286671]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: [balancer INFO root] Starting Nov 23 04:50:51 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_09:50:51 Nov 23 04:50:51 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 04:50:51 localhost ceph-mgr[286671]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Nov 23 04:50:51 localhost systemd-logind[760]: Session 67 logged out. Waiting for processes to exit. Nov 23 04:50:51 localhost systemd[1]: session-67.scope: Deactivated successfully. Nov 23 04:50:51 localhost systemd[1]: session-67.scope: Consumed 10.912s CPU time. Nov 23 04:50:51 localhost systemd-logind[760]: Removed session 67. 
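The balancer lines above show the newly active mgr immediately building an upmap optimization plan and then backing off: right after the failover all PGs are still reported unknown (the 1.000000 is the unknown fraction, far above the 0.050000 max-misplaced threshold), so the plan is retried later. A minimal sketch, assuming the ceph CLI with admin credentials, of checking the same state by hand:

    # Sketch: inspect balancer and PG state after a mgr failover.
    # Assumes /usr/bin/ceph and client.admin credentials on this host.
    import json
    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    print(ceph("balancer", "status"))                         # mode, plans, last optimize result
    print(json.loads(ceph("pg", "stat", "--format", "json"))) # PG state counts, incl. unknown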
Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: cephadm Nov 23 04:50:51 localhost ceph-mgr[286671]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: crash Nov 23 04:50:51 localhost ceph-mgr[286671]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: devicehealth Nov 23 04:50:51 localhost ceph-mgr[286671]: [devicehealth INFO root] Starting Nov 23 04:50:51 localhost ceph-mgr[286671]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: iostat Nov 23 04:50:51 localhost ceph-mgr[286671]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: nfs Nov 23 04:50:51 localhost ceph-mgr[286671]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: orchestrator Nov 23 04:50:51 localhost ceph-mgr[286671]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: pg_autoscaler Nov 23 04:50:51 localhost ceph-mgr[286671]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: progress Nov 23 04:50:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: [progress INFO root] Loading... Nov 23 04:50:51 localhost ceph-mgr[286671]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Nov 23 04:50:51 localhost ceph-mgr[286671]: [progress INFO root] Loaded OSDMap, ready. 
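Each "mgr load Constructed class from module" line above is the newly active mgr instantiating one enabled module (balancer, cephadm, crash, devicehealth, iostat, nfs, orchestrator, pg_autoscaler, progress, and so on). A minimal sketch, assuming the ceph CLI, of listing the enabled module set, which should match the modules constructed here:

    # Sketch: list enabled mgr modules; expected to match the construction messages above.
    # Assumes /usr/bin/ceph and client.admin credentials on this host.
    import json
    import subprocess

    out = subprocess.run(["ceph", "mgr", "module", "ls", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    modules = json.loads(out)
    print(sorted(modules.get("enabled_modules", [])))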
Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] recovery thread starting Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] starting setup Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: rbd_support Nov 23 04:50:51 localhost ceph-mgr[286671]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: restful Nov 23 04:50:51 localhost ceph-mgr[286671]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: status Nov 23 04:50:51 localhost ceph-mgr[286671]: [restful INFO root] server_addr: :: server_port: 8003 Nov 23 04:50:51 localhost ceph-mgr[286671]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: telemetry Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 04:50:51 localhost ceph-mgr[286671]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: [restful WARNING root] server not running: no certificate configured Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] PerfHandler: starting Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_task_task: vms, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 04:50:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 04:50:51 localhost ceph-mgr[286671]: mgr load Constructed class from module: volumes Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_task_task: volumes, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_task_task: images, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_task_task: backups, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] TaskHandler: starting Nov 23 04:50:51 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:50:51.252+0000 7ff9c2510640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 
localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:50:51.253+0000 7ff9c2510640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:50:51.253+0000 7ff9c2510640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:50:51.253+0000 7ff9c2510640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:50:51.253+0000 7ff9c2510640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:50:51.258+0000 7ff9bccc5640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:50:51.258+0000 7ff9bccc5640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:50:51.258+0000 7ff9bccc5640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:50:51.258+0000 7ff9bccc5640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:50:51 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:50:51.258+0000 7ff9bccc5640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Nov 23 04:50:51 localhost ceph-mgr[286671]: [rbd_support INFO root] setup complete Nov 23 04:50:51 localhost sshd[295877]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:50:51 localhost systemd-logind[760]: New session 68 of user ceph-admin. Nov 23 04:50:51 localhost systemd[1]: Started Session 68 of User ceph-admin. 
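The repeated "error registering admin socket command: (17) File exists" messages above are emitted while mgr modules start up; error code 17 is the standard errno EEXIST, and this usually just means a second in-process librados client tried to register an admin-socket command name that was already registered, so the messages are noisy rather than fatal. A quick check of the errno mapping (plain Python, no Ceph involved):

    # errno 17 in the messages above is the standard EEXIST ("File exists").
    import errno
    import os

    assert errno.EEXIST == 17
    print(os.strerror(errno.EEXIST))   # -> "File exists"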
Nov 23 04:50:51 localhost ceph-mon[293353]: Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:50:51 localhost ceph-mon[293353]: from='client.? 172.18.0.200:0/3363667457' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:50:51 localhost ceph-mon[293353]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:50:51 localhost ceph-mon[293353]: Activating manager daemon np0005532584.naxwxy Nov 23 04:50:51 localhost ceph-mon[293353]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 23 04:50:51 localhost ceph-mon[293353]: Manager daemon np0005532584.naxwxy is now available Nov 23 04:50:51 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/mirror_snapshot_schedule"} : dispatch Nov 23 04:50:51 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/trash_purge_schedule"} : dispatch Nov 23 04:50:52 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:50:52 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:50:52 localhost systemd[1]: tmp-crun.OrNyOS.mount: Deactivated successfully. Nov 23 04:50:52 localhost ceph-mgr[286671]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:50:52] ENGINE Bus STARTING Nov 23 04:50:52 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:50:52] ENGINE Bus STARTING Nov 23 04:50:52 localhost podman[295989]: 2025-11-23 09:50:52.58003478 +0000 UTC m=+0.110456857 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container) Nov 23 04:50:52 localhost podman[295989]: 2025-11-23 09:50:52.680878088 +0000 UTC m=+0.211300175 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, 
Inc., ceph=True, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.33.12) Nov 23 04:50:52 localhost ceph-mgr[286671]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:50:52] ENGINE Serving on https://172.18.0.106:7150 Nov 23 04:50:52 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:50:52] ENGINE Serving on https://172.18.0.106:7150 Nov 23 04:50:52 localhost ceph-mgr[286671]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:50:52] ENGINE Client ('172.18.0.106', 58208) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 23 04:50:52 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:50:52] ENGINE Client ('172.18.0.106', 58208) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 23 04:50:52 localhost ceph-mgr[286671]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:50:52] ENGINE Serving on http://172.18.0.106:8765 Nov 23 04:50:52 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:50:52] ENGINE Serving on http://172.18.0.106:8765 Nov 23 04:50:52 localhost ceph-mgr[286671]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:50:52] ENGINE Bus STARTED Nov 23 04:50:52 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:50:52] ENGINE Bus STARTED Nov 23 04:50:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:50:53 localhost ceph-mgr[286671]: [devicehealth INFO root] Check health Nov 23 04:50:54 localhost ceph-mon[293353]: [23/Nov/2025:09:50:52] ENGINE Bus STARTING Nov 23 04:50:54 localhost ceph-mon[293353]: [23/Nov/2025:09:50:52] ENGINE Serving on https://172.18.0.106:7150 Nov 23 04:50:54 localhost ceph-mon[293353]: [23/Nov/2025:09:50:52] ENGINE Client ('172.18.0.106', 58208) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 23 04:50:54 localhost ceph-mon[293353]: [23/Nov/2025:09:50:52] ENGINE Serving on http://172.18.0.106:8765 Nov 23 04:50:54 localhost ceph-mon[293353]: [23/Nov/2025:09:50:52] ENGINE Bus STARTED Nov 23 04:50:54 localhost ceph-mon[293353]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Nov 23 04:50:54 localhost ceph-mon[293353]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Nov 23 04:50:54 localhost ceph-mon[293353]: Cluster is now healthy Nov 23 04:50:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' 
entity='mgr.np0005532584.naxwxy' Nov 23 04:50:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:50:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:50:54 localhost podman[296226]: 2025-11-23 09:50:54.494198657 +0000 UTC m=+0.095960038 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:50:54 localhost podman[296226]: 2025-11-23 09:50:54.507713165 +0000 UTC m=+0.109474496 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:50:54 localhost 
systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:50:54 localhost podman[296225]: 2025-11-23 09:50:54.600953688 +0000 UTC m=+0.201466001 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 04:50:54 localhost podman[296225]: 2025-11-23 09:50:54.608677067 +0000 UTC m=+0.209189420 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 04:50:54 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm INFO root] Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm INFO root] Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm INFO root] Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532582.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532582.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating 
np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:54 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd/host:np0005532582", "name": "osd_memory_target"} : dispatch Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 04:50:55 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 04:50:55 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 04:50:55 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:50:55 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:55 localhost ceph-mon[293353]: 
from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:50:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:50:55 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 04:50:55 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:50:55 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:55 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:55 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:55 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:55 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:55 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:56 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mgr.np0005532582.gilwrz 172.18.0.104:0/1880633502; not ready for session 
(expect reconnect) Nov 23 04:50:56 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:56 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:56 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:56 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:56 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:50:56 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating 
np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:56 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:50:57 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:50:57 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev aab4416f-7350-42cb-83a3-0956b9dc6322 (Updating node-proxy deployment (+5 -> 5)) Nov 23 04:50:57 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev aab4416f-7350-42cb-83a3-0956b9dc6322 (Updating node-proxy deployment (+5 -> 5)) Nov 23 04:50:57 localhost ceph-mgr[286671]: [progress INFO root] Completed event aab4416f-7350-42cb-83a3-0956b9dc6322 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Nov 23 04:50:57 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: 
from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:50:58 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005532582 (monmap changed)... Nov 23 04:50:58 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005532582 (monmap changed)... Nov 23 04:50:58 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005532582 on np0005532582.localdomain Nov 23 04:50:58 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005532582 on np0005532582.localdomain Nov 23 04:50:58 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:50:58 localhost ceph-mon[293353]: Reconfiguring mon.np0005532582 (monmap changed)... Nov 23 04:50:58 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532582 on np0005532582.localdomain Nov 23 04:50:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s Nov 23 04:50:59 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005532583 (monmap changed)... Nov 23 04:50:59 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005532583 (monmap changed)... Nov 23 04:50:59 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain Nov 23 04:50:59 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain Nov 23 04:50:59 localhost nova_compute[280939]: 2025-11-23 09:50:59.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:00 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:00 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:00 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:51:00 localhost ceph-mon[293353]: Reconfiguring mon.np0005532583 (monmap changed)... Nov 23 04:51:00 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain Nov 23 04:51:00 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:00 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:51:00 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:51:00 localhost systemd[1]: Stopping User Manager for UID 1003... 
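The osd_memory_target warnings in the 04:50:54-04:50:55 burst above show cephadm's memory autotuner arriving at a per-OSD target of roughly 836.6 MiB (877246668 bytes), which is below the hard minimum of 939524096 bytes (896 MiB); the value is therefore rejected and the existing per-host/per-OSD overrides are dropped instead, which is what the "config rm ... osd_memory_target" dispatches are doing. A minimal Python sketch of that accept/skip decision, using only the numbers from the log; how the target itself is derived (a fraction of host RAM split across the host's OSDs) is an assumption about cephadm's autotune and is not reproduced here:

    # Sketch only (not cephadm's code): why the ~836.6 MiB target is skipped.
    # Both constants below are taken verbatim from the log lines above.
    OSD_MEMORY_TARGET_MIN = 939_524_096   # 896 MiB, from the "below minimum" error
    computed_target = 877_246_668         # ~836.6 MiB, from the WRN line

    def apply_autotune(target: int, minimum: int = OSD_MEMORY_TARGET_MIN) -> bool:
        """Return True if the target may be applied, False if it must be skipped."""
        if target < minimum:
            # In the log, the skip is followed by "config rm ... osd_memory_target"
            # dispatches that clear any previously set override.
            return False
        return True

    if __name__ == "__main__":
        ok = apply_autotune(computed_target)
        print(f"{computed_target} bytes ~ {computed_target / 2**20:.1f} MiB -> "
              f"{'apply' if ok else 'skip (below minimum)'}")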
Nov 23 04:51:00 localhost systemd[291341]: Activating special unit Exit the Session... Nov 23 04:51:00 localhost systemd[291341]: Stopped target Main User Target. Nov 23 04:51:00 localhost systemd[291341]: Stopped target Basic System. Nov 23 04:51:00 localhost systemd[291341]: Stopped target Paths. Nov 23 04:51:00 localhost systemd[291341]: Stopped target Sockets. Nov 23 04:51:00 localhost systemd[291341]: Stopped target Timers. Nov 23 04:51:00 localhost systemd[291341]: Stopped Mark boot as successful after the user session has run 2 minutes. Nov 23 04:51:00 localhost systemd[291341]: Stopped Daily Cleanup of User's Temporary Directories. Nov 23 04:51:00 localhost systemd[291341]: Closed D-Bus User Message Bus Socket. Nov 23 04:51:00 localhost systemd[291341]: Stopped Create User's Volatile Files and Directories. Nov 23 04:51:00 localhost systemd[291341]: Removed slice User Application Slice. Nov 23 04:51:00 localhost systemd[291341]: Reached target Shutdown. Nov 23 04:51:00 localhost systemd[291341]: Finished Exit the Session. Nov 23 04:51:00 localhost systemd[291341]: Reached target Exit the Session. Nov 23 04:51:00 localhost systemd[1]: user@1003.service: Deactivated successfully. Nov 23 04:51:00 localhost systemd[1]: Stopped User Manager for UID 1003. Nov 23 04:51:00 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Nov 23 04:51:00 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Nov 23 04:51:00 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Nov 23 04:51:00 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Nov 23 04:51:00 localhost systemd[1]: Removed slice User Slice of UID 1003. Nov 23 04:51:00 localhost systemd[1]: user-1003.slice: Consumed 2.282s CPU time. Nov 23 04:51:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s Nov 23 04:51:01 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005532586 (monmap changed)... Nov 23 04:51:01 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005532586 (monmap changed)... 
Nov 23 04:51:01 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain Nov 23 04:51:01 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain Nov 23 04:51:01 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:01 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 23 04:51:01 localhost ceph-mon[293353]: Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:51:01 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:01 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:01 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:01 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:01 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:51:01 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:51:02 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:02 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 48bf3722-73b5-4d6a-91d2-9b2d3d31139a (Updating node-proxy deployment (+5 -> 5)) Nov 23 04:51:02 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 48bf3722-73b5-4d6a-91d2-9b2d3d31139a (Updating node-proxy deployment (+5 -> 5)) Nov 23 04:51:02 localhost ceph-mgr[286671]: [progress INFO root] Completed event 48bf3722-73b5-4d6a-91d2-9b2d3d31139a (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Nov 23 04:51:02 localhost ceph-mon[293353]: Reconfiguring mon.np0005532586 (monmap changed)... 
Nov 23 04:51:02 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain Nov 23 04:51:02 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:02 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:02 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:02 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:51:02 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:02 localhost nova_compute[280939]: 2025-11-23 09:51:02.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:02 localhost nova_compute[280939]: 2025-11-23 09:51:02.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:51:02 localhost nova_compute[280939]: 2025-11-23 09:51:02.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:51:02 localhost nova_compute[280939]: 2025-11-23 09:51:02.150 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:51:02 localhost nova_compute[280939]: 2025-11-23 09:51:02.151 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:02 localhost nova_compute[280939]: 2025-11-23 09:51:02.151 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:51:02 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.34343 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Nov 23 04:51:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Nov 23 04:51:03 localhost nova_compute[280939]: 2025-11-23 09:51:03.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:03 localhost nova_compute[280939]: 2025-11-23 09:51:03.169 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:03 localhost nova_compute[280939]: 2025-11-23 09:51:03.169 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:03 localhost nova_compute[280939]: 2025-11-23 09:51:03.170 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
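The "Started /usr/bin/podman healthcheck run <id>" units here, together with the health_status / exec_died / "Deactivated successfully" lines that follow each of them, are the transient systemd services podman uses to run container healthchecks periodically. A small sketch of running the same check by hand, assuming podman is on PATH and using the ovn_metadata_agent container name from the config_data shown just below; this is an illustration, not part of the deployment tooling:

    import subprocess

    def container_is_healthy(name: str) -> bool:
        """Run the container's configured healthcheck once; exit code 0 means healthy."""
        # `podman healthcheck run` executes the healthcheck defined for the container
        # (here the '/openstack/healthcheck' test from config_data) a single time.
        result = subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        name = "ovn_metadata_agent"  # container name taken from the log
        print(f"{name}: {'healthy' if container_is_healthy(name) else 'unhealthy'}")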
Nov 23 04:51:03 localhost podman[296980]: 2025-11-23 09:51:03.902451555 +0000 UTC m=+0.084355180 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 04:51:03 localhost podman[296980]: 2025-11-23 09:51:03.907653585 +0000 UTC m=+0.089557230 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 04:51:03 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.156 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.156 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.157 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.157 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.157 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:51:04 localhost ceph-mon[293353]: mon.np0005532584@4(peon) e9 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:51:04 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/131716085' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.603 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.826 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.827 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12348MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.828 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.829 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.908 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.909 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:51:04 localhost nova_compute[280939]: 2025-11-23 09:51:04.932 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:51:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 23 04:51:05 localhost nova_compute[280939]: 2025-11-23 09:51:05.328 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:51:05 localhost nova_compute[280939]: 2025-11-23 09:51:05.333 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:51:05 localhost nova_compute[280939]: 2025-11-23 09:51:05.350 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:51:05 localhost nova_compute[280939]: 2025-11-23 09:51:05.352 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:51:05 localhost nova_compute[280939]: 2025-11-23 09:51:05.353 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:51:05 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.27196 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005532582", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Nov 23 04:51:06 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:51:06 localhost nova_compute[280939]: 2025-11-23 09:51:06.350 280943 DEBUG oslo_service.periodic_task [None 
req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:06 localhost openstack_network_exporter[241732]: ERROR 09:51:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:51:06 localhost openstack_network_exporter[241732]: ERROR 09:51:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:51:06 localhost openstack_network_exporter[241732]: ERROR 09:51:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:51:06 localhost openstack_network_exporter[241732]: ERROR 09:51:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:51:06 localhost openstack_network_exporter[241732]: Nov 23 04:51:06 localhost openstack_network_exporter[241732]: ERROR 09:51:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:51:06 localhost openstack_network_exporter[241732]: Nov 23 04:51:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 23 04:51:07 localhost ceph-mon[293353]: mon.np0005532584@4(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:07 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:07 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.27206 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005532582"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Nov 23 04:51:07 localhost ceph-mgr[286671]: [cephadm INFO root] Remove daemons mon.np0005532582 Nov 23 04:51:07 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005532582 Nov 23 04:51:07 localhost ceph-mgr[286671]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005532582: new quorum should be ['np0005532583', 'np0005532586', 'np0005532585', 'np0005532584'] (from ['np0005532583', 'np0005532586', 'np0005532585', 'np0005532584']) Nov 23 04:51:07 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005532582: new quorum should be ['np0005532583', 'np0005532586', 'np0005532585', 'np0005532584'] (from ['np0005532583', 'np0005532586', 'np0005532585', 'np0005532584']) Nov 23 04:51:07 localhost ceph-mgr[286671]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005532582 from monmap... Nov 23 04:51:07 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removing monitor np0005532582 from monmap... 
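The "Safe to remove mon.np0005532582" message is cephadm verifying that monitor quorum stays viable once the daemon is removed; here np0005532582 is already absent from the quorum list, so the expected quorum is unchanged. A rough Python sketch of that kind of majority check, under a deliberately simplified model (this is not cephadm's actual implementation):

    def safe_to_remove(monmap: list[str], quorum: list[str], victim: str) -> bool:
        """After dropping `victim`, the surviving quorum members must still be a
        strict majority of the remaining monmap."""
        remaining_map = [m for m in monmap if m != victim]
        remaining_quorum = [m for m in quorum if m != victim]
        if not remaining_map:
            return False
        return len(remaining_quorum) > len(remaining_map) // 2

    if __name__ == "__main__":
        monmap = ["np0005532582", "np0005532583", "np0005532584",
                  "np0005532585", "np0005532586"]
        quorum = ["np0005532583", "np0005532586", "np0005532585",
                  "np0005532584"]          # quorum list from the log above
        print(safe_to_remove(monmap, quorum, "np0005532582"))  # True: 4 of 4 remain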
Nov 23 04:51:07 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005532582 from np0005532582.localdomain -- ports [] Nov 23 04:51:07 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005532582 from np0005532582.localdomain -- ports [] Nov 23 04:51:07 localhost ceph-mgr[286671]: client.34328 ms_handle_reset on v2:172.18.0.107:3300/0 Nov 23 04:51:07 localhost ceph-mon[293353]: mon.np0005532584@4(peon) e10 my rank is now 3 (was 4) Nov 23 04:51:07 localhost ceph-mgr[286671]: client.44194 ms_handle_reset on v2:172.18.0.103:3300/0 Nov 23 04:51:07 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:51:07 localhost ceph-mon[293353]: paxos.3).electionLogic(40) init, last seen epoch 40 Nov 23 04:51:07 localhost ceph-mon[293353]: mon.np0005532584@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:51:07 localhost ceph-mon[293353]: mon.np0005532584@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:51:08 localhost ceph-mon[293353]: mon.np0005532584@3(electing) e10 handle_auth_request failed to assign global_id Nov 23 04:51:08 localhost nova_compute[280939]: 2025-11-23 09:51:08.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:08 localhost ceph-mon[293353]: mon.np0005532584@3(electing) e10 handle_auth_request failed to assign global_id Nov 23 04:51:08 localhost ceph-mon[293353]: mon.np0005532584@3(electing) e10 handle_auth_request failed to assign global_id Nov 23 04:51:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:51:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:51:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 04:51:08 localhost podman[297042]: 2025-11-23 09:51:08.900475158 +0000 UTC m=+0.086862077 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true) Nov 23 04:51:08 localhost podman[297042]: 2025-11-23 09:51:08.914369408 +0000 UTC m=+0.100756287 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:51:08 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:51:09 localhost podman[297044]: 2025-11-23 09:51:09.006224037 +0000 UTC m=+0.185920359 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 04:51:09 localhost podman[297043]: 2025-11-23 09:51:09.065589453 +0000 UTC m=+0.248498975 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:51:09 localhost podman[297044]: 2025-11-23 09:51:09.073493757 +0000 UTC m=+0.253190059 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118) Nov 23 04:51:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 23 04:51:09 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:51:09 localhost podman[297043]: 2025-11-23 09:51:09.130292414 +0000 UTC m=+0.313201906 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:51:09 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
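The node_exporter container whose healthcheck just completed publishes metrics on host port 9100 with most collectors disabled and the systemd collector restricted to edpm/ovs/virt/rsyslog units (see its config_data above). A short sketch of pulling the systemd unit metrics locally, assuming the exporter answers on localhost:9100 as the port mapping suggests:

    from urllib.request import urlopen

    def fetch_systemd_metrics(url: str = "http://localhost:9100/metrics") -> list[str]:
        """Return the node_systemd_unit_state samples exposed by node_exporter."""
        with urlopen(url, timeout=5) as resp:
            text = resp.read().decode("utf-8", errors="replace")
        return [line for line in text.splitlines()
                if line.startswith("node_systemd_unit_state")]

    if __name__ == "__main__":
        for sample in fetch_systemd_metrics():
            print(sample)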
Nov 23 04:51:09 localhost ceph-mon[293353]: mon.np0005532584@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:51:09 localhost ceph-mon[293353]: mon.np0005532584@3(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:51:09 localhost ceph-mon[293353]: Remove daemons mon.np0005532582 Nov 23 04:51:09 localhost ceph-mon[293353]: Safe to remove mon.np0005532582: new quorum should be ['np0005532583', 'np0005532586', 'np0005532585', 'np0005532584'] (from ['np0005532583', 'np0005532586', 'np0005532585', 'np0005532584']) Nov 23 04:51:09 localhost ceph-mon[293353]: Removing monitor np0005532582 from monmap... Nov 23 04:51:09 localhost ceph-mon[293353]: Removing daemon mon.np0005532582 from np0005532582.localdomain -- ports [] Nov 23 04:51:09 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:51:09 localhost ceph-mon[293353]: mon.np0005532584 calling monitor election Nov 23 04:51:09 localhost ceph-mon[293353]: mon.np0005532585 calling monitor election Nov 23 04:51:09 localhost ceph-mon[293353]: mon.np0005532583 calling monitor election Nov 23 04:51:09 localhost ceph-mon[293353]: mon.np0005532583 is new leader, mons np0005532583,np0005532586,np0005532585,np0005532584 in quorum (ranks 0,1,2,3) Nov 23 04:51:09 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:51:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:51:09.733 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:51:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:51:09.733 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:51:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:51:09.735 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:51:09 localhost systemd[1]: tmp-crun.8sxGYQ.mount: Deactivated successfully. 
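The Acquiring lock "_check_child_processes" / acquired / "released" DEBUG triplet from ovn_metadata_agent above is emitted by oslo.concurrency's lock wrapper around ProcessMonitor._check_child_processes. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the wrapper logs those lines at DEBUG once logging is configured:

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)  # make the acquire/release lines visible

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes() -> None:
        """Body runs with the named in-process lock held, as ProcessMonitor does."""
        pass

    if __name__ == "__main__":
        check_child_processes()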
Nov 23 04:51:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.34349 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005532582.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch Nov 23 04:51:10 localhost ceph-mgr[286671]: [cephadm INFO root] Removed label mon from host np0005532582.localdomain Nov 23 04:51:10 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removed label mon from host np0005532582.localdomain Nov 23 04:51:10 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532582.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:10 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532582.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:10 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:10 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:10 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:10 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:10 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:10 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:10 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:10 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:11 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:11 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:11 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:11 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:11 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:11 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:11 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:11 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:11 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 
04:51:11 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:11 localhost ceph-mon[293353]: Removed label mon from host np0005532582.localdomain Nov 23 04:51:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:51:11 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:11 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:11 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:11 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:11 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:11 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.34386 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005532582.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch Nov 23 04:51:11 localhost ceph-mgr[286671]: [cephadm INFO root] Removed label mgr from host np0005532582.localdomain Nov 23 04:51:11 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005532582.localdomain Nov 23 04:51:12 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev cff487df-263f-41c3-bc2a-7004541230df (Updating mgr deployment (-1 -> 4)) Nov 23 04:51:12 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Removing daemon mgr.np0005532582.gilwrz from np0005532582.localdomain -- ports [8765] Nov 23 04:51:12 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removing daemon mgr.np0005532582.gilwrz from np0005532582.localdomain -- ports [8765] Nov 23 04:51:12 localhost ceph-mon[293353]: mon.np0005532584@3(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.575 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, 
no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:51:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:51:12 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:12 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:12 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:12 localhost ceph-mon[293353]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:12 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: Removed label mgr from host np0005532582.localdomain Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: 
from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:12 localhost ceph-mon[293353]: Removing daemon mgr.np0005532582.gilwrz from np0005532582.localdomain -- ports [8765] Nov 23 04:51:13 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.34357 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005532582.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch Nov 23 04:51:13 localhost ceph-mgr[286671]: [cephadm INFO root] Removed label _admin from host np0005532582.localdomain Nov 23 04:51:13 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005532582.localdomain Nov 23 04:51:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:14 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:14 localhost ceph-mon[293353]: Removed label _admin from host np0005532582.localdomain Nov 23 04:51:14 localhost ceph-mgr[286671]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.np0005532582.gilwrz Nov 23 04:51:14 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removing key for mgr.np0005532582.gilwrz Nov 23 04:51:14 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev cff487df-263f-41c3-bc2a-7004541230df (Updating mgr deployment (-1 -> 4)) Nov 23 04:51:14 localhost ceph-mgr[286671]: [progress INFO root] Completed event cff487df-263f-41c3-bc2a-7004541230df (Updating mgr deployment (-1 -> 4)) in 2 seconds Nov 23 04:51:14 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev a4e1b5f9-5816-4683-8d68-754cbd6a0cee (Updating node-proxy deployment (+5 -> 5)) Nov 23 04:51:14 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev a4e1b5f9-5816-4683-8d68-754cbd6a0cee (Updating node-proxy deployment (+5 -> 5)) Nov 23 04:51:14 localhost ceph-mgr[286671]: [progress INFO root] Completed event a4e1b5f9-5816-4683-8d68-754cbd6a0cee (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Nov 23 04:51:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:15 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "mgr.np0005532582.gilwrz"} : dispatch Nov 23 04:51:15 localhost ceph-mon[293353]: 
from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005532582.gilwrz"}]': finished Nov 23 04:51:15 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:15 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:15 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Removing np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:15 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removing np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:16 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Removing np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:51:16 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removing np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:51:16 localhost ceph-mon[293353]: Removing key for mgr.np0005532582.gilwrz Nov 23 04:51:16 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:16 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:16 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:51:16 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Removing np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:51:16 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removing np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:51:16 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:51:16 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev fef4f7bd-5cc3-45b5-8062-0b0fa8e08f19 (Updating node-proxy deployment (+5 -> 5)) Nov 23 04:51:16 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev fef4f7bd-5cc3-45b5-8062-0b0fa8e08f19 (Updating node-proxy deployment (+5 -> 5)) Nov 23 04:51:16 localhost ceph-mgr[286671]: [progress INFO root] Completed event fef4f7bd-5cc3-45b5-8062-0b0fa8e08f19 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Nov 23 04:51:16 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005532582 (monmap changed)... Nov 23 04:51:16 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005532582 (monmap changed)... 
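
The audit entries between 04:51:10 and 04:51:16 above trace host np0005532582.localdomain being withdrawn from management: its mon, mgr and _admin labels are removed, the standby mgr daemon and its auth key are deleted, and cephadm then removes ceph.conf and the admin keyring from that host. Below is a sketch of the equivalent ceph CLI calls driven from Python; it mirrors the JSON "prefix" commands visible in the log, but it is not the cephadm or operator source.

    # Sketch: CLI equivalents of the "orch host label rm" and "auth rm"
    # commands visible in the audit log above, run via subprocess.
    import subprocess

    HOST = "np0005532582.localdomain"

    def ceph(*args):
        """Run a ceph CLI command and return its stdout."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # Drop the scheduling labels so cephadm stops placing mon/mgr daemons there.
    for label in ("mon", "mgr", "_admin"):
        ceph("orch", "host", "label", "rm", HOST, label)

    # The mgr daemon on that host is removed by cephadm; its key goes too
    # (mirrors cmd={"prefix": "auth rm", "entity": "mgr.np0005532582.gilwrz"}).
    ceph("auth", "rm", "mgr.np0005532582.gilwrz")

    # With the _admin label gone, cephadm removes /etc/ceph/ceph.conf and the
    # client.admin keyring from the host on its next serve pass, as logged above.
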
Nov 23 04:51:16 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain Nov 23 04:51:16 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain Nov 23 04:51:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:17 localhost podman[239764]: time="2025-11-23T09:51:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:51:17 localhost ceph-mon[293353]: mon.np0005532584@3(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:17 localhost podman[239764]: @ - - [23/Nov/2025:09:51:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:51:17 localhost ceph-mon[293353]: Removing np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:17 localhost ceph-mon[293353]: Removing np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:51:17 localhost ceph-mon[293353]: Removing np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:51:17 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:17 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:17 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:17 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:17 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:51:17 localhost podman[239764]: @ - - [23/Nov/2025:09:51:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18215 "" "Go-http-client/1.1" Nov 23 04:51:18 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005532583 (monmap changed)... Nov 23 04:51:18 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005532583 (monmap changed)... Nov 23 04:51:18 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain Nov 23 04:51:18 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain Nov 23 04:51:18 localhost ceph-mon[293353]: Reconfiguring crash.np0005532582 (monmap changed)... 
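
The ceilometer_agent_compute DEBUG entries at 04:51:12 above show the compute polling agent skipping every libvirt-backed pollster (cpu, memory.usage, disk.device.*, network.*) because no instances exist on this node this cycle. A minimal sketch of that skip behaviour, using a simplified pollster/resource model; this illustrates the logged message, it is not ceilometer.polling.manager itself.

    # Sketch of the "Skip pollster X, no resources found this cycle" behaviour
    # from ceilometer's polling manager.  Discovery and pollsters are simplified
    # stand-ins; with no local instances the resource list is empty, so every
    # pollster short-circuits with the DEBUG line seen above.
    import logging

    LOG = logging.getLogger("ceilometer.polling.manager")
    logging.basicConfig(level=logging.DEBUG)

    POLLSTERS = ["cpu", "memory.usage", "disk.device.read.bytes",
                 "network.incoming.packets"]

    def discover_instances():
        # libvirt discovery would list instances on this compute; none exist here.
        return []

    def poll_and_notify():
        resources = discover_instances()
        for name in POLLSTERS:
            if not resources:
                LOG.debug("Skip pollster %s, no resources found this cycle", name)
                continue
            # ...otherwise each pollster would build samples from `resources`

    poll_and_notify()
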
Nov 23 04:51:18 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain Nov 23 04:51:18 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:18 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:18 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:51:18 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005532583.orhywt (monmap changed)... Nov 23 04:51:18 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005532583.orhywt (monmap changed)... Nov 23 04:51:18 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain Nov 23 04:51:18 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain Nov 23 04:51:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:19 localhost ceph-mon[293353]: Reconfiguring mon.np0005532583 (monmap changed)... Nov 23 04:51:19 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain Nov 23 04:51:19 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:19 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:19 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:51:19 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005532583 (monmap changed)... Nov 23 04:51:19 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005532583 (monmap changed)... Nov 23 04:51:19 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain Nov 23 04:51:19 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain Nov 23 04:51:20 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)... Nov 23 04:51:20 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain Nov 23 04:51:20 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:20 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:20 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:51:20 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005532584 (monmap changed)... Nov 23 04:51:20 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005532584 (monmap changed)... 
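
While reconfiguring each crash daemon, the mgr dispatches {"prefix": "auth get-or-create", "entity": "client.crash.<host>", "caps": ["mon", "profile crash", "mgr", "profile crash"]} to fetch or create that daemon's keyring, as shown in the dispatch lines above. A sketch of issuing the same JSON monitor command from Python with the rados binding; the conffile path is an assumption, and the CLI form would be `ceph auth get-or-create client.crash.<host> mon 'profile crash' mgr 'profile crash'`.

    # Sketch: dispatch the "auth get-or-create" monitor command seen in the log,
    # using the rados Python binding.  The conffile path is assumed.
    import json
    import rados

    cmd = {
        "prefix": "auth get-or-create",
        "entity": "client.crash.np0005532584.localdomain",
        "caps": ["mon", "profile crash", "mgr", "profile crash"],
    }

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed admin config
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(f"mon_command failed: {ret} {outs}")
        print(outbuf.decode())  # the [client.crash....] keyring section
    finally:
        cluster.shutdown()
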
Nov 23 04:51:20 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:51:20 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:51:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:51:20 localhost podman[297484]: 2025-11-23 09:51:20.906923037 +0000 UTC m=+0.090319904 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 04:51:20 localhost podman[297484]: 2025-11-23 09:51:20.947017437 +0000 UTC m=+0.130414374 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git) Nov 23 04:51:20 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:51:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:21 localhost ceph-mon[293353]: Reconfiguring crash.np0005532583 (monmap changed)... Nov 23 04:51:21 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain Nov 23 04:51:21 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:21 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:21 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:51:21 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:51:21 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:51:21 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:51:21 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:51:21 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
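
The "Started /usr/bin/podman healthcheck run <id>" / "container health_status ... health_status=healthy" / "exec_died" sequence above is podman's timer-driven healthcheck: it execs the configured test (here '/openstack/healthcheck openstack-netwo', bind-mounted from /var/lib/openstack/healthchecks) inside the container and records the result. A sketch of triggering and reading that status from Python; the container name comes from the log, the rest is plain podman CLI usage.

    # Sketch: run a container healthcheck on demand and read back the recorded
    # status with the podman CLI (`podman healthcheck run` exits 0 when healthy).
    import subprocess

    NAME = "openstack_network_exporter"  # container name from the log above

    # Trigger one healthcheck run, much like the transient systemd unit does.
    result = subprocess.run(["podman", "healthcheck", "run", NAME])
    print("healthy" if result.returncode == 0 else "unhealthy")

    # The recorded state is also visible in the container inspect data
    # (the field may appear as .State.Healthcheck on older podman releases).
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", NAME],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(status)
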
Nov 23 04:51:21 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:51:21 localhost podman[297541]: Nov 23 04:51:21 localhost podman[297541]: 2025-11-23 09:51:21.356020923 +0000 UTC m=+0.076673972 container create 11ee82ecbe642972b098133d76a663c9a659398a57fc06bc2124eb558b18ad94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hermann, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-type=git, release=553, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, ceph=True, architecture=x86_64, build-date=2025-09-24T08:57:55, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, distribution-scope=public) Nov 23 04:51:21 localhost systemd[1]: Started libpod-conmon-11ee82ecbe642972b098133d76a663c9a659398a57fc06bc2124eb558b18ad94.scope. Nov 23 04:51:21 localhost systemd[1]: Started libcrun container. Nov 23 04:51:21 localhost podman[297541]: 2025-11-23 09:51:21.324335334 +0000 UTC m=+0.044988433 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:21 localhost podman[297541]: 2025-11-23 09:51:21.424798249 +0000 UTC m=+0.145451298 container init 11ee82ecbe642972b098133d76a663c9a659398a57fc06bc2124eb558b18ad94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hermann, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.buildah.version=1.33.12, ceph=True) Nov 23 04:51:21 localhost podman[297541]: 2025-11-23 09:51:21.433639294 +0000 UTC m=+0.154292333 container start 11ee82ecbe642972b098133d76a663c9a659398a57fc06bc2124eb558b18ad94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hermann, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main) Nov 23 04:51:21 localhost podman[297541]: 2025-11-23 09:51:21.433903052 +0000 UTC m=+0.154556111 container attach 11ee82ecbe642972b098133d76a663c9a659398a57fc06bc2124eb558b18ad94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hermann, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, release=553, name=rhceph) Nov 23 04:51:21 localhost xenodochial_hermann[297556]: 167 167 Nov 23 04:51:21 localhost systemd[1]: libpod-11ee82ecbe642972b098133d76a663c9a659398a57fc06bc2124eb558b18ad94.scope: Deactivated successfully. 
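
The rapid create → init → start → attach → died → remove sequences for the rhceph-7-rhel9 image (xenodochial_hermann and the later helpers) are short-lived containers that cephadm launches while reconfiguring each daemon, and the "167 167" they print is consistent with a probe of the ceph uid/gid inside the image before config and keyring files are written with matching ownership. The log does not show the exact command, so the probe below, assuming `stat -c '%u %g' /var/lib/ceph`, is purely illustrative.

    # Sketch: a uid/gid probe run as a throwaway container, similar in shape to
    # the create/start/died/remove sequences above.  The probed path and the
    # stat invocation are assumptions; only the image reference comes from the log.
    import subprocess

    IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    uid, gid = out.split()   # "167 167" on this image, matching the log output
    print(uid, gid)
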
Nov 23 04:51:21 localhost podman[297541]: 2025-11-23 09:51:21.437266515 +0000 UTC m=+0.157919594 container died 11ee82ecbe642972b098133d76a663c9a659398a57fc06bc2124eb558b18ad94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hermann, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, vcs-type=git, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_BRANCH=main) Nov 23 04:51:21 localhost podman[297561]: 2025-11-23 09:51:21.525954388 +0000 UTC m=+0.076886939 container remove 11ee82ecbe642972b098133d76a663c9a659398a57fc06bc2124eb558b18ad94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hermann, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, version=7, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553) Nov 23 04:51:21 localhost systemd[1]: libpod-conmon-11ee82ecbe642972b098133d76a663c9a659398a57fc06bc2124eb558b18ad94.scope: Deactivated successfully. Nov 23 04:51:21 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Nov 23 04:51:21 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... Nov 23 04:51:21 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:51:21 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:51:21 localhost systemd[1]: var-lib-containers-storage-overlay-3abae1b6798ae308a8aa34480156f6188f594d76ff441a4f0f02a7f586fdcb08-merged.mount: Deactivated successfully. 
Nov 23 04:51:22 localhost ceph-mon[293353]: mon.np0005532584@3(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:22 localhost ceph-mon[293353]: Reconfiguring crash.np0005532584 (monmap changed)... Nov 23 04:51:22 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:51:22 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:22 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:22 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:51:22 localhost podman[297630]: Nov 23 04:51:22 localhost podman[297630]: 2025-11-23 09:51:22.219367248 +0000 UTC m=+0.079910011 container create 8624a9eb4052cb27acf465ba6a053e245ef831e36c387456cadc6523fe1f492b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_hodgkin, description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., name=rhceph, version=7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, release=553, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64) Nov 23 04:51:22 localhost systemd[1]: Started libpod-conmon-8624a9eb4052cb27acf465ba6a053e245ef831e36c387456cadc6523fe1f492b.scope. Nov 23 04:51:22 localhost systemd[1]: Started libcrun container. 
Nov 23 04:51:22 localhost podman[297630]: 2025-11-23 09:51:22.280073976 +0000 UTC m=+0.140616739 container init 8624a9eb4052cb27acf465ba6a053e245ef831e36c387456cadc6523fe1f492b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_hodgkin, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, RELEASE=main, release=553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, name=rhceph, description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:51:22 localhost podman[297630]: 2025-11-23 09:51:22.188415381 +0000 UTC m=+0.048958184 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:22 localhost podman[297630]: 2025-11-23 09:51:22.292864671 +0000 UTC m=+0.153407434 container start 8624a9eb4052cb27acf465ba6a053e245ef831e36c387456cadc6523fe1f492b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_hodgkin, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph) Nov 23 04:51:22 localhost beautiful_hodgkin[297645]: 167 167 Nov 23 04:51:22 localhost systemd[1]: libpod-8624a9eb4052cb27acf465ba6a053e245ef831e36c387456cadc6523fe1f492b.scope: Deactivated successfully. 
Nov 23 04:51:22 localhost podman[297630]: 2025-11-23 09:51:22.293200251 +0000 UTC m=+0.153743054 container attach 8624a9eb4052cb27acf465ba6a053e245ef831e36c387456cadc6523fe1f492b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_hodgkin, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, release=553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:51:22 localhost podman[297630]: 2025-11-23 09:51:22.29900746 +0000 UTC m=+0.159550223 container died 8624a9eb4052cb27acf465ba6a053e245ef831e36c387456cadc6523fe1f492b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_hodgkin, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, release=553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.33.12, name=rhceph, io.openshift.expose-services=) Nov 23 04:51:22 localhost podman[297650]: 2025-11-23 09:51:22.390938403 +0000 UTC m=+0.081428709 container remove 8624a9eb4052cb27acf465ba6a053e245ef831e36c387456cadc6523fe1f492b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_hodgkin, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, RELEASE=main, distribution-scope=public, release=553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:51:22 localhost systemd[1]: libpod-conmon-8624a9eb4052cb27acf465ba6a053e245ef831e36c387456cadc6523fe1f492b.scope: Deactivated successfully. Nov 23 04:51:22 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Nov 23 04:51:22 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... Nov 23 04:51:22 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:51:22 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:51:22 localhost systemd[1]: tmp-crun.Dj074r.mount: Deactivated successfully. Nov 23 04:51:22 localhost systemd[1]: var-lib-containers-storage-overlay-74d8bc26f6a841a4d2f014e3ce9453ba3c408f7bfbfc266445382e11decb4837-merged.mount: Deactivated successfully. Nov 23 04:51:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:23 localhost ceph-mon[293353]: Reconfiguring osd.2 (monmap changed)... Nov 23 04:51:23 localhost ceph-mon[293353]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:51:23 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:23 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:23 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:51:23 localhost ceph-mon[293353]: Reconfiguring osd.5 (monmap changed)... 
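
Each "Reconfiguring <daemon> (monmap changed)..." pass rewrites the per-host minimal config under /var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf so daemons stop referencing the removed monitor. The sketch below renders what such a minimal config might look like, assuming the fsid from the paths above and the four surviving mon hostnames; the file cephadm actually writes is generated from the live monmap and includes monitor addresses and options not visible in this log.

    # Sketch: render a minimal ceph.conf after the monmap change.  The fsid and
    # mon hostnames come from the log; the exact contents cephadm writes (v2/v1
    # addresses, extra options) are not shown there, so this is illustrative only.
    FSID = "46550e70-79cb-5f55-bf6d-1204b97e083b"
    MON_HOSTS = ["np0005532583", "np0005532586", "np0005532585", "np0005532584"]

    MINIMAL_CONF = """# minimal ceph.conf for {fsid}
    [global]
    \tfsid = {fsid}
    \tmon_host = {mon_host}
    """

    def render_minimal_conf(fsid, mon_hosts):
        return MINIMAL_CONF.format(fsid=fsid, mon_host=" ".join(mon_hosts))

    print(render_minimal_conf(FSID, MON_HOSTS))
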
Nov 23 04:51:23 localhost ceph-mon[293353]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:51:23 localhost podman[297727]: Nov 23 04:51:23 localhost podman[297727]: 2025-11-23 09:51:23.195962126 +0000 UTC m=+0.079718187 container create 0fc46e3806ca416fcb0e8740dd90f0e7293fe0b1d98fa6edf936af1dab1f7880 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_almeida, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, release=553, distribution-scope=public, GIT_BRANCH=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:51:23 localhost systemd[1]: Started libpod-conmon-0fc46e3806ca416fcb0e8740dd90f0e7293fe0b1d98fa6edf936af1dab1f7880.scope. Nov 23 04:51:23 localhost systemd[1]: Started libcrun container. Nov 23 04:51:23 localhost podman[297727]: 2025-11-23 09:51:23.163917944 +0000 UTC m=+0.047674025 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:23 localhost podman[297727]: 2025-11-23 09:51:23.273188333 +0000 UTC m=+0.156944394 container init 0fc46e3806ca416fcb0e8740dd90f0e7293fe0b1d98fa6edf936af1dab1f7880 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_almeida, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, release=553, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:51:23 localhost podman[297727]: 2025-11-23 09:51:23.281158629 +0000 UTC m=+0.164914690 container start 0fc46e3806ca416fcb0e8740dd90f0e7293fe0b1d98fa6edf936af1dab1f7880 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_almeida, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, vcs-type=git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, ceph=True, release=553, distribution-scope=public, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, architecture=x86_64) Nov 23 04:51:23 localhost determined_almeida[297741]: 167 167 Nov 23 04:51:23 localhost podman[297727]: 2025-11-23 09:51:23.283053698 +0000 UTC m=+0.166809789 container attach 0fc46e3806ca416fcb0e8740dd90f0e7293fe0b1d98fa6edf936af1dab1f7880 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_almeida, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, build-date=2025-09-24T08:57:55, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:51:23 localhost systemd[1]: libpod-0fc46e3806ca416fcb0e8740dd90f0e7293fe0b1d98fa6edf936af1dab1f7880.scope: Deactivated successfully. 
Nov 23 04:51:23 localhost podman[297727]: 2025-11-23 09:51:23.28535742 +0000 UTC m=+0.169113471 container died 0fc46e3806ca416fcb0e8740dd90f0e7293fe0b1d98fa6edf936af1dab1f7880 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_almeida, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.openshift.tags=rhceph ceph) Nov 23 04:51:23 localhost podman[297746]: 2025-11-23 09:51:23.377817269 +0000 UTC m=+0.081293645 container remove 0fc46e3806ca416fcb0e8740dd90f0e7293fe0b1d98fa6edf936af1dab1f7880 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_almeida, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , release=553, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:51:23 localhost systemd[1]: libpod-conmon-0fc46e3806ca416fcb0e8740dd90f0e7293fe0b1d98fa6edf936af1dab1f7880.scope: Deactivated successfully. Nov 23 04:51:23 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... Nov 23 04:51:23 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... Nov 23 04:51:23 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:51:23 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:51:23 localhost systemd[1]: var-lib-containers-storage-overlay-3734b8e34daceed22bedd40eab05716550b00b01ece5d479e98ef662604cca08-merged.mount: Deactivated successfully. 
Nov 23 04:51:24 localhost podman[297822]: Nov 23 04:51:24 localhost podman[297822]: 2025-11-23 09:51:24.270192001 +0000 UTC m=+0.073332838 container create 0f2d684d2bf403fd0b84a198de064544856d752336a6fc13463646b4154c9018 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_thompson, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-09-24T08:57:55, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=553, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7) Nov 23 04:51:24 localhost systemd[1]: Started libpod-conmon-0f2d684d2bf403fd0b84a198de064544856d752336a6fc13463646b4154c9018.scope. Nov 23 04:51:24 localhost systemd[1]: Started libcrun container. Nov 23 04:51:24 localhost podman[297822]: 2025-11-23 09:51:24.337129081 +0000 UTC m=+0.140269918 container init 0f2d684d2bf403fd0b84a198de064544856d752336a6fc13463646b4154c9018 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_thompson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, architecture=x86_64, version=7, io.openshift.tags=rhceph ceph) Nov 23 04:51:24 localhost podman[297822]: 2025-11-23 09:51:24.240127041 +0000 UTC m=+0.043267918 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:24 localhost podman[297822]: 2025-11-23 09:51:24.346889732 +0000 UTC m=+0.150030569 container start 0f2d684d2bf403fd0b84a198de064544856d752336a6fc13463646b4154c9018 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_thompson, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., 
description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:51:24 localhost podman[297822]: 2025-11-23 09:51:24.347158011 +0000 UTC m=+0.150298888 container attach 0f2d684d2bf403fd0b84a198de064544856d752336a6fc13463646b4154c9018 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_thompson, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, CEPH_POINT_RELEASE=) Nov 23 04:51:24 localhost eager_thompson[297837]: 167 167 Nov 23 04:51:24 localhost systemd[1]: libpod-0f2d684d2bf403fd0b84a198de064544856d752336a6fc13463646b4154c9018.scope: Deactivated successfully. 
Nov 23 04:51:24 localhost podman[297822]: 2025-11-23 09:51:24.349974458 +0000 UTC m=+0.153115365 container died 0f2d684d2bf403fd0b84a198de064544856d752336a6fc13463646b4154c9018 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_thompson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, ceph=True, io.buildah.version=1.33.12, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:51:24 localhost podman[297842]: 2025-11-23 09:51:24.444510851 +0000 UTC m=+0.081780310 container remove 0f2d684d2bf403fd0b84a198de064544856d752336a6fc13463646b4154c9018 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_thompson, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, release=553, version=7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph) Nov 23 04:51:24 localhost systemd[1]: libpod-conmon-0f2d684d2bf403fd0b84a198de064544856d752336a6fc13463646b4154c9018.scope: Deactivated successfully. Nov 23 04:51:24 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... Nov 23 04:51:24 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... 
Nov 23 04:51:24 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:51:24 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:51:24 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:24 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:24 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:51:24 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... Nov 23 04:51:24 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:51:24 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:24 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:24 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:51:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:51:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:51:24 localhost podman[297885]: 2025-11-23 09:51:24.73423398 +0000 UTC m=+0.078282692 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:51:24 localhost podman[297885]: 2025-11-23 09:51:24.743918309 +0000 UTC m=+0.087967101 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:51:24 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:51:24 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.34390 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005532582.localdomain", "target": ["mon-mgr", ""]}]: dispatch Nov 23 04:51:24 localhost ceph-mgr[286671]: [cephadm INFO root] Added label _no_schedule to host np0005532582.localdomain Nov 23 04:51:24 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005532582.localdomain Nov 23 04:51:24 localhost ceph-mgr[286671]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005532582.localdomain Nov 23 04:51:24 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005532582.localdomain Nov 23 04:51:24 localhost podman[297881]: 2025-11-23 09:51:24.801281583 +0000 UTC m=+0.144919962 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2) Nov 23 04:51:24 localhost podman[297881]: 2025-11-23 09:51:24.811360654 +0000 UTC m=+0.154999063 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:51:24 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:51:24 localhost systemd[1]: var-lib-containers-storage-overlay-1b76dded510fe8b45a2bf17aa973dc5b817896edc7dc40dc80f4e8675f4ad4d1-merged.mount: Deactivated successfully. Nov 23 04:51:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:25 localhost podman[297955]: Nov 23 04:51:25 localhost podman[297955]: 2025-11-23 09:51:25.14176561 +0000 UTC m=+0.073600386 container create 2da7228f9636c988ae498f7141889a0e3247f2ff8e94a97219e6b259dba2d2cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_dewdney, ceph=True, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7, build-date=2025-09-24T08:57:55, vcs-type=git, release=553) Nov 23 04:51:25 localhost systemd[1]: Started libpod-conmon-2da7228f9636c988ae498f7141889a0e3247f2ff8e94a97219e6b259dba2d2cf.scope. Nov 23 04:51:25 localhost systemd[1]: Started libcrun container. 
Nov 23 04:51:25 localhost podman[297955]: 2025-11-23 09:51:25.203942123 +0000 UTC m=+0.135776909 container init 2da7228f9636c988ae498f7141889a0e3247f2ff8e94a97219e6b259dba2d2cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_dewdney, distribution-scope=public, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.33.12) Nov 23 04:51:25 localhost podman[297955]: 2025-11-23 09:51:25.111076541 +0000 UTC m=+0.042911327 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:25 localhost podman[297955]: 2025-11-23 09:51:25.21420186 +0000 UTC m=+0.146036636 container start 2da7228f9636c988ae498f7141889a0e3247f2ff8e94a97219e6b259dba2d2cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_dewdney, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, release=553) Nov 23 04:51:25 localhost podman[297955]: 2025-11-23 09:51:25.216718158 +0000 UTC m=+0.148552944 container attach 2da7228f9636c988ae498f7141889a0e3247f2ff8e94a97219e6b259dba2d2cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_dewdney, release=553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vcs-type=git, RELEASE=main, distribution-scope=public, version=7, GIT_BRANCH=main, ceph=True, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=) Nov 23 04:51:25 localhost nervous_dewdney[297970]: 167 167 Nov 23 04:51:25 localhost systemd[1]: libpod-2da7228f9636c988ae498f7141889a0e3247f2ff8e94a97219e6b259dba2d2cf.scope: Deactivated successfully. Nov 23 04:51:25 localhost podman[297955]: 2025-11-23 09:51:25.220229127 +0000 UTC m=+0.152063913 container died 2da7228f9636c988ae498f7141889a0e3247f2ff8e94a97219e6b259dba2d2cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_dewdney, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, RELEASE=main, release=553, GIT_BRANCH=main) Nov 23 04:51:25 localhost podman[297975]: 2025-11-23 09:51:25.310830399 +0000 UTC m=+0.082141762 container remove 2da7228f9636c988ae498f7141889a0e3247f2ff8e94a97219e6b259dba2d2cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_dewdney, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:51:25 localhost systemd[1]: libpod-conmon-2da7228f9636c988ae498f7141889a0e3247f2ff8e94a97219e6b259dba2d2cf.scope: Deactivated successfully. Nov 23 04:51:25 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005532584 (monmap changed)... 
Nov 23 04:51:25 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005532584 (monmap changed)... Nov 23 04:51:25 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:51:25 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:51:25 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... Nov 23 04:51:25 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:51:25 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:25 localhost ceph-mon[293353]: Added label _no_schedule to host np0005532582.localdomain Nov 23 04:51:25 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:25 localhost ceph-mon[293353]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005532582.localdomain Nov 23 04:51:25 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:25 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:25 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:51:25 localhost systemd[1]: var-lib-containers-storage-overlay-8468056de33ed9a8926c1bdc628ea626b8c06bc3c9cbfda5acbaea1d28fcfe58-merged.mount: Deactivated successfully. Nov 23 04:51:26 localhost podman[298043]: Nov 23 04:51:26 localhost podman[298043]: 2025-11-23 09:51:26.024805824 +0000 UTC m=+0.064303999 container create 64e38fd9e91fd676d0c09ee74972602e2f0167af41735d30926a210994ad8256 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_moore, distribution-scope=public, RELEASE=main, io.buildah.version=1.33.12, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=553, name=rhceph, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:51:26 localhost systemd[1]: Started libpod-conmon-64e38fd9e91fd676d0c09ee74972602e2f0167af41735d30926a210994ad8256.scope. Nov 23 04:51:26 localhost systemd[1]: Started libcrun container. 
Nov 23 04:51:26 localhost podman[298043]: 2025-11-23 09:51:26.090438884 +0000 UTC m=+0.129937119 container init 64e38fd9e91fd676d0c09ee74972602e2f0167af41735d30926a210994ad8256 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_moore, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, release=553, version=7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.33.12) Nov 23 04:51:26 localhost podman[298043]: 2025-11-23 09:51:26.09968925 +0000 UTC m=+0.139187475 container start 64e38fd9e91fd676d0c09ee74972602e2f0167af41735d30926a210994ad8256 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_moore, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , release=553, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph) Nov 23 04:51:26 localhost podman[298043]: 2025-11-23 09:51:26.10000561 +0000 UTC m=+0.139503825 container attach 64e38fd9e91fd676d0c09ee74972602e2f0167af41735d30926a210994ad8256 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_moore, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_BRANCH=main, 
description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, release=553, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container) Nov 23 04:51:26 localhost gifted_moore[298058]: 167 167 Nov 23 04:51:26 localhost systemd[1]: libpod-64e38fd9e91fd676d0c09ee74972602e2f0167af41735d30926a210994ad8256.scope: Deactivated successfully. Nov 23 04:51:26 localhost podman[298043]: 2025-11-23 09:51:26.102719314 +0000 UTC m=+0.142217519 container died 64e38fd9e91fd676d0c09ee74972602e2f0167af41735d30926a210994ad8256 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_moore, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, ceph=True, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container) Nov 23 04:51:26 localhost podman[298043]: 2025-11-23 09:51:26.005472918 +0000 UTC m=+0.044971153 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:26 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.34394 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005532582.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Nov 23 04:51:26 localhost podman[298063]: 2025-11-23 09:51:26.193851412 +0000 UTC m=+0.083224134 container remove 64e38fd9e91fd676d0c09ee74972602e2f0167af41735d30926a210994ad8256 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_moore, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, ceph=True, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:51:26 localhost systemd[1]: libpod-conmon-64e38fd9e91fd676d0c09ee74972602e2f0167af41735d30926a210994ad8256.scope: Deactivated successfully. 
Nov 23 04:51:26 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005532585 (monmap changed)... Nov 23 04:51:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005532585 (monmap changed)... Nov 23 04:51:26 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:51:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:51:26 localhost systemd[1]: var-lib-containers-storage-overlay-ae0e4007d3980f23549374774d0c12af929db26dceee794c9b826dd3d82f3197-merged.mount: Deactivated successfully. Nov 23 04:51:26 localhost ceph-mon[293353]: Reconfiguring mon.np0005532584 (monmap changed)... Nov 23 04:51:26 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:51:26 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:26 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:26 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:51:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:27 localhost ceph-mon[293353]: mon.np0005532584@3(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:27 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Nov 23 04:51:27 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Nov 23 04:51:27 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:51:27 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:51:27 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.26928 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005532582.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch Nov 23 04:51:27 localhost ceph-mgr[286671]: [cephadm INFO root] Removed host np0005532582.localdomain Nov 23 04:51:27 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removed host np0005532582.localdomain Nov 23 04:51:28 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Nov 23 04:51:28 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Nov 23 04:51:28 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:51:28 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:51:28 localhost ceph-mon[293353]: Reconfiguring crash.np0005532585 (monmap changed)... 
Nov 23 04:51:28 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:51:28 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:28 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:28 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 23 04:51:28 localhost ceph-mon[293353]: Reconfiguring osd.0 (monmap changed)... Nov 23 04:51:28 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:28 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain"} : dispatch Nov 23 04:51:28 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain"}]': finished Nov 23 04:51:28 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:28 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:28 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 23 04:51:29 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... Nov 23 04:51:29 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... Nov 23 04:51:29 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:51:29 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:51:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:29 localhost ceph-mon[293353]: Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:51:29 localhost ceph-mon[293353]: Removed host np0005532582.localdomain Nov 23 04:51:29 localhost ceph-mon[293353]: Reconfiguring osd.3 (monmap changed)... Nov 23 04:51:29 localhost ceph-mon[293353]: Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:51:29 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:29 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:29 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:51:29 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... Nov 23 04:51:29 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... 
Nov 23 04:51:29 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:51:29 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:51:30 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... Nov 23 04:51:30 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:51:30 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:30 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:30 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:51:30 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005532585 (monmap changed)... Nov 23 04:51:30 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005532585 (monmap changed)... Nov 23 04:51:30 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:51:30 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:51:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:31 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... Nov 23 04:51:31 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:51:31 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:31 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:31 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:51:31 localhost ceph-mon[293353]: Reconfiguring mon.np0005532585 (monmap changed)... Nov 23 04:51:31 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:51:31 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005532586 (monmap changed)... Nov 23 04:51:31 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005532586 (monmap changed)... Nov 23 04:51:31 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain Nov 23 04:51:31 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain Nov 23 04:51:32 localhost ceph-mon[293353]: mon.np0005532584@3(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:32 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Nov 23 04:51:32 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... 
Nov 23 04:51:32 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:51:32 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0. Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.607757) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492607827, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2584, "num_deletes": 255, "total_data_size": 8106578, "memory_usage": 8660592, "flush_reason": "Manual Compaction"} Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492631509, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 4896680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11779, "largest_seqno": 14358, "table_properties": {"data_size": 4885962, "index_size": 6583, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27681, "raw_average_key_size": 22, "raw_value_size": 4862418, "raw_average_value_size": 3966, "num_data_blocks": 286, "num_entries": 1226, "num_filter_entries": 1226, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891440, "oldest_key_time": 1763891440, "file_creation_time": 1763891492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 23836 microseconds, and 9965 cpu microseconds. Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.631593) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 4896680 bytes OK Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.631622) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.633976) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.634019) EVENT_LOG_v1 {"time_micros": 1763891492634013, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.634039) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 8094031, prev total WAL file size 8098873, number of live WAL files 2. Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.638103) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end) Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(4781KB)], [18(13MB)] Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492638173, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 19322638, "oldest_snapshot_seqno": -1} Nov 23 04:51:32 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:32 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:32 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:51:32 localhost ceph-mon[293353]: Reconfiguring crash.np0005532586 (monmap changed)... 
Nov 23 04:51:32 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain Nov 23 04:51:32 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:32 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:32 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10612 keys, 16121563 bytes, temperature: kUnknown Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492713099, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 16121563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16058824, "index_size": 35118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 284598, "raw_average_key_size": 26, "raw_value_size": 15875189, "raw_average_value_size": 1495, "num_data_blocks": 1346, "num_entries": 10612, "num_filter_entries": 10612, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.713448) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 16121563 bytes Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.715594) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 257.6 rd, 214.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.7, 13.8 +0.0 blob) out(15.4 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 11166, records dropped: 554 output_compression: NoCompression Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.715625) EVENT_LOG_v1 {"time_micros": 1763891492715611, "job": 8, "event": "compaction_finished", "compaction_time_micros": 75022, "compaction_time_cpu_micros": 42051, "output_level": 6, "num_output_files": 1, "total_output_size": 16121563, "num_input_records": 11166, "num_output_records": 10612, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492716426, "job": 8, "event": "table_file_deletion", "file_number": 20} Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492718406, "job": 8, "event": "table_file_deletion", "file_number": 18} Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.637925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.718514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.718520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.718523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.718526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:51:32 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:51:32.718529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:51:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:33 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Nov 23 04:51:33 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... 
Nov 23 04:51:33 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:51:33 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:51:33 localhost ceph-mon[293353]: Reconfiguring osd.1 (monmap changed)... Nov 23 04:51:33 localhost ceph-mon[293353]: Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:51:33 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:33 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:33 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 23 04:51:34 localhost ceph-mon[293353]: Reconfiguring osd.4 (monmap changed)... Nov 23 04:51:34 localhost ceph-mon[293353]: Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:51:34 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)... Nov 23 04:51:34 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)... Nov 23 04:51:34 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain Nov 23 04:51:34 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain Nov 23 04:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:51:34 localhost systemd[1]: tmp-crun.EQgyFL.mount: Deactivated successfully. 
Nov 23 04:51:34 localhost podman[298080]: 2025-11-23 09:51:34.909555226 +0000 UTC m=+0.089692954 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true) Nov 23 04:51:34 localhost podman[298080]: 2025-11-23 09:51:34.915302844 +0000 UTC m=+0.095440562 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:51:34 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:51:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:35 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005532586.thmvqb (monmap changed)... Nov 23 04:51:35 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005532586.thmvqb (monmap changed)... Nov 23 04:51:35 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain Nov 23 04:51:35 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain Nov 23 04:51:35 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:35 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:35 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:51:35 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)... Nov 23 04:51:35 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain Nov 23 04:51:35 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:35 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:35 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:51:35 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.34402 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch Nov 23 04:51:35 localhost ceph-mgr[286671]: [cephadm INFO root] Saving service mon spec with placement label:mon Nov 23 04:51:35 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon Nov 23 04:51:36 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 538cce11-8021-4c50-8301-c332926841a1 (Updating node-proxy deployment (+4 -> 4)) Nov 23 04:51:36 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 538cce11-8021-4c50-8301-c332926841a1 (Updating node-proxy deployment (+4 -> 4)) Nov 23 04:51:36 localhost ceph-mgr[286671]: [progress INFO root] Completed event 538cce11-8021-4c50-8301-c332926841a1 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Nov 23 04:51:36 localhost openstack_network_exporter[241732]: ERROR 09:51:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:51:36 localhost 
openstack_network_exporter[241732]: ERROR 09:51:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:51:36 localhost openstack_network_exporter[241732]: ERROR 09:51:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:51:36 localhost openstack_network_exporter[241732]: ERROR 09:51:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:51:36 localhost openstack_network_exporter[241732]: Nov 23 04:51:36 localhost openstack_network_exporter[241732]: ERROR 09:51:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:51:36 localhost openstack_network_exporter[241732]: Nov 23 04:51:36 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)... Nov 23 04:51:36 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain Nov 23 04:51:36 localhost ceph-mon[293353]: Saving service mon spec with placement label:mon Nov 23 04:51:36 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:36 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:36 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:36 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:51:36 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:37 localhost ceph-mon[293353]: mon.np0005532584@3(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.26932 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005532585", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Nov 23 04:51:38 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.34418 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005532585"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Nov 23 04:51:38 localhost ceph-mgr[286671]: [cephadm INFO root] Remove daemons mon.np0005532585 Nov 23 04:51:38 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005532585 Nov 23 04:51:38 localhost ceph-mgr[286671]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005532585: new quorum should be ['np0005532583', 'np0005532586', 'np0005532584'] (from ['np0005532583', 'np0005532586', 'np0005532584']) Nov 23 04:51:38 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005532585: new quorum should be ['np0005532583', 'np0005532586', 'np0005532584'] (from ['np0005532583', 'np0005532586', 'np0005532584']) Nov 23 04:51:38 localhost ceph-mgr[286671]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005532585 from monmap... Nov 23 04:51:38 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removing monitor np0005532585 from monmap... 
Nov 23 04:51:38 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005532585 from np0005532585.localdomain -- ports [] Nov 23 04:51:38 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005532585 from np0005532585.localdomain -- ports [] Nov 23 04:51:38 localhost ceph-mon[293353]: mon.np0005532584@3(peon) e11 my rank is now 2 (was 3) Nov 23 04:51:38 localhost ceph-mgr[286671]: client.44194 ms_handle_reset on v2:172.18.0.103:3300/0 Nov 23 04:51:38 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:51:38 localhost ceph-mon[293353]: paxos.2).electionLogic(42) init, last seen epoch 42 Nov 23 04:51:38 localhost ceph-mon[293353]: mon.np0005532584@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:51:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:51:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:51:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:51:39 localhost systemd[1]: tmp-crun.YwVY4U.mount: Deactivated successfully. Nov 23 04:51:39 localhost podman[298118]: 2025-11-23 09:51:39.905851405 +0000 UTC m=+0.091171180 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3) Nov 23 04:51:39 localhost podman[298118]: 2025-11-23 09:51:39.948559326 +0000 UTC m=+0.133879111 container exec_died 
7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:51:39 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 04:51:40 localhost podman[298120]: 2025-11-23 09:51:39.999706157 +0000 UTC m=+0.178418288 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Nov 23 04:51:40 localhost podman[298119]: 2025-11-23 09:51:39.95291608 +0000 UTC m=+0.134077757 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:51:40 localhost podman[298119]: 2025-11-23 09:51:40.031603473 +0000 UTC m=+0.212765120 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 
'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:51:40 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:51:40 localhost podman[298120]: 2025-11-23 09:51:40.056536394 +0000 UTC m=+0.235248565 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:51:40 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:51:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:41 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:51:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:43 localhost ceph-mon[293353]: paxos.2).electionLogic(43) init, last seen epoch 43, mid-election, bumping Nov 23 04:51:43 localhost ceph-mon[293353]: mon.np0005532584@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:51:43 localhost ceph-mon[293353]: mon.np0005532584@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:51:43 localhost ceph-mon[293353]: mon.np0005532584@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:51:43 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:51:43 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:43 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:43 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:43 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:43 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:43 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:43 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:43 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:44 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:44 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:44 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:44 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:44 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:44 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:44 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:44 localhost ceph-mgr[286671]: log_channel(cephadm) log 
[INF] : Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:44 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:51:44 localhost ceph-mon[293353]: mon.np0005532584 calling monitor election Nov 23 04:51:44 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:51:44 localhost ceph-mon[293353]: Health check failed: 1/3 mons down, quorum np0005532583,np0005532586 (MON_DOWN) Nov 23 04:51:44 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:51:44 localhost ceph-mon[293353]: mon.np0005532583 calling monitor election Nov 23 04:51:44 localhost ceph-mon[293353]: mon.np0005532583 is new leader, mons np0005532583,np0005532586,np0005532584 in quorum (ranks 0,1,2) Nov 23 04:51:44 localhost ceph-mon[293353]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005532583,np0005532586) Nov 23 04:51:44 localhost ceph-mon[293353]: Cluster is now healthy Nov 23 04:51:44 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:51:44 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:44 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:44 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:44 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:51:44 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:44 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:44 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:44 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:51:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:45 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev e0c44671-17a1-4e81-9223-eaa4c9d25bb7 (Updating node-proxy deployment (+4 -> 4)) Nov 23 04:51:45 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev e0c44671-17a1-4e81-9223-eaa4c9d25bb7 (Updating node-proxy deployment (+4 -> 4)) Nov 23 04:51:45 localhost ceph-mgr[286671]: [progress INFO root] Completed event e0c44671-17a1-4e81-9223-eaa4c9d25bb7 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Nov 23 04:51:45 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005532583.orhywt (monmap changed)... Nov 23 04:51:45 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005532583.orhywt (monmap changed)... 
Nov 23 04:51:45 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain Nov 23 04:51:45 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain Nov 23 04:51:46 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:46 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:46 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:46 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:51:46 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:46 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:46 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:46 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:46 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:46 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:46 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:46 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:46 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:46 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:51:46 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005532583 (monmap changed)... Nov 23 04:51:46 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005532583 (monmap changed)... Nov 23 04:51:46 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain Nov 23 04:51:46 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain Nov 23 04:51:47 localhost podman[239764]: time="2025-11-23T09:51:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:51:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:47 localhost podman[239764]: @ - - [23/Nov/2025:09:51:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:51:47 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)... 
Nov 23 04:51:47 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain Nov 23 04:51:47 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:47 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:47 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:51:47 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:47 localhost podman[239764]: @ - - [23/Nov/2025:09:51:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18225 "" "Go-http-client/1.1" Nov 23 04:51:47 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005532584 (monmap changed)... Nov 23 04:51:47 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005532584 (monmap changed)... Nov 23 04:51:47 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:51:47 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:51:47 localhost podman[298577]: Nov 23 04:51:47 localhost podman[298577]: 2025-11-23 09:51:47.861239611 +0000 UTC m=+0.075639639 container create b5b1fae677a45337b86d6ea803400ea09fb11c81bbbc6bdb6a4fb045b9699fed (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_banzai, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:51:47 localhost systemd[1]: Started libpod-conmon-b5b1fae677a45337b86d6ea803400ea09fb11c81bbbc6bdb6a4fb045b9699fed.scope. Nov 23 04:51:47 localhost systemd[1]: Started libcrun container. 
Nov 23 04:51:47 localhost podman[298577]: 2025-11-23 09:51:47.829546682 +0000 UTC m=+0.043946740 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:47 localhost podman[298577]: 2025-11-23 09:51:47.938040607 +0000 UTC m=+0.152440635 container init b5b1fae677a45337b86d6ea803400ea09fb11c81bbbc6bdb6a4fb045b9699fed (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_banzai, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, version=7, ceph=True, GIT_BRANCH=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:51:47 localhost podman[298577]: 2025-11-23 09:51:47.947402236 +0000 UTC m=+0.161802264 container start b5b1fae677a45337b86d6ea803400ea09fb11c81bbbc6bdb6a4fb045b9699fed (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_banzai, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, release=553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Nov 23 04:51:47 localhost podman[298577]: 2025-11-23 09:51:47.949296105 +0000 UTC m=+0.163696133 container attach b5b1fae677a45337b86d6ea803400ea09fb11c81bbbc6bdb6a4fb045b9699fed (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_banzai, distribution-scope=public, GIT_CLEAN=True, version=7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True) Nov 23 04:51:47 localhost jovial_banzai[298592]: 167 167 Nov 23 04:51:47 localhost systemd[1]: libpod-b5b1fae677a45337b86d6ea803400ea09fb11c81bbbc6bdb6a4fb045b9699fed.scope: Deactivated successfully. Nov 23 04:51:47 localhost podman[298577]: 2025-11-23 09:51:47.954609129 +0000 UTC m=+0.169009157 container died b5b1fae677a45337b86d6ea803400ea09fb11c81bbbc6bdb6a4fb045b9699fed (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_banzai, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, RELEASE=main, maintainer=Guillaume Abrioux , GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:51:48 localhost systemd[1]: tmp-crun.4LMJgA.mount: Deactivated successfully. Nov 23 04:51:48 localhost podman[298597]: 2025-11-23 09:51:48.056103477 +0000 UTC m=+0.091608294 container remove b5b1fae677a45337b86d6ea803400ea09fb11c81bbbc6bdb6a4fb045b9699fed (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_banzai, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, release=553, architecture=x86_64, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:51:48 localhost systemd[1]: libpod-conmon-b5b1fae677a45337b86d6ea803400ea09fb11c81bbbc6bdb6a4fb045b9699fed.scope: Deactivated successfully. Nov 23 04:51:48 localhost ceph-mon[293353]: Reconfiguring crash.np0005532583 (monmap changed)... 
Nov 23 04:51:48 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain Nov 23 04:51:48 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:48 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:48 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:51:48 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Nov 23 04:51:48 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... Nov 23 04:51:48 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:51:48 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:51:48 localhost podman[298668]: Nov 23 04:51:48 localhost podman[298668]: 2025-11-23 09:51:48.734806373 +0000 UTC m=+0.071088999 container create 5cf819a3be88e25c243c2bd14534435e74b51d03f3318ccf34802c8a462b2cd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_heyrovsky, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-09-24T08:57:55, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., version=7, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:51:48 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:51:48 localhost systemd[1]: Started libpod-conmon-5cf819a3be88e25c243c2bd14534435e74b51d03f3318ccf34802c8a462b2cd9.scope. Nov 23 04:51:48 localhost systemd[1]: Started libcrun container. 
Nov 23 04:51:48 localhost podman[298668]: 2025-11-23 09:51:48.80490904 +0000 UTC m=+0.141191666 container init 5cf819a3be88e25c243c2bd14534435e74b51d03f3318ccf34802c8a462b2cd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_heyrovsky, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, version=7, release=553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64) Nov 23 04:51:48 localhost podman[298668]: 2025-11-23 09:51:48.706450466 +0000 UTC m=+0.042733122 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:48 localhost podman[298668]: 2025-11-23 09:51:48.813763504 +0000 UTC m=+0.150046120 container start 5cf819a3be88e25c243c2bd14534435e74b51d03f3318ccf34802c8a462b2cd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_heyrovsky, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, version=7, RELEASE=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 04:51:48 localhost podman[298668]: 2025-11-23 09:51:48.814045263 +0000 UTC m=+0.150327909 container attach 5cf819a3be88e25c243c2bd14534435e74b51d03f3318ccf34802c8a462b2cd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_heyrovsky, name=rhceph, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:51:48 localhost stupefied_heyrovsky[298683]: 167 167 Nov 23 04:51:48 localhost systemd[1]: libpod-5cf819a3be88e25c243c2bd14534435e74b51d03f3318ccf34802c8a462b2cd9.scope: Deactivated successfully. Nov 23 04:51:48 localhost podman[298668]: 2025-11-23 09:51:48.816688664 +0000 UTC m=+0.152971320 container died 5cf819a3be88e25c243c2bd14534435e74b51d03f3318ccf34802c8a462b2cd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_heyrovsky, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, release=553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:51:48 localhost systemd[1]: var-lib-containers-storage-overlay-211cbd668c8da1d9a7dad5ada66641392a1c2af9d440ab0f3a16e8555a9711ed-merged.mount: Deactivated successfully. Nov 23 04:51:48 localhost systemd[1]: var-lib-containers-storage-overlay-923f023b3317e779aeaa045aac563554b136f34a45bdbfbb7e75b7858ba3ad03-merged.mount: Deactivated successfully. 
Nov 23 04:51:48 localhost podman[298688]: 2025-11-23 09:51:48.94493167 +0000 UTC m=+0.116383409 container remove 5cf819a3be88e25c243c2bd14534435e74b51d03f3318ccf34802c8a462b2cd9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_heyrovsky, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, vcs-type=git, version=7, distribution-scope=public, GIT_CLEAN=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Nov 23 04:51:48 localhost systemd[1]: libpod-conmon-5cf819a3be88e25c243c2bd14534435e74b51d03f3318ccf34802c8a462b2cd9.scope: Deactivated successfully. Nov 23 04:51:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:49 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Nov 23 04:51:49 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... Nov 23 04:51:49 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:51:49 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:51:49 localhost ceph-mon[293353]: Reconfiguring crash.np0005532584 (monmap changed)... Nov 23 04:51:49 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:51:49 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:49 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:49 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:51:49 localhost ceph-mon[293353]: Reconfiguring osd.2 (monmap changed)... 
Nov 23 04:51:49 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:49 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:49 localhost podman[298764]: Nov 23 04:51:49 localhost podman[298764]: 2025-11-23 09:51:49.720427719 +0000 UTC m=+0.073690849 container create c6732d11f26798bb1700c4fc86cdd9461ac88b705dc47c4d4249b47a8951ddb4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_benz, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, architecture=x86_64, release=553, vcs-type=git) Nov 23 04:51:49 localhost systemd[1]: Started libpod-conmon-c6732d11f26798bb1700c4fc86cdd9461ac88b705dc47c4d4249b47a8951ddb4.scope. Nov 23 04:51:49 localhost systemd[1]: Started libcrun container. Nov 23 04:51:49 localhost podman[298764]: 2025-11-23 09:51:49.782869159 +0000 UTC m=+0.136132309 container init c6732d11f26798bb1700c4fc86cdd9461ac88b705dc47c4d4249b47a8951ddb4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_benz, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, release=553, io.openshift.tags=rhceph ceph) Nov 23 04:51:49 localhost podman[298764]: 2025-11-23 09:51:49.690015118 +0000 UTC m=+0.043278298 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:49 localhost podman[298764]: 2025-11-23 09:51:49.791660061 +0000 UTC m=+0.144923191 container start c6732d11f26798bb1700c4fc86cdd9461ac88b705dc47c4d4249b47a8951ddb4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_benz, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, release=553, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container) Nov 23 04:51:49 localhost podman[298764]: 2025-11-23 09:51:49.791902758 +0000 UTC m=+0.145165888 container attach c6732d11f26798bb1700c4fc86cdd9461ac88b705dc47c4d4249b47a8951ddb4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_benz, maintainer=Guillaume Abrioux , RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., release=553, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:51:49 localhost nice_benz[298779]: 167 167 Nov 23 04:51:49 localhost systemd[1]: libpod-c6732d11f26798bb1700c4fc86cdd9461ac88b705dc47c4d4249b47a8951ddb4.scope: Deactivated successfully. 
Nov 23 04:51:49 localhost podman[298764]: 2025-11-23 09:51:49.794491469 +0000 UTC m=+0.147754629 container died c6732d11f26798bb1700c4fc86cdd9461ac88b705dc47c4d4249b47a8951ddb4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_benz, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, vcs-type=git, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553) Nov 23 04:51:49 localhost systemd[1]: var-lib-containers-storage-overlay-c2031117ecc22eba82533e90c9eaed36bd7710829763fd1a09886a4e79f0468b-merged.mount: Deactivated successfully. Nov 23 04:51:49 localhost podman[298784]: 2025-11-23 09:51:49.885544925 +0000 UTC m=+0.078289042 container remove c6732d11f26798bb1700c4fc86cdd9461ac88b705dc47c4d4249b47a8951ddb4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_benz, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, ceph=True, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:51:49 localhost systemd[1]: libpod-conmon-c6732d11f26798bb1700c4fc86cdd9461ac88b705dc47c4d4249b47a8951ddb4.scope: Deactivated successfully. Nov 23 04:51:50 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... Nov 23 04:51:50 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... 
Nov 23 04:51:50 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:51:50 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:51:50 localhost ceph-mon[293353]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:51:50 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:50 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:51:50 localhost ceph-mon[293353]: Reconfiguring osd.5 (monmap changed)... Nov 23 04:51:50 localhost ceph-mon[293353]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:51:50 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:50 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:50 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:51:50 localhost podman[298859]: Nov 23 04:51:50 localhost podman[298859]: 2025-11-23 09:51:50.656947236 +0000 UTC m=+0.069968204 container create d2ed4e7b9f49d50acecd3abed699b6126d74b089d006665d7ce05d4a49f6389c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_austin, vcs-type=git, ceph=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, architecture=x86_64, release=553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:51:50 localhost systemd[1]: Started libpod-conmon-d2ed4e7b9f49d50acecd3abed699b6126d74b089d006665d7ce05d4a49f6389c.scope. Nov 23 04:51:50 localhost systemd[1]: Started libcrun container. 
Nov 23 04:51:50 localhost podman[298859]: 2025-11-23 09:51:50.72174709 +0000 UTC m=+0.134768068 container init d2ed4e7b9f49d50acecd3abed699b6126d74b089d006665d7ce05d4a49f6389c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_austin, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, distribution-scope=public) Nov 23 04:51:50 localhost podman[298859]: 2025-11-23 09:51:50.626285308 +0000 UTC m=+0.039306316 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:50 localhost podman[298859]: 2025-11-23 09:51:50.729730257 +0000 UTC m=+0.142751235 container start d2ed4e7b9f49d50acecd3abed699b6126d74b089d006665d7ce05d4a49f6389c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_austin, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, distribution-scope=public, com.redhat.component=rhceph-container, version=7, build-date=2025-09-24T08:57:55, name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:51:50 localhost podman[298859]: 2025-11-23 09:51:50.730007065 +0000 UTC m=+0.143028083 container attach d2ed4e7b9f49d50acecd3abed699b6126d74b089d006665d7ce05d4a49f6389c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_austin, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, 
CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:51:50 localhost wizardly_austin[298874]: 167 167 Nov 23 04:51:50 localhost systemd[1]: libpod-d2ed4e7b9f49d50acecd3abed699b6126d74b089d006665d7ce05d4a49f6389c.scope: Deactivated successfully. Nov 23 04:51:50 localhost podman[298859]: 2025-11-23 09:51:50.732162152 +0000 UTC m=+0.145183130 container died d2ed4e7b9f49d50acecd3abed699b6126d74b089d006665d7ce05d4a49f6389c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_austin, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, release=553, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, build-date=2025-09-24T08:57:55) Nov 23 04:51:50 localhost podman[298879]: 2025-11-23 09:51:50.825018533 +0000 UTC m=+0.080508509 container remove d2ed4e7b9f49d50acecd3abed699b6126d74b089d006665d7ce05d4a49f6389c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_austin, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.33.12) Nov 23 04:51:50 localhost systemd[1]: libpod-conmon-d2ed4e7b9f49d50acecd3abed699b6126d74b089d006665d7ce05d4a49f6389c.scope: Deactivated successfully. Nov 23 04:51:50 localhost systemd[1]: var-lib-containers-storage-overlay-d01a3b506d57c7ee9c9d0a52766848d9532b1fa5a3fe1e31992bdda70a58be94-merged.mount: Deactivated successfully. 
Nov 23 04:51:50 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... Nov 23 04:51:50 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... Nov 23 04:51:50 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:51:50 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:51:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:51:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:51 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_09:51:51 Nov 23 04:51:51 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 04:51:51 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 04:51:51 localhost ceph-mgr[286671]: [balancer INFO root] pools ['images', 'backups', 'manila_data', '.mgr', 'manila_metadata', 'vms', 'volumes'] Nov 23 04:51:51 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 04:51:51 localhost systemd[1]: tmp-crun.nox7OB.mount: Deactivated successfully. Nov 23 04:51:51 localhost podman[298914]: 2025-11-23 09:51:51.11984609 +0000 UTC m=+0.094164983 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350) Nov 23 04:51:51 localhost podman[298914]: 2025-11-23 09:51:51.137875137 +0000 UTC m=+0.112194010 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container) Nov 23 04:51:51 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:51:51 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... 
Nov 23 04:51:51 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:51:51 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:51 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:51 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:51:51 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.1810441094360693e-06 of space, bias 4.0, pg target 0.001741927228736274 quantized to 16 (current 16) Nov 23 04:51:51 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:51:51 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:51:51 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 04:51:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:51:51 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 23 04:51:51 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:51:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:51:51 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:51:51 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:51:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:51:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:51:51 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 04:51:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:51:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:51:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:51:51 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:51:51 localhost podman[298968]: Nov 23 04:51:51 localhost podman[298968]: 2025-11-23 09:51:51.525816252 +0000 UTC m=+0.074731991 container create 605a21cc09ec6a5c1fd8eef27727529961a7b37df9e8acfbb88dde6c10755015 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bardeen, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, build-date=2025-09-24T08:57:55) Nov 23 04:51:51 localhost systemd[1]: Started libpod-conmon-605a21cc09ec6a5c1fd8eef27727529961a7b37df9e8acfbb88dde6c10755015.scope. Nov 23 04:51:51 localhost systemd[1]: Started libcrun container. 
Nov 23 04:51:51 localhost podman[298968]: 2025-11-23 09:51:51.591186674 +0000 UTC m=+0.140102403 container init 605a21cc09ec6a5c1fd8eef27727529961a7b37df9e8acfbb88dde6c10755015 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bardeen, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., release=553, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git) Nov 23 04:51:51 localhost podman[298968]: 2025-11-23 09:51:51.495815545 +0000 UTC m=+0.044731304 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:51:51 localhost podman[298968]: 2025-11-23 09:51:51.601022028 +0000 UTC m=+0.149937757 container start 605a21cc09ec6a5c1fd8eef27727529961a7b37df9e8acfbb88dde6c10755015 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bardeen, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, ceph=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, architecture=x86_64) Nov 23 04:51:51 localhost podman[298968]: 2025-11-23 09:51:51.60143779 +0000 UTC m=+0.150353579 container attach 605a21cc09ec6a5c1fd8eef27727529961a7b37df9e8acfbb88dde6c10755015 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bardeen, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-09-24T08:57:55, 
io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main) Nov 23 04:51:51 localhost exciting_bardeen[298982]: 167 167 Nov 23 04:51:51 localhost systemd[1]: libpod-605a21cc09ec6a5c1fd8eef27727529961a7b37df9e8acfbb88dde6c10755015.scope: Deactivated successfully. Nov 23 04:51:51 localhost podman[298968]: 2025-11-23 09:51:51.604062962 +0000 UTC m=+0.152978721 container died 605a21cc09ec6a5c1fd8eef27727529961a7b37df9e8acfbb88dde6c10755015 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bardeen, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, release=553, ceph=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:51:51 localhost podman[298987]: 2025-11-23 09:51:51.703067034 +0000 UTC m=+0.083704430 container remove 605a21cc09ec6a5c1fd8eef27727529961a7b37df9e8acfbb88dde6c10755015 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bardeen, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, release=553, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12) Nov 23 04:51:51 localhost systemd[1]: libpod-conmon-605a21cc09ec6a5c1fd8eef27727529961a7b37df9e8acfbb88dde6c10755015.scope: Deactivated successfully. Nov 23 04:51:51 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005532585 (monmap changed)... 
Nov 23 04:51:51 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005532585 (monmap changed)... Nov 23 04:51:51 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:51:51 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:51:51 localhost systemd[1]: var-lib-containers-storage-overlay-dfe8a21aa8506acf5baaabcac58fa9faed8b383264186698944bea44f930c2c3-merged.mount: Deactivated successfully. Nov 23 04:51:52 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:52 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... Nov 23 04:51:52 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:51:52 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:52 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:52 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:51:52 localhost ceph-mon[293353]: Reconfiguring crash.np0005532585 (monmap changed)... Nov 23 04:51:52 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:51:52 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Nov 23 04:51:52 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Nov 23 04:51:52 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:51:52 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:51:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:53 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:53 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:53 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 23 04:51:53 localhost ceph-mon[293353]: Reconfiguring osd.0 (monmap changed)... Nov 23 04:51:53 localhost ceph-mon[293353]: Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:51:53 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Nov 23 04:51:53 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... 
Nov 23 04:51:53 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:51:53 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:51:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:51:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:54 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 23 04:51:54 localhost ceph-mon[293353]: Reconfiguring osd.3 (monmap changed)... Nov 23 04:51:54 localhost ceph-mon[293353]: Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:51:54 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... Nov 23 04:51:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... Nov 23 04:51:54 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:51:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:51:54 localhost podman[299003]: 2025-11-23 09:51:54.898510888 +0000 UTC m=+0.080327864 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:51:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.26940 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005532585.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch Nov 23 04:51:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. 
Nov 23 04:51:54 localhost podman[299003]: 2025-11-23 09:51:54.915443012 +0000 UTC m=+0.097260008 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:51:54 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:51:54 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:51:54 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:51:54 localhost podman[299026]: 2025-11-23 09:51:54.99849905 +0000 UTC m=+0.075823455 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 04:51:55 localhost podman[299026]: 2025-11-23 09:51:55.040600922 +0000 UTC m=+0.117925387 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible) Nov 23 04:51:55 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:51:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:55 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... Nov 23 04:51:55 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... Nov 23 04:51:55 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:51:55 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:51:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:51:55 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... 
Nov 23 04:51:55 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:51:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:51:55 localhost ceph-mon[293353]: Deploying daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:51:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:55 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:51:56 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... Nov 23 04:51:56 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:51:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:57 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:51:57 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:51:57 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:51:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:57 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:58 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005532586 (monmap changed)... Nov 23 04:51:58 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005532586 (monmap changed)... 
Nov 23 04:51:58 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain Nov 23 04:51:58 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain Nov 23 04:51:58 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:51:58 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:51:58 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:51:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:51:59 localhost nova_compute[280939]: 2025-11-23 09:51:59.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:51:59 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Nov 23 04:51:59 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... Nov 23 04:51:59 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:51:59 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:51:59 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:59 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:59 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:51:59 localhost ceph-mon[293353]: Reconfiguring crash.np0005532586 (monmap changed)... Nov 23 04:51:59 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain Nov 23 04:51:59 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:59 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:51:59 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 23 04:51:59 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:51:59 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:00 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Nov 23 04:52:00 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... 
Nov 23 04:52:00 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:52:00 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:52:00 localhost ceph-mon[293353]: Reconfiguring osd.1 (monmap changed)... Nov 23 04:52:00 localhost ceph-mon[293353]: Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:52:00 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:00 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:00 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 23 04:52:00 localhost ceph-mon[293353]: Reconfiguring osd.4 (monmap changed)... Nov 23 04:52:00 localhost ceph-mon[293353]: Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:52:00 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:00 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:00 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:52:01 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)... Nov 23 04:52:01 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)... Nov 23 04:52:01 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain Nov 23 04:52:01 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain Nov 23 04:52:01 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:01 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:01 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:52:01 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)... 
Nov 23 04:52:01 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain Nov 23 04:52:01 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:01 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:52:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5493 writes, 24K keys, 5493 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5493 writes, 796 syncs, 6.90 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 123 writes, 269 keys, 123 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s#012Interval WAL: 123 writes, 61 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 04:52:01 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005532586.thmvqb (monmap changed)... Nov 23 04:52:01 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005532586.thmvqb (monmap changed)... Nov 23 04:52:02 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain Nov 23 04:52:02 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain Nov 23 04:52:02 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:52:02 localhost nova_compute[280939]: 2025-11-23 09:52:02.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:52:02 localhost nova_compute[280939]: 2025-11-23 09:52:02.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:52:02 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:02 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:02 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:52:02 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)... 
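A sketch of the kind of auth call cephadm is dispatching through the mgr in the entries above, expressed as the equivalent CLI invocation; the entity and caps are copied from the dispatched "auth get-or-create" command for mds.mds.np0005532586.mfohsb, and an admin keyring on the host is assumed.

    import json, subprocess

    entity = "mds.mds.np0005532586.mfohsb"        # entity name from the dispatched command
    caps = ["mon", "profile mds",
            "osd", "allow rw tag cephfs *=*",
            "mds", "allow"]                       # caps exactly as listed in the log entry

    out = subprocess.run(
        ["ceph", "auth", "get-or-create", entity, *caps, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out))                        # expected shape: [{"entity": ..., "key": ..., "caps": {...}}]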
Nov 23 04:52:02 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain Nov 23 04:52:02 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:02 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:02 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:02 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:52:03 localhost nova_compute[280939]: 2025-11-23 09:52:03.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:52:03 localhost nova_compute[280939]: 2025-11-23 09:52:03.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:52:03 localhost nova_compute[280939]: 2025-11-23 09:52:03.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:52:03 localhost nova_compute[280939]: 2025-11-23 09:52:03.152 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:52:03 localhost nova_compute[280939]: 2025-11-23 09:52:03.152 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:52:03 localhost nova_compute[280939]: 2025-11-23 09:52:03.152 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:52:03 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:03 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:03 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:03 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:04 localhost nova_compute[280939]: 2025-11-23 09:52:04.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:52:04 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:04 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:04 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:52:05 localhost nova_compute[280939]: 2025-11-23 09:52:05.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:52:05 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:05 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:05 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:05 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:05 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 3121e4f2-8777-4b4c-bab5-183fb8fd7c5d (Updating node-proxy deployment (+4 -> 4)) Nov 23 04:52:05 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 3121e4f2-8777-4b4c-bab5-183fb8fd7c5d (Updating node-proxy deployment (+4 -> 4)) Nov 23 04:52:05 localhost ceph-mgr[286671]: [progress INFO root] Completed event 
3121e4f2-8777-4b4c-bab5-183fb8fd7c5d (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Nov 23 04:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:52:05 localhost podman[299131]: 2025-11-23 09:52:05.76639122 +0000 UTC m=+0.083570975 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent) Nov 23 04:52:05 localhost podman[299131]: 2025-11-23 09:52:05.775443979 +0000 UTC m=+0.092623714 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:52:05 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.149 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.149 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.150 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.150 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.150 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:52:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 04:52:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5358 writes, 23K keys, 5358 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5358 writes, 729 syncs, 7.35 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 153 writes, 504 keys, 153 commit groups, 1.0 writes per commit group, ingest: 0.72 MB, 0.00 MB/s#012Interval WAL: 153 writes, 64 syncs, 2.39 writes per sync, written: 0.00 GB, 0.00 
MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 04:52:06 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:06 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:06 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:06 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:52:06 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:06 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:06 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:52:06 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/230438963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.600 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:52:06 localhost openstack_network_exporter[241732]: ERROR 09:52:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:52:06 localhost openstack_network_exporter[241732]: ERROR 09:52:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:52:06 localhost openstack_network_exporter[241732]: ERROR 09:52:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:52:06 localhost openstack_network_exporter[241732]: ERROR 09:52:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:52:06 localhost openstack_network_exporter[241732]: Nov 23 04:52:06 localhost openstack_network_exporter[241732]: ERROR 09:52:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:52:06 localhost openstack_network_exporter[241732]: Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.853 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.855 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12310MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.856 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.857 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.936 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.936 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:52:06 localhost nova_compute[280939]: 2025-11-23 09:52:06.968 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:52:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:52:07 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:52:07 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:07 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:07 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:52:07 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2283958204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:52:07 localhost nova_compute[280939]: 2025-11-23 09:52:07.420 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:52:07 localhost nova_compute[280939]: 2025-11-23 09:52:07.427 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:52:07 localhost nova_compute[280939]: 2025-11-23 09:52:07.452 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:52:07 localhost nova_compute[280939]: 2025-11-23 09:52:07.454 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:52:07 localhost nova_compute[280939]: 2025-11-23 
09:52:07.455 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:52:08 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:08 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:08 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:08 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:52:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:52:09 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:09 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:09 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:52:09.734 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:52:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:52:09.735 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:52:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:52:09.735 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:52:10 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:10 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:10 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.26985 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch Nov 23 04:52:10 localhost ceph-mgr[286671]: [cephadm INFO root] Reconfig service osd.default_drive_group Nov 23 04:52:10 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group Nov 23 04:52:10 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 5aeb72e2-7ad0-4ab8-a1b0-951477df66cc (Updating node-proxy deployment 
(+4 -> 4)) Nov 23 04:52:10 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 5aeb72e2-7ad0-4ab8-a1b0-951477df66cc (Updating node-proxy deployment (+4 -> 4)) Nov 23 04:52:10 localhost ceph-mgr[286671]: [progress INFO root] Completed event 5aeb72e2-7ad0-4ab8-a1b0-951477df66cc (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Nov 23 04:52:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:52:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:52:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:52:10 localhost systemd[1]: tmp-crun.mXUNhE.mount: Deactivated successfully. Nov 23 04:52:10 localhost podman[299196]: 2025-11-23 09:52:10.910426365 +0000 UTC m=+0.095580536 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:52:10 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:10 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:10 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:52:10 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:10 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:10 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:10 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:10 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:10 localhost podman[299195]: 2025-11-23 09:52:10.960295457 +0000 UTC m=+0.148080610 container health_status 
7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd) Nov 23 04:52:11 localhost podman[299197]: 2025-11-23 09:52:11.010713426 +0000 UTC m=+0.193427962 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 04:52:11 localhost podman[299196]: 2025-11-23 09:52:11.024682928 +0000 UTC m=+0.209837219 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:52:11 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:52:11 localhost podman[299197]: 2025-11-23 09:52:11.049504366 +0000 UTC m=+0.232218902 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:52:11 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
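A minimal sketch (assuming the openstack client keyring and /etc/ceph/ceph.conf are present on the host) of the "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" probe that nova-compute's resource tracker runs in the entries above, reading only the cluster-wide stats block:

    import json, subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]              # cluster-wide totals; per-pool data sits under "pools"
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB avail / "
          f"{stats['total_bytes'] / 2**30:.1f} GiB total")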
Nov 23 04:52:11 localhost podman[299195]: 2025-11-23 09:52:11.077715478 +0000 UTC m=+0.265500681 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118) Nov 23 04:52:11 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:52:11 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:52:11 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
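A small sketch of the health-probe pattern behind the transient units above: systemd starts "podman healthcheck run <container>", which executes the container's configured test command (here "/openstack/healthcheck") and reports health via the exit code. Container names are taken from the log; podman is assumed to be on PATH.

    import subprocess

    for name in ("multipathd", "ovn_controller", "node_exporter"):   # containers probed above
        rc = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True, text=True).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")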
Nov 23 04:52:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Nov 23 04:52:11 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:11 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:11 localhost nova_compute[280939]: 2025-11-23 09:52:11.456 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:52:11 localhost podman[299334]: Nov 23 04:52:11 localhost podman[299334]: 2025-11-23 09:52:11.669448215 +0000 UTC m=+0.075981531 container create 32986e30bced53c17866f402f1f9ac41b19be595c0dfd3226500f887ae343287 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_tharp, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux , version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git) Nov 23 04:52:11 localhost systemd[1]: Started libpod-conmon-32986e30bced53c17866f402f1f9ac41b19be595c0dfd3226500f887ae343287.scope. Nov 23 04:52:11 localhost systemd[1]: Started libcrun container. 
Nov 23 04:52:11 localhost podman[299334]: 2025-11-23 09:52:11.639385765 +0000 UTC m=+0.045919101 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:52:11 localhost podman[299334]: 2025-11-23 09:52:11.746719634 +0000 UTC m=+0.153252940 container init 32986e30bced53c17866f402f1f9ac41b19be595c0dfd3226500f887ae343287 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_tharp, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vendor=Red Hat, Inc., release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, version=7) Nov 23 04:52:11 localhost podman[299334]: 2025-11-23 09:52:11.757747005 +0000 UTC m=+0.164280321 container start 32986e30bced53c17866f402f1f9ac41b19be595c0dfd3226500f887ae343287 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_tharp, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, ceph=True, release=553, GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7) Nov 23 04:52:11 localhost podman[299334]: 2025-11-23 09:52:11.757970832 +0000 UTC m=+0.164504188 container attach 32986e30bced53c17866f402f1f9ac41b19be595c0dfd3226500f887ae343287 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_tharp, GIT_BRANCH=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, ceph=True, 
GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:52:11 localhost tender_tharp[299349]: 167 167 Nov 23 04:52:11 localhost systemd[1]: libpod-32986e30bced53c17866f402f1f9ac41b19be595c0dfd3226500f887ae343287.scope: Deactivated successfully. Nov 23 04:52:11 localhost podman[299334]: 2025-11-23 09:52:11.762055598 +0000 UTC m=+0.168588914 container died 32986e30bced53c17866f402f1f9ac41b19be595c0dfd3226500f887ae343287 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_tharp, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., distribution-scope=public, version=7, GIT_BRANCH=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:52:11 localhost podman[299354]: 2025-11-23 09:52:11.859259084 +0000 UTC m=+0.084107542 container remove 32986e30bced53c17866f402f1f9ac41b19be595c0dfd3226500f887ae343287 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_tharp, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55) Nov 23 04:52:11 localhost systemd[1]: libpod-conmon-32986e30bced53c17866f402f1f9ac41b19be595c0dfd3226500f887ae343287.scope: Deactivated successfully. Nov 23 04:52:11 localhost systemd[1]: var-lib-containers-storage-overlay-8dc5782859b5c3c47d41c264b9e52b70fb529bb0ecbf0814cfa904bae4562157-merged.mount: Deactivated successfully. 
Nov 23 04:52:11 localhost ceph-mon[293353]: Reconfig service osd.default_drive_group Nov 23 04:52:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:11 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:52:11 localhost ceph-mon[293353]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:52:12 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:52:12 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:52:12 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mon.np0005532585 172.18.0.107:0/2529466979; not ready for session (expect reconnect) Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr finish mon failed to return metadata for mon.np0005532585: (2) No such file or directory Nov 23 04:52:12 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:12 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:12 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e85 e85: 6 total, 6 up, 6 in Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr handle_mgr_map I was active but no longer am Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn e: '/usr/bin/ceph-mgr' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 0: '/usr/bin/ceph-mgr' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 1: '-n' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 2: 'mgr.np0005532584.naxwxy' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 3: '-f' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 4: '--setuser' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 5: 'ceph' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 6: '--setgroup' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 7: 'ceph' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 8: '--default-log-to-file=false' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 9: '--default-log-to-journald=true' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn 10: '--default-log-to-stderr=false' Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn respawning with exe /usr/bin/ceph-mgr Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr respawn exe_path /proc/self/exe Nov 23 04:52:12 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:12.669+0000 
7ffa4a737640 -1 mgr handle_mgr_map I was active but no longer am Nov 23 04:52:12 localhost podman[299432]: Nov 23 04:52:12 localhost systemd-logind[760]: Session 68 logged out. Waiting for processes to exit. Nov 23 04:52:12 localhost podman[299432]: 2025-11-23 09:52:12.734796446 +0000 UTC m=+0.099725674 container create e92b99b17ed9cde1c764d729c0e6116ebe61a39e4601f0a8259a5251588d4101 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_mestorf, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, version=7) Nov 23 04:52:12 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: ignoring --setuser ceph since I am not root Nov 23 04:52:12 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: ignoring --setgroup ceph since I am not root Nov 23 04:52:12 localhost ceph-mgr[286671]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Nov 23 04:52:12 localhost ceph-mgr[286671]: pidfile_write: ignore empty --pid-file Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr[py] Loading python module 'alerts' Nov 23 04:52:12 localhost podman[299432]: 2025-11-23 09:52:12.693209509 +0000 UTC m=+0.058138768 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:52:12 localhost systemd[1]: Started libpod-conmon-e92b99b17ed9cde1c764d729c0e6116ebe61a39e4601f0a8259a5251588d4101.scope. Nov 23 04:52:12 localhost systemd[1]: Started libcrun container. 
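The "mgr respawn N: '...'" lines just above are ceph-mgr printing its own argv before re-executing itself after losing the active role ("I was active but no longer am"). A small sketch that rebuilds the command line from those journal messages:

    import re
    import shlex

    RESPAWN_RE = re.compile(r"mgr respawn (\d+): '([^']*)'")

    def respawn_cmdline(lines):
        # Collect "mgr respawn <index>: '<arg>'" messages and join them in index order.
        args = {}
        for line in lines:
            m = RESPAWN_RE.search(line)
            if m:
                args[int(m.group(1))] = m.group(2)
        return shlex.join(args[i] for i in sorted(args))

    sample = [
        "ceph-mgr[286671]: mgr respawn 0: '/usr/bin/ceph-mgr'",
        "ceph-mgr[286671]: mgr respawn 1: '-n'",
        "ceph-mgr[286671]: mgr respawn 2: 'mgr.np0005532584.naxwxy'",
    ]
    print(respawn_cmdline(sample))   # /usr/bin/ceph-mgr -n mgr.np0005532584.naxwxy

Applied to the full set of respawn lines this reproduces the daemon's original invocation, including the --default-log-to-journald=true flag that explains why all of its output lands in this journal.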
Nov 23 04:52:12 localhost podman[299432]: 2025-11-23 09:52:12.822730455 +0000 UTC m=+0.187659663 container init e92b99b17ed9cde1c764d729c0e6116ebe61a39e4601f0a8259a5251588d4101 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_mestorf, vcs-type=git, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.33.12, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:52:12 localhost podman[299432]: 2025-11-23 09:52:12.841136364 +0000 UTC m=+0.206065572 container start e92b99b17ed9cde1c764d729c0e6116ebe61a39e4601f0a8259a5251588d4101 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_mestorf, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, release=553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 23 04:52:12 localhost podman[299432]: 2025-11-23 09:52:12.842041292 +0000 UTC m=+0.206970550 container attach e92b99b17ed9cde1c764d729c0e6116ebe61a39e4601f0a8259a5251588d4101 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_mestorf, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_CLEAN=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_BRANCH=main) Nov 23 04:52:12 localhost sleepy_mestorf[299471]: 167 167 Nov 23 04:52:12 localhost systemd[1]: libpod-e92b99b17ed9cde1c764d729c0e6116ebe61a39e4601f0a8259a5251588d4101.scope: Deactivated successfully. Nov 23 04:52:12 localhost podman[299432]: 2025-11-23 09:52:12.845760677 +0000 UTC m=+0.210689885 container died e92b99b17ed9cde1c764d729c0e6116ebe61a39e4601f0a8259a5251588d4101 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_mestorf, name=rhceph, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main) Nov 23 04:52:12 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:12.859+0000 7f9d59da9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr[py] Module alerts has missing NOTIFY_TYPES member Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr[py] Loading python module 'balancer' Nov 23 04:52:12 localhost systemd[1]: tmp-crun.yNuxgv.mount: Deactivated successfully. 
Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr[py] Module balancer has missing NOTIFY_TYPES member Nov 23 04:52:12 localhost ceph-mgr[286671]: mgr[py] Loading python module 'cephadm' Nov 23 04:52:12 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:12.929+0000 7f9d59da9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Nov 23 04:52:12 localhost systemd[1]: var-lib-containers-storage-overlay-998d1abc23f4a8c451c2bcd0173fc2ea3904b4ef5c494f00aa9558f83ef75bce-merged.mount: Deactivated successfully. Nov 23 04:52:12 localhost podman[299476]: 2025-11-23 09:52:12.967332976 +0000 UTC m=+0.109604050 container remove e92b99b17ed9cde1c764d729c0e6116ebe61a39e4601f0a8259a5251588d4101 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_mestorf, maintainer=Guillaume Abrioux , ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, release=553, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-type=git, io.buildah.version=1.33.12, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Nov 23 04:52:12 localhost systemd[1]: libpod-conmon-e92b99b17ed9cde1c764d729c0e6116ebe61a39e4601f0a8259a5251588d4101.scope: Deactivated successfully. Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:52:13 localhost ceph-mon[293353]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:52:13 localhost ceph-mon[293353]: from='client.? 172.18.0.200:0/885747258' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:52:13 localhost ceph-mon[293353]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:52:13 localhost ceph-mon[293353]: Activating manager daemon np0005532585.gzafiw Nov 23 04:52:13 localhost ceph-mon[293353]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 23 04:52:13 localhost ceph-mon[293353]: Manager daemon np0005532585.gzafiw is now available Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain.devices.0"} : dispatch Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain.devices.0"}]': finished Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain.devices.0"} : dispatch Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain.devices.0"}]': finished Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532585.gzafiw/mirror_snapshot_schedule"} : dispatch Nov 23 04:52:13 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532585.gzafiw/trash_purge_schedule"} : dispatch Nov 23 04:52:13 localhost sshd[299492]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:52:13 localhost systemd[1]: session-68.scope: Deactivated successfully. Nov 23 04:52:13 localhost systemd[1]: session-68.scope: Consumed 19.777s CPU time. Nov 23 04:52:13 localhost systemd-logind[760]: Removed session 68. Nov 23 04:52:13 localhost systemd-logind[760]: New session 69 of user ceph-admin. Nov 23 04:52:13 localhost systemd[1]: Started Session 69 of User ceph-admin. Nov 23 04:52:13 localhost ceph-mgr[286671]: mgr[py] Loading python module 'crash' Nov 23 04:52:13 localhost ceph-mgr[286671]: mgr[py] Module crash has missing NOTIFY_TYPES member Nov 23 04:52:13 localhost ceph-mgr[286671]: mgr[py] Loading python module 'dashboard' Nov 23 04:52:13 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:13.555+0000 7f9d59da9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'devicehealth' Nov 23 04:52:14 localhost ceph-mon[293353]: removing stray HostCache host record np0005532582.localdomain.devices.0 Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'diskprediction_local' Nov 23 04:52:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:14.111+0000 7f9d59da9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Nov 23 04:52:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
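The cmd={"prefix": "mgr fail"} dispatched by client.admin above is what forced the failover recorded here: the mgr on np0005532584 dropped back to standby and respawned, and np0005532585.gzafiw was activated. A minimal sketch of driving the same failover from an admin host, assuming a reachable cluster with /etc/ceph/ceph.conf and an admin keyring in place:

    import json
    import subprocess

    def ceph_json(*args):
        # Run a ceph CLI command and parse its JSON output.
        out = subprocess.run(("ceph", *args, "--format", "json"),
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    # Fail the currently active mgr, then report which standby took over.
    subprocess.run(("ceph", "mgr", "fail"), check=True)
    stat = ceph_json("mgr", "stat")
    print("active mgr is now", stat.get("active_name"))

The active_name field is what ceph mgr stat reports on recent releases; treat it as an assumption rather than a stable interface.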
Nov 23 04:52:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Nov 23 04:52:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: from numpy import show_config as show_numpy_config Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'influx' Nov 23 04:52:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:14.242+0000 7f9d59da9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Nov 23 04:52:14 localhost podman[299613]: 2025-11-23 09:52:14.282134331 +0000 UTC m=+0.106324719 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Module influx has missing NOTIFY_TYPES member Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'insights' Nov 23 04:52:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:14.299+0000 7f9d59da9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'iostat' Nov 23 04:52:14 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:14 localhost podman[299613]: 2025-11-23 09:52:14.383290959 +0000 UTC m=+0.207481337 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, 
GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=553, vcs-type=git, architecture=x86_64) Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Module iostat has missing NOTIFY_TYPES member Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'k8sevents' Nov 23 04:52:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:14.416+0000 7f9d59da9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'localpool' Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'mds_autoscaler' Nov 23 04:52:14 localhost ceph-mgr[286671]: mgr[py] Loading python module 'mirroring' Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'nfs' Nov 23 04:52:15 localhost ceph-mon[293353]: [23/Nov/2025:09:52:14] ENGINE Bus STARTING Nov 23 04:52:15 localhost ceph-mon[293353]: [23/Nov/2025:09:52:14] ENGINE Serving on http://172.18.0.107:8765 Nov 23 04:52:15 localhost ceph-mon[293353]: [23/Nov/2025:09:52:14] ENGINE Serving on https://172.18.0.107:7150 Nov 23 04:52:15 localhost ceph-mon[293353]: [23/Nov/2025:09:52:14] ENGINE Client ('172.18.0.107', 35764) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 23 04:52:15 localhost ceph-mon[293353]: [23/Nov/2025:09:52:14] ENGINE Bus STARTED Nov 23 04:52:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:15.163+0000 7f9d59da9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Module nfs has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'orchestrator' Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'osd_perf_query' Nov 23 04:52:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:15.304+0000 7f9d59da9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Module osd_perf_query 
has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'osd_support' Nov 23 04:52:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:15.367+0000 7f9d59da9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'pg_autoscaler' Nov 23 04:52:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:15.423+0000 7f9d59da9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'progress' Nov 23 04:52:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:15.490+0000 7f9d59da9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Module progress has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'prometheus' Nov 23 04:52:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:15.549+0000 7f9d59da9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'rbd_support' Nov 23 04:52:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:15.846+0000 7f9d59da9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Nov 23 04:52:15 localhost ceph-mgr[286671]: mgr[py] Loading python module 'restful' Nov 23 04:52:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:15.928+0000 7f9d59da9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Nov 23 04:52:16 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:16 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:16 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:16 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'rgw' Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Module rgw has missing NOTIFY_TYPES member Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'rook' Nov 23 04:52:16 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:16.252+0000 7f9d59da9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Nov 23 04:52:16 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:16 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 
adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Module rook has missing NOTIFY_TYPES member Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'selftest' Nov 23 04:52:16 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:16.678+0000 7f9d59da9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Module selftest has missing NOTIFY_TYPES member Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'snap_schedule' Nov 23 04:52:16 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:16.740+0000 7f9d59da9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'stats' Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'status' Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Module status has missing NOTIFY_TYPES member Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'telegraf' Nov 23 04:52:16 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:16.931+0000 7f9d59da9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Nov 23 04:52:16 localhost ceph-mgr[286671]: mgr[py] Loading python module 'telemetry' Nov 23 04:52:16 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:16.989+0000 7f9d59da9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Nov 23 04:52:17 localhost podman[239764]: time="2025-11-23T09:52:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:52:17 localhost podman[239764]: @ - - [23/Nov/2025:09:52:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:52:17 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:52:17 localhost ceph-mgr[286671]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Nov 23 04:52:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'test_orchestrator' Nov 23 04:52:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:17.125+0000 7f9d59da9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Nov 23 04:52:17 localhost podman[239764]: @ - - [23/Nov/2025:09:52:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18223 "" "Go-http-client/1.1" Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:17 localhost ceph-mon[293353]: 
from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 04:52:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:52:17 localhost ceph-mgr[286671]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Nov 23 04:52:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'volumes' Nov 23 04:52:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:17.281+0000 7f9d59da9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Nov 23 04:52:17 localhost ceph-mgr[286671]: mgr[py] Module volumes has missing NOTIFY_TYPES member Nov 23 04:52:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:17.465+0000 7f9d59da9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Nov 23 04:52:17 localhost ceph-mgr[286671]: mgr[py] Loading python module 'zabbix' Nov 23 04:52:17 localhost ceph-mgr[286671]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Nov 23 04:52:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:52:17.525+0000 7f9d59da9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Nov 23 04:52:17 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55fbaf007600 mon_map magic: 0 from mon.2 v2:172.18.0.103:3300/0 Nov 23 04:52:17 localhost ceph-mgr[286671]: client.0 ms_handle_reset on v2:172.18.0.107:6810/4027327596 Nov 23 04:52:18 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 04:52:18 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:52:18 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 04:52:18 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 04:52:18 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 
939524096 Nov 23 04:52:18 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:52:18 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:52:18 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:52:18 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:52:18 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:52:18 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:18 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:18 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:18 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:18 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:19 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:52:19 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:52:19 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:52:19 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:52:19 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:19 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:20 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:52:20 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:52:20 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:52:20 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:52:20 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:20 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:20 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:20 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:20 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:20 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:20 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:20 localhost ceph-mon[293353]: 
from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:52:20 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:20 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:20 localhost podman[300566]: Nov 23 04:52:20 localhost podman[300566]: 2025-11-23 09:52:20.471237012 +0000 UTC m=+0.086407433 container create 157e50ac82b255176198415a9eff4ef62dd5ca4a5c84031f03a298ba9b565cc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_torvalds, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, name=rhceph, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , version=7) Nov 23 04:52:20 localhost systemd[1]: Started libpod-conmon-157e50ac82b255176198415a9eff4ef62dd5ca4a5c84031f03a298ba9b565cc3.scope. Nov 23 04:52:20 localhost systemd[1]: Started libcrun container. 
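The "Unable to set osd_memory_target ... below minimum 939524096" messages a few entries back are cephadm's memory autotuner computing a per-OSD budget that falls under the option's configured minimum, so the config set is rejected and the existing value stays in effect. A quick check of the arithmetic, using the numbers from those log lines:

    # Values copied from the mon log above.
    MiB = 1024 * 1024
    autotuned = 877_246_668      # per-OSD target cephadm tried to apply
    minimum = 939_524_096        # osd_memory_target minimum reported in the error

    print(f"autotuned target: {autotuned / MiB:.1f} MiB")   # 836.6 MiB, matches "836.6M"
    print(f"option minimum:   {minimum / MiB:.1f} MiB")     # 896.0 MiB
    print(f"shortfall:        {(minimum - autotuned) / MiB:.1f} MiB")

In other words, the per-OSD share of host memory computed for these nodes sits below the 896 MiB floor, so the autotuned value cannot be applied.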
Nov 23 04:52:20 localhost podman[300566]: 2025-11-23 09:52:20.436254619 +0000 UTC m=+0.051425050 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:52:20 localhost podman[300566]: 2025-11-23 09:52:20.542122643 +0000 UTC m=+0.157293054 container init 157e50ac82b255176198415a9eff4ef62dd5ca4a5c84031f03a298ba9b565cc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_torvalds, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , RELEASE=main, release=553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, distribution-scope=public, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55) Nov 23 04:52:20 localhost podman[300566]: 2025-11-23 09:52:20.552229076 +0000 UTC m=+0.167399457 container start 157e50ac82b255176198415a9eff4ef62dd5ca4a5c84031f03a298ba9b565cc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_torvalds, release=553, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux ) Nov 23 04:52:20 localhost podman[300566]: 2025-11-23 09:52:20.552430382 +0000 UTC m=+0.167600803 container attach 157e50ac82b255176198415a9eff4ef62dd5ca4a5c84031f03a298ba9b565cc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_torvalds, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, name=rhceph, 
build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:52:20 localhost clever_torvalds[300582]: 167 167 Nov 23 04:52:20 localhost systemd[1]: libpod-157e50ac82b255176198415a9eff4ef62dd5ca4a5c84031f03a298ba9b565cc3.scope: Deactivated successfully. Nov 23 04:52:20 localhost podman[300566]: 2025-11-23 09:52:20.555452146 +0000 UTC m=+0.170622547 container died 157e50ac82b255176198415a9eff4ef62dd5ca4a5c84031f03a298ba9b565cc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_torvalds, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=7) Nov 23 04:52:20 localhost podman[300587]: 2025-11-23 09:52:20.663090343 +0000 UTC m=+0.092404708 container remove 157e50ac82b255176198415a9eff4ef62dd5ca4a5c84031f03a298ba9b565cc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_torvalds, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, version=7, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553) Nov 23 04:52:20 localhost systemd[1]: libpod-conmon-157e50ac82b255176198415a9eff4ef62dd5ca4a5c84031f03a298ba9b565cc3.scope: Deactivated successfully. Nov 23 04:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
Nov 23 04:52:21 localhost podman[300610]: 2025-11-23 09:52:21.408216923 +0000 UTC m=+0.088958851 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7) Nov 23 04:52:21 localhost ceph-mon[293353]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:52:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 23 04:52:21 localhost podman[300610]: 2025-11-23 09:52:21.426396466 +0000 UTC m=+0.107138394 container exec_died 
0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9) Nov 23 04:52:21 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:52:21 localhost systemd[1]: var-lib-containers-storage-overlay-914eb83173d3da73fde56814d1507fe81fbbc50b43fd8980a0a520a388649f80-merged.mount: Deactivated successfully. 
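The "Started /usr/bin/podman healthcheck run 0e8658..." unit followed by the health_status=healthy event is the periodic healthcheck podman schedules for the openstack_network_exporter container (deployed by edpm_ansible, per its labels). A hedged sketch of running the same check by hand, relying on podman exiting 0 when the configured healthcheck passes:

    import subprocess

    CONTAINER = "openstack_network_exporter"   # container_name from the labels above

    # Execute the container's configured healthcheck once and report the outcome.
    result = subprocess.run(("podman", "healthcheck", "run", CONTAINER))
    print("healthy" if result.returncode == 0
          else f"unhealthy (rc={result.returncode})")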
Nov 23 04:52:22 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:52:22 localhost ceph-mon[293353]: Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:52:22 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:22 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:22 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:22 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:22 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 23 04:52:22 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:22 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:23 localhost ceph-mon[293353]: Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:52:23 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:23 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:23 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:23 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:23 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:23 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 23 04:52:23 localhost ceph-mon[293353]: Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:52:24 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:24 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. 
Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:24.954539) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891544954580, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2390, "num_deletes": 267, "total_data_size": 7367746, "memory_usage": 7671600, "flush_reason": "Manual Compaction"} Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891544971929, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 4100031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14363, "largest_seqno": 16748, "table_properties": {"data_size": 4090843, "index_size": 5437, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 23448, "raw_average_key_size": 21, "raw_value_size": 4070661, "raw_average_value_size": 3811, "num_data_blocks": 230, "num_entries": 1068, "num_filter_entries": 1068, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891492, "oldest_key_time": 1763891492, "file_creation_time": 1763891544, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17489 microseconds, and 8042 cpu microseconds. Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:24.972026) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 4100031 bytes OK Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:24.972051) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:24.974150) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:24.974173) EVENT_LOG_v1 {"time_micros": 1763891544974165, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:24.974197) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 7356005, prev total WAL file size 7360785, number of live WAL files 2. Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:24.975611) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323634' seq:72057594037927935, type:22 .. '6B760031353238' seq:0, type:0; will stop at (end) Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(4003KB)], [21(15MB)] Nov 23 04:52:24 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891544975668, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 20221594, "oldest_snapshot_seqno": -1} Nov 23 04:52:24 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:24 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:24 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:24 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:24 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 23 04:52:24 localhost ceph-mon[293353]: Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 11184 keys, 19357794 bytes, temperature: kUnknown Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891545053028, "cf_name": "default", "job": 10, "event": 
"table_file_creation", "file_number": 24, "file_size": 19357794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19290088, "index_size": 38677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 299582, "raw_average_key_size": 26, "raw_value_size": 19095170, "raw_average_value_size": 1707, "num_data_blocks": 1480, "num_entries": 11184, "num_filter_entries": 11184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891544, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:25.053429) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 19357794 bytes Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:25.055799) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 260.8 rd, 249.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 15.4 +0.0 blob) out(18.5 +0.0 blob), read-write-amplify(9.7) write-amplify(4.7) OK, records in: 11680, records dropped: 496 output_compression: NoCompression Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:25.055881) EVENT_LOG_v1 {"time_micros": 1763891545055853, "job": 10, "event": "compaction_finished", "compaction_time_micros": 77523, "compaction_time_cpu_micros": 42892, "output_level": 6, "num_output_files": 1, "total_output_size": 19357794, "num_input_records": 11680, "num_output_records": 11184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891545057238, "job": 10, "event": "table_file_deletion", "file_number": 23} Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 
1763891545059936, "job": 10, "event": "table_file_deletion", "file_number": 21} Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:24.975528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:25.060065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:25.060072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:25.060075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:25.060078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:25 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:25.060081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:52:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:52:25 localhost podman[300649]: 2025-11-23 09:52:25.418813884 +0000 UTC m=+0.084921277 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Nov 23 04:52:25 localhost podman[300649]: 2025-11-23 09:52:25.432404294 +0000 UTC m=+0.098511707 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 23 04:52:25 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
Nov 23 04:52:25 localhost podman[300650]: 2025-11-23 09:52:25.525782621 +0000 UTC m=+0.188519840 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:52:25 localhost podman[300650]: 2025-11-23 09:52:25.569272636 +0000 UTC m=+0.232009905 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:52:25 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:52:26 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:26 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:26 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:26 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:26 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:52:26 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:26 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:27 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:27 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:52:27 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:27 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:27 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:52:28 localhost ceph-mon[293353]: Saving service mon spec with placement label:mon Nov 23 04:52:28 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:52:28 localhost ceph-mon[293353]: Reconfiguring mon.np0005532583 (monmap changed)... 
Nov 23 04:52:28 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain Nov 23 04:52:28 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:28 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:28 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:28 localhost podman[300763]: Nov 23 04:52:28 localhost podman[300763]: 2025-11-23 09:52:28.849215164 +0000 UTC m=+0.082812112 container create 8ea90f1c1dd8189dde84a188676f56cb24f9e36b7a67b21630e751d655409b0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_bassi, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.33.12) Nov 23 04:52:28 localhost systemd[1]: Started libpod-conmon-8ea90f1c1dd8189dde84a188676f56cb24f9e36b7a67b21630e751d655409b0a.scope. Nov 23 04:52:28 localhost podman[300763]: 2025-11-23 09:52:28.814315884 +0000 UTC m=+0.047912882 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:52:28 localhost systemd[1]: Started libcrun container. 
Nov 23 04:52:28 localhost podman[300763]: 2025-11-23 09:52:28.945965905 +0000 UTC m=+0.179562853 container init 8ea90f1c1dd8189dde84a188676f56cb24f9e36b7a67b21630e751d655409b0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_bassi, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.buildah.version=1.33.12, name=rhceph) Nov 23 04:52:28 localhost podman[300763]: 2025-11-23 09:52:28.957064348 +0000 UTC m=+0.190661306 container start 8ea90f1c1dd8189dde84a188676f56cb24f9e36b7a67b21630e751d655409b0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_bassi, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, distribution-scope=public) Nov 23 04:52:28 localhost podman[300763]: 2025-11-23 09:52:28.957710338 +0000 UTC m=+0.191307296 container attach 8ea90f1c1dd8189dde84a188676f56cb24f9e36b7a67b21630e751d655409b0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_bassi, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main, vcs-type=git, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:52:28 localhost vigorous_bassi[300779]: 167 167 Nov 23 04:52:28 localhost systemd[1]: libpod-8ea90f1c1dd8189dde84a188676f56cb24f9e36b7a67b21630e751d655409b0a.scope: Deactivated successfully. Nov 23 04:52:28 localhost podman[300763]: 2025-11-23 09:52:28.962336711 +0000 UTC m=+0.195933699 container died 8ea90f1c1dd8189dde84a188676f56cb24f9e36b7a67b21630e751d655409b0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_bassi, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, release=553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, CEPH_POINT_RELEASE=, version=7, ceph=True) Nov 23 04:52:29 localhost podman[300784]: 2025-11-23 09:52:29.068592937 +0000 UTC m=+0.091529102 container remove 8ea90f1c1dd8189dde84a188676f56cb24f9e36b7a67b21630e751d655409b0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_bassi, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, name=rhceph, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, release=553, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.33.12, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:52:29 localhost systemd[1]: libpod-conmon-8ea90f1c1dd8189dde84a188676f56cb24f9e36b7a67b21630e751d655409b0a.scope: Deactivated successfully. 
Nov 23 04:52:29 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:29 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:29 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:52:29 localhost ceph-mon[293353]: Reconfiguring mon.np0005532584 (monmap changed)... Nov 23 04:52:29 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:52:29 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:29 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:29 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:52:29 localhost systemd[1]: var-lib-containers-storage-overlay-67a84532f466fc5fde5c1b9c1030fe0a3464201a7670a456ebc02af641d065ef-merged.mount: Deactivated successfully. Nov 23 04:52:30 localhost ceph-mon[293353]: Reconfiguring mon.np0005532586 (monmap changed)... Nov 23 04:52:30 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain Nov 23 04:52:30 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:30 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:30 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:32 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:52:32 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:32 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:34 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:52:35 localhost systemd[1]: tmp-crun.pz3dm5.mount: Deactivated successfully. 
Nov 23 04:52:35 localhost podman[300802]: 2025-11-23 09:52:35.913864987 +0000 UTC m=+0.097791305 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:52:35 localhost podman[300802]: 2025-11-23 09:52:35.92140011 +0000 UTC m=+0.105326418 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:52:35 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:52:36 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:36 localhost openstack_network_exporter[241732]: ERROR 09:52:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:52:36 localhost openstack_network_exporter[241732]: ERROR 09:52:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:52:36 localhost openstack_network_exporter[241732]: ERROR 09:52:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:52:36 localhost openstack_network_exporter[241732]: ERROR 09:52:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:52:36 localhost openstack_network_exporter[241732]: Nov 23 04:52:36 localhost openstack_network_exporter[241732]: ERROR 09:52:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:52:36 localhost openstack_network_exporter[241732]: Nov 23 04:52:37 localhost ceph-mon[293353]: mon.np0005532584@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:52:38 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0) Nov 23 04:52:38 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/1137795569' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch Nov 23 04:52:38 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. 
Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.457783) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559457844, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 640, "num_deletes": 251, "total_data_size": 669873, "memory_usage": 682200, "flush_reason": "Manual Compaction"} Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559463364, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 421927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16753, "largest_seqno": 17388, "table_properties": {"data_size": 418588, "index_size": 1194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9043, "raw_average_key_size": 21, "raw_value_size": 411513, "raw_average_value_size": 970, "num_data_blocks": 48, "num_entries": 424, "num_filter_entries": 424, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891544, "oldest_key_time": 1763891544, "file_creation_time": 1763891559, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 5633 microseconds, and 2460 cpu microseconds. Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.463415) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 421927 bytes OK Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.463442) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.465295) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.465333) EVENT_LOG_v1 {"time_micros": 1763891559465310, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.465363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 666162, prev total WAL file size 666162, number of live WAL files 2. Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.466064) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end) Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(412KB)], [24(18MB)] Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559466139, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 19779721, "oldest_snapshot_seqno": -1} Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 11083 keys, 16904335 bytes, temperature: kUnknown Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559540836, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 16904335, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16839347, "index_size": 36215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27717, "raw_key_size": 298179, "raw_average_key_size": 26, "raw_value_size": 16648175, "raw_average_value_size": 1502, "num_data_blocks": 1373, "num_entries": 11083, "num_filter_entries": 11083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, 
"comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891559, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.541425) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 16904335 bytes Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.547069) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 263.8 rd, 225.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 18.5 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(86.9) write-amplify(40.1) OK, records in: 11608, records dropped: 525 output_compression: NoCompression Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.547104) EVENT_LOG_v1 {"time_micros": 1763891559547089, "job": 12, "event": "compaction_finished", "compaction_time_micros": 74982, "compaction_time_cpu_micros": 46497, "output_level": 6, "num_output_files": 1, "total_output_size": 16904335, "num_input_records": 11608, "num_output_records": 11083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559547318, "job": 12, "event": "table_file_deletion", "file_number": 26} Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559549980, "job": 12, "event": "table_file_deletion", "file_number": 24} Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.465925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.550113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.550120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.550123) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.550127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:39 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:52:39.550130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:52:40 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:40 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:41 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55fbaf006f20 mon_map magic: 0 from mon.2 v2:172.18.0.103:3300/0 Nov 23 04:52:41 localhost ceph-mon[293353]: mon.np0005532584@2(peon) e12 my rank is now 1 (was 2) Nov 23 04:52:41 localhost ceph-mgr[286671]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Nov 23 04:52:41 localhost ceph-mgr[286671]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Nov 23 04:52:41 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55fbaf007600 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Nov 23 04:52:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:52:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:52:41 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:52:41 localhost ceph-mon[293353]: paxos.1).electionLogic(48) init, last seen epoch 48 Nov 23 04:52:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:52:41 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:41 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:41 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:41 localhost ceph-mon[293353]: Remove daemons mon.np0005532583 Nov 23 04:52:41 localhost ceph-mon[293353]: Safe to remove mon.np0005532583: new quorum should be ['np0005532586', 'np0005532584'] (from ['np0005532586', 'np0005532584']) Nov 23 04:52:41 localhost ceph-mon[293353]: Removing monitor np0005532583 from monmap... 
Nov 23 04:52:41 localhost ceph-mon[293353]: Removing daemon mon.np0005532583 from np0005532583.localdomain -- ports [] Nov 23 04:52:41 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:52:41 localhost ceph-mon[293353]: mon.np0005532584 calling monitor election Nov 23 04:52:41 localhost ceph-mon[293353]: mon.np0005532586 is new leader, mons np0005532586,np0005532584 in quorum (ranks 0,1) Nov 23 04:52:41 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:52:41 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:52:41 localhost podman[300820]: 2025-11-23 09:52:41.916449039 +0000 UTC m=+0.098394943 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:52:41 localhost podman[300820]: 2025-11-23 09:52:41.94787857 +0000 UTC m=+0.129824474 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:52:41 localhost podman[300821]: 2025-11-23 09:52:41.962163562 +0000 UTC m=+0.140040491 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:52:41 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 04:52:41 localhost podman[300821]: 2025-11-23 09:52:41.974580517 +0000 UTC m=+0.152457396 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:52:41 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:52:42 localhost podman[300822]: 2025-11-23 09:52:42.068448309 +0000 UTC m=+0.243469799 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:52:42 localhost ceph-mon[293353]: mon.np0005532584@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:52:42 localhost podman[300822]: 2025-11-23 09:52:42.156485822 +0000 UTC m=+0.331507332 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3) Nov 23 04:52:42 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:52:42 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e12 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:42 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e12 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 23 04:52:42 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55fbaf007080 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Nov 23 04:52:42 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:52:42 localhost ceph-mon[293353]: paxos.1).electionLogic(50) init, last seen epoch 50 Nov 23 04:52:42 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:42 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:47 localhost podman[239764]: time="2025-11-23T09:52:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:52:47 localhost podman[239764]: @ - - [23/Nov/2025:09:52:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:52:47 localhost podman[239764]: @ - - [23/Nov/2025:09:52:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18218 "" "Go-http-client/1.1" Nov 23 04:52:47 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:47 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:47 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:47 localhost ceph-mon[293353]: Updating 
np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:47 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:52:47 localhost ceph-mon[293353]: mon.np0005532584 calling monitor election Nov 23 04:52:47 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:47 localhost ceph-mon[293353]: mon.np0005532586 is new leader, mons np0005532586,np0005532584 in quorum (ranks 0,1) Nov 23 04:52:47 localhost ceph-mon[293353]: Health check failed: 1/3 mons down, quorum np0005532586,np0005532584 (MON_DOWN) Nov 23 04:52:47 localhost ceph-mon[293353]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005532586,np0005532584 Nov 23 04:52:47 localhost ceph-mon[293353]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005532586,np0005532584 Nov 23 04:52:47 localhost ceph-mon[293353]: mon.np0005532585 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Nov 23 04:52:47 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:47 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:48 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:48 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:48 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:48 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:48 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:48 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:48 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:48 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:52:48 localhost ceph-mon[293353]: Deploying daemon mon.np0005532583 on np0005532583.localdomain Nov 23 04:52:48 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:48 localhost ceph-mon[293353]: Removed label mon from host np0005532583.localdomain Nov 23 04:52:49 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:52:49 localhost ceph-mon[293353]: paxos.1).electionLogic(52) init, last seen epoch 52 Nov 23 04:52:49 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:49 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:49 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:49 localhost ceph-mon[293353]: mon.np0005532585 calling monitor election Nov 23 04:52:49 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:52:49 localhost ceph-mon[293353]: 
mon.np0005532584 calling monitor election Nov 23 04:52:49 localhost ceph-mon[293353]: mon.np0005532585 calling monitor election Nov 23 04:52:49 localhost ceph-mon[293353]: mon.np0005532586 is new leader, mons np0005532586,np0005532584,np0005532585 in quorum (ranks 0,1,2) Nov 23 04:52:49 localhost ceph-mon[293353]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005532586,np0005532584) Nov 23 04:52:49 localhost ceph-mon[293353]: Cluster is now healthy Nov 23 04:52:49 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:52:50 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Nov 23 04:52:50 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Nov 23 04:52:50 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Nov 23 04:52:50 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55fbaf0071e0 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Nov 23 04:52:50 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:52:50 localhost ceph-mon[293353]: paxos.1).electionLogic(54) init, last seen epoch 54 Nov 23 04:52:50 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:50 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:52:51 localhost podman[301226]: 2025-11-23 09:52:51.898301331 +0000 UTC m=+0.085065371 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9) Nov 23 04:52:51 localhost podman[301226]: 2025-11-23 09:52:51.916458806 +0000 UTC m=+0.103222826 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9) Nov 23 04:52:51 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:52:54 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e14 handle_auth_request failed to assign global_id Nov 23 04:52:55 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e14 handle_auth_request failed to assign global_id Nov 23 04:52:55 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e14 handle_auth_request failed to assign global_id Nov 23 04:52:55 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:55 localhost ceph-mds[285431]: mds.beacon.mds.np0005532584.aoxjmw missed beacon ack from the monitors Nov 23 04:52:55 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:55 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:52:55 localhost ceph-mon[293353]: mon.np0005532584 calling monitor election Nov 23 04:52:55 localhost ceph-mon[293353]: mon.np0005532585 calling monitor election Nov 23 04:52:55 localhost ceph-mon[293353]: mon.np0005532583 calling monitor election Nov 23 04:52:55 localhost ceph-mon[293353]: mon.np0005532586 is new leader, mons np0005532586,np0005532584,np0005532585,np0005532583 in quorum (ranks 0,1,2,3) Nov 23 04:52:55 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:52:55 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:55 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:55 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:52:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:52:55 localhost podman[301263]: 2025-11-23 09:52:55.897248586 +0000 UTC m=+0.090088784 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118) Nov 23 04:52:55 localhost podman[301263]: 2025-11-23 09:52:55.913569725 +0000 UTC m=+0.106409903 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 23 04:52:55 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:52:56 localhost podman[301264]: 2025-11-23 09:52:56.003602446 +0000 UTC m=+0.191845494 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:52:56 localhost podman[301264]: 2025-11-23 09:52:56.018761059 +0000 UTC m=+0.207004147 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:52:56 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:52:56 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:52:56 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf Nov 23 04:52:56 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:52:56 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:52:56 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:52:56 localhost ceph-mon[293353]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:56 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:56 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:57 localhost ceph-mon[293353]: mon.np0005532584@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:52:57 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:57 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:52:57 localhost ceph-mon[293353]: Removed label mgr from host np0005532583.localdomain Nov 23 04:52:57 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:57 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:57 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:57 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:57 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:57 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:57 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:57 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:57 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:57 localhost ceph-mon[293353]: Removing daemon mgr.np0005532583.orhywt from np0005532583.localdomain -- ports [8765] Nov 23 04:52:58 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:52:58 localhost ceph-mon[293353]: Removed label _admin from host np0005532583.localdomain Nov 23 04:52:59 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55fbaf006f20 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Nov 23 04:52:59 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:52:59 localhost ceph-mon[293353]: paxos.1).electionLogic(58) init, last seen epoch 58 Nov 23 04:52:59 localhost ceph-mon[293353]: mon.np0005532584@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:59 localhost 
ceph-mon[293353]: mon.np0005532584@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:52:59 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:53:00 localhost ceph-mon[293353]: Removing key for mgr.np0005532583.orhywt Nov 23 04:53:00 localhost ceph-mon[293353]: Safe to remove mon.np0005532583: new quorum should be ['np0005532586', 'np0005532584', 'np0005532585'] (from ['np0005532586', 'np0005532584', 'np0005532585']) Nov 23 04:53:00 localhost ceph-mon[293353]: Removing monitor np0005532583 from monmap... Nov 23 04:53:00 localhost ceph-mon[293353]: Removing daemon mon.np0005532583 from np0005532583.localdomain -- ports [] Nov 23 04:53:00 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:53:00 localhost ceph-mon[293353]: mon.np0005532584 calling monitor election Nov 23 04:53:00 localhost ceph-mon[293353]: mon.np0005532586 is new leader, mons np0005532586,np0005532584,np0005532585 in quorum (ranks 0,1,2) Nov 23 04:53:00 localhost ceph-mon[293353]: mon.np0005532585 calling monitor election Nov 23 04:53:00 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:53:01 localhost nova_compute[280939]: 2025-11-23 09:53:01.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:01 localhost nova_compute[280939]: 2025-11-23 09:53:01.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:01 localhost nova_compute[280939]: 2025-11-23 09:53:01.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 23 04:53:01 localhost nova_compute[280939]: 2025-11-23 09:53:01.150 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 23 04:53:01 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:01 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:01 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:02 localhost ceph-mon[293353]: mon.np0005532584@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:53:02 localhost nova_compute[280939]: 2025-11-23 09:53:02.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:02 localhost nova_compute[280939]: 2025-11-23 09:53:02.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] 
CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:53:02 localhost nova_compute[280939]: 2025-11-23 09:53:02.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:03 localhost nova_compute[280939]: 2025-11-23 09:53:03.145 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:03 localhost nova_compute[280939]: 2025-11-23 09:53:03.167 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:03 localhost nova_compute[280939]: 2025-11-23 09:53:03.167 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:53:03 localhost nova_compute[280939]: 2025-11-23 09:53:03.167 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:53:03 localhost nova_compute[280939]: 2025-11-23 09:53:03.192 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:53:03 localhost nova_compute[280939]: 2025-11-23 09:53:03.192 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:03 localhost nova_compute[280939]: 2025-11-23 09:53:03.193 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:03 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:03 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:03 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:53:03 localhost ceph-mon[293353]: Removing np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:53:03 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:53:03 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:53:03 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:53:03 localhost ceph-mon[293353]: Removing np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:53:03 localhost ceph-mon[293353]: Removing np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:53:03 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:03 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:05 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:53:05 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:53:05 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:53:05 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:05 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:05 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:05 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:05 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:05 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:05 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:05 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth 
get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:05 localhost nova_compute[280939]: 2025-11-23 09:53:05.134 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:53:06 localhost podman[302014]: 2025-11-23 09:53:06.123197689 +0000 UTC m=+0.081995617 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible) Nov 23 04:53:06 localhost nova_compute[280939]: 2025-11-23 09:53:06.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:06 localhost nova_compute[280939]: 2025-11-23 09:53:06.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:06 localhost nova_compute[280939]: 2025-11-23 09:53:06.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 23 04:53:06 localhost podman[302024]: 
Nov 23 04:53:06 localhost podman[302014]: 2025-11-23 09:53:06.152465894 +0000 UTC m=+0.111263802 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 23 04:53:06 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. 
Nov 23 04:53:06 localhost podman[302024]: 2025-11-23 09:53:06.207299799 +0000 UTC m=+0.138962088 container create e45f74b8a1a993cbb537e1fb6c49ff1939775f243fb19b7e7b4029da37243e86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_wu, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.33.12, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_BRANCH=main) Nov 23 04:53:06 localhost podman[302024]: 2025-11-23 09:53:06.124153158 +0000 UTC m=+0.055815597 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:06 localhost systemd[1]: Started libpod-conmon-e45f74b8a1a993cbb537e1fb6c49ff1939775f243fb19b7e7b4029da37243e86.scope. Nov 23 04:53:06 localhost systemd[1]: Started libcrun container. Nov 23 04:53:06 localhost podman[302024]: 2025-11-23 09:53:06.292800733 +0000 UTC m=+0.224463012 container init e45f74b8a1a993cbb537e1fb6c49ff1939775f243fb19b7e7b4029da37243e86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_wu, name=rhceph, io.buildah.version=1.33.12, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Nov 23 04:53:06 localhost podman[302024]: 2025-11-23 09:53:06.302142158 +0000 UTC m=+0.233804437 container start e45f74b8a1a993cbb537e1fb6c49ff1939775f243fb19b7e7b4029da37243e86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_wu, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Nov 23 04:53:06 localhost podman[302024]: 2025-11-23 09:53:06.302369015 +0000 UTC m=+0.234031294 container attach e45f74b8a1a993cbb537e1fb6c49ff1939775f243fb19b7e7b4029da37243e86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_wu, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, release=553, build-date=2025-09-24T08:57:55, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True) Nov 23 04:53:06 localhost vigilant_wu[302049]: 167 167 Nov 23 04:53:06 localhost podman[302024]: 2025-11-23 09:53:06.307005467 +0000 UTC m=+0.238667766 container died e45f74b8a1a993cbb537e1fb6c49ff1939775f243fb19b7e7b4029da37243e86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_wu, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:53:06 localhost systemd[1]: libpod-e45f74b8a1a993cbb537e1fb6c49ff1939775f243fb19b7e7b4029da37243e86.scope: Deactivated successfully. Nov 23 04:53:06 localhost ceph-mon[293353]: Reconfiguring crash.np0005532583 (monmap changed)... 
Nov 23 04:53:06 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain Nov 23 04:53:06 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:06 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:06 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:06 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:06 localhost podman[302054]: 2025-11-23 09:53:06.405895539 +0000 UTC m=+0.088870308 container remove e45f74b8a1a993cbb537e1fb6c49ff1939775f243fb19b7e7b4029da37243e86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_wu, name=rhceph, architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main) Nov 23 04:53:06 localhost systemd[1]: libpod-conmon-e45f74b8a1a993cbb537e1fb6c49ff1939775f243fb19b7e7b4029da37243e86.scope: Deactivated successfully. Nov 23 04:53:06 localhost openstack_network_exporter[241732]: ERROR 09:53:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:53:06 localhost openstack_network_exporter[241732]: ERROR 09:53:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:53:06 localhost openstack_network_exporter[241732]: ERROR 09:53:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:53:06 localhost openstack_network_exporter[241732]: ERROR 09:53:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:53:06 localhost openstack_network_exporter[241732]: Nov 23 04:53:06 localhost openstack_network_exporter[241732]: ERROR 09:53:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:53:06 localhost openstack_network_exporter[241732]: Nov 23 04:53:07 localhost systemd[1]: var-lib-containers-storage-overlay-a1e4c125885808f723b7e41636a07359322aea2bd29235acdb9c56d5c00f71b9-merged.mount: Deactivated successfully. 
Nov 23 04:53:07 localhost ceph-mon[293353]: mon.np0005532584@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.143 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.175 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.176 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.176 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.177 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.177 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:53:07 localhost podman[302123]: Nov 23 04:53:07 localhost podman[302123]: 2025-11-23 09:53:07.203869406 +0000 UTC m=+0.081798611 container create eb2956a9826470111fd198d4b805b2d16a7b135a11ab3f9c60ca09adadd7cc52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_wiles, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55) Nov 23 04:53:07 localhost systemd[1]: Started libpod-conmon-eb2956a9826470111fd198d4b805b2d16a7b135a11ab3f9c60ca09adadd7cc52.scope. Nov 23 04:53:07 localhost podman[302123]: 2025-11-23 09:53:07.172747355 +0000 UTC m=+0.050676590 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:07 localhost systemd[1]: Started libcrun container. Nov 23 04:53:07 localhost podman[302123]: 2025-11-23 09:53:07.294431374 +0000 UTC m=+0.172360579 container init eb2956a9826470111fd198d4b805b2d16a7b135a11ab3f9c60ca09adadd7cc52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_wiles, architecture=x86_64, vcs-type=git, ceph=True, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, release=553, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, name=rhceph) Nov 23 04:53:07 localhost podman[302123]: 2025-11-23 09:53:07.306027378 +0000 UTC m=+0.183956593 container start eb2956a9826470111fd198d4b805b2d16a7b135a11ab3f9c60ca09adadd7cc52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_wiles, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 23 04:53:07 localhost podman[302123]: 2025-11-23 09:53:07.30642164 +0000 UTC m=+0.184350895 container attach eb2956a9826470111fd198d4b805b2d16a7b135a11ab3f9c60ca09adadd7cc52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_wiles, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True) Nov 23 04:53:07 localhost relaxed_wiles[302137]: 167 167 Nov 23 04:53:07 localhost systemd[1]: libpod-eb2956a9826470111fd198d4b805b2d16a7b135a11ab3f9c60ca09adadd7cc52.scope: Deactivated successfully. Nov 23 04:53:07 localhost podman[302123]: 2025-11-23 09:53:07.309515835 +0000 UTC m=+0.187445070 container died eb2956a9826470111fd198d4b805b2d16a7b135a11ab3f9c60ca09adadd7cc52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_wiles, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git) Nov 23 04:53:07 localhost ceph-mon[293353]: Reconfiguring crash.np0005532584 (monmap changed)... 
Nov 23 04:53:07 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:53:07 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:07 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:07 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:53:07 localhost podman[302157]: 2025-11-23 09:53:07.411906104 +0000 UTC m=+0.088347121 container remove eb2956a9826470111fd198d4b805b2d16a7b135a11ab3f9c60ca09adadd7cc52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_wiles, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:53:07 localhost systemd[1]: libpod-conmon-eb2956a9826470111fd198d4b805b2d16a7b135a11ab3f9c60ca09adadd7cc52.scope: Deactivated successfully. Nov 23 04:53:07 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:53:07 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2726720879' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.652 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.875 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.877 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12364MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.877 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:53:07 localhost nova_compute[280939]: 2025-11-23 09:53:07.878 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.017 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.018 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.079 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 04:53:08 localhost systemd[1]: tmp-crun.XWXGfg.mount: Deactivated successfully. Nov 23 04:53:08 localhost systemd[1]: var-lib-containers-storage-overlay-4f77dad0d21cddc2387209d382fbc6c2845db4bdef4d7f9013132a3300c69f99-merged.mount: Deactivated successfully. Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.171 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.172 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.189 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.227 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: 
COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 23 04:53:08 localhost podman[302239]: Nov 23 04:53:08 localhost podman[302239]: 2025-11-23 09:53:08.253695221 +0000 UTC m=+0.082164443 container create 8502c16371445e5e47faa0c49f87d642f00cc1137ce4cc8218280d8acfe47211 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclaren, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, version=7, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, release=553, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git) Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.257 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:53:08 localhost systemd[1]: Started libpod-conmon-8502c16371445e5e47faa0c49f87d642f00cc1137ce4cc8218280d8acfe47211.scope. Nov 23 04:53:08 localhost systemd[1]: Started libcrun container. 
Nov 23 04:53:08 localhost podman[302239]: 2025-11-23 09:53:08.318835251 +0000 UTC m=+0.147304453 container init 8502c16371445e5e47faa0c49f87d642f00cc1137ce4cc8218280d8acfe47211 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclaren, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=553, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:53:08 localhost podman[302239]: 2025-11-23 09:53:08.221695952 +0000 UTC m=+0.050165224 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:08 localhost podman[302239]: 2025-11-23 09:53:08.327131735 +0000 UTC m=+0.155601007 container start 8502c16371445e5e47faa0c49f87d642f00cc1137ce4cc8218280d8acfe47211 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclaren, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, release=553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, ceph=True) Nov 23 04:53:08 localhost podman[302239]: 2025-11-23 09:53:08.327832456 +0000 UTC m=+0.156301688 container attach 8502c16371445e5e47faa0c49f87d642f00cc1137ce4cc8218280d8acfe47211 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclaren, distribution-scope=public, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, 
io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:53:08 localhost vibrant_mclaren[302255]: 167 167 Nov 23 04:53:08 localhost systemd[1]: libpod-8502c16371445e5e47faa0c49f87d642f00cc1137ce4cc8218280d8acfe47211.scope: Deactivated successfully. Nov 23 04:53:08 localhost podman[302239]: 2025-11-23 09:53:08.331634522 +0000 UTC m=+0.160103754 container died 8502c16371445e5e47faa0c49f87d642f00cc1137ce4cc8218280d8acfe47211 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclaren, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=553, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True) Nov 23 04:53:08 localhost ceph-mon[293353]: Reconfiguring osd.2 (monmap changed)... 
Nov 23 04:53:08 localhost ceph-mon[293353]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:53:08 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:08 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:08 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:53:08 localhost podman[302260]: 2025-11-23 09:53:08.443707117 +0000 UTC m=+0.096262842 container remove 8502c16371445e5e47faa0c49f87d642f00cc1137ce4cc8218280d8acfe47211 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mclaren, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, version=7, io.buildah.version=1.33.12) Nov 23 04:53:08 localhost systemd[1]: libpod-conmon-8502c16371445e5e47faa0c49f87d642f00cc1137ce4cc8218280d8acfe47211.scope: Deactivated successfully. Nov 23 04:53:08 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:53:08 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3226986166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.699 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.707 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.727 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.731 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:53:08 localhost nova_compute[280939]: 2025-11-23 09:53:08.732 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:53:09 localhost systemd[1]: var-lib-containers-storage-overlay-3f235929ed2d34d7bb67cd4b7c996b5690053fe5d4fa3d427cc2a21268829f51-merged.mount: Deactivated successfully. 
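The nova_compute records above show the resource tracker shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (via oslo_concurrency.processutils) while auditing locally available resources. A minimal stdlib sketch of that call, assuming only that `ceph df --format=json` prints a JSON document; the helper name and the keys read from the result are illustrative, not taken from nova's code.

import json
import subprocess

def ceph_df(client_id="openstack", conf="/etc/ceph/ceph.conf"):
    # Same command the resource tracker logs above, parsed into a dict.
    proc = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
        check=True, capture_output=True, text=True,
    )
    return json.loads(proc.stdout)

if __name__ == "__main__":
    df = ceph_df()
    # Reading cluster-wide stats and per-pool usage; the key names are an
    # assumption about the ceph df JSON schema, hence the defensive .get().
    print(df.get("stats", {}))
    for pool in df.get("pools", []):
        print(pool.get("name"), pool.get("stats", {}).get("bytes_used"))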
Nov 23 04:53:09 localhost podman[302358]: Nov 23 04:53:09 localhost podman[302358]: 2025-11-23 09:53:09.34326398 +0000 UTC m=+0.078348485 container create a9905adcb7384ccecc44be26ea9ead2a7cba72c7a5e833c108b5151a19a80dc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_hugle, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, version=7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, name=rhceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph) Nov 23 04:53:09 localhost ceph-mon[293353]: Reconfiguring osd.5 (monmap changed)... Nov 23 04:53:09 localhost ceph-mon[293353]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:53:09 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:09 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:09 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:09 localhost podman[302358]: 2025-11-23 09:53:09.310377825 +0000 UTC m=+0.045462370 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:09 localhost systemd[1]: Started libpod-conmon-a9905adcb7384ccecc44be26ea9ead2a7cba72c7a5e833c108b5151a19a80dc5.scope. Nov 23 04:53:09 localhost systemd[1]: Started libcrun container. 
Nov 23 04:53:09 localhost podman[302358]: 2025-11-23 09:53:09.441952736 +0000 UTC m=+0.177037251 container init a9905adcb7384ccecc44be26ea9ead2a7cba72c7a5e833c108b5151a19a80dc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_hugle, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12) Nov 23 04:53:09 localhost podman[302358]: 2025-11-23 09:53:09.450808616 +0000 UTC m=+0.185893121 container start a9905adcb7384ccecc44be26ea9ead2a7cba72c7a5e833c108b5151a19a80dc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_hugle, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., version=7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux ) Nov 23 04:53:09 localhost podman[302358]: 2025-11-23 09:53:09.451197838 +0000 UTC m=+0.186282353 container attach a9905adcb7384ccecc44be26ea9ead2a7cba72c7a5e833c108b5151a19a80dc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_hugle, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, release=553, CEPH_POINT_RELEASE=, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True) Nov 23 04:53:09 localhost jolly_hugle[302373]: 167 167 Nov 23 04:53:09 localhost systemd[1]: libpod-a9905adcb7384ccecc44be26ea9ead2a7cba72c7a5e833c108b5151a19a80dc5.scope: Deactivated successfully. Nov 23 04:53:09 localhost podman[302358]: 2025-11-23 09:53:09.453692925 +0000 UTC m=+0.188777470 container died a9905adcb7384ccecc44be26ea9ead2a7cba72c7a5e833c108b5151a19a80dc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_hugle, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:53:09 localhost podman[302378]: 2025-11-23 09:53:09.559174179 +0000 UTC m=+0.091807637 container remove a9905adcb7384ccecc44be26ea9ead2a7cba72c7a5e833c108b5151a19a80dc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_hugle, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, release=553, version=7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git) Nov 23 04:53:09 localhost systemd[1]: libpod-conmon-a9905adcb7384ccecc44be26ea9ead2a7cba72c7a5e833c108b5151a19a80dc5.scope: Deactivated successfully. 
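Each podman create/init/start/attach/died/remove burst above is a short-lived rhceph-7-rhel9 container run by the Ceph orchestrator while it reconfigures daemons on this host; the only thing each one prints is "167 167", which matches the uid/gid of the ceph user in Red Hat Ceph images. A hedged sketch of reproducing that kind of probe from Python: the probed path and the interpretation as a uid/gid check are inferred from the output, not stated anywhere in the log.

import subprocess

IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"

def probe_uid_gid(image=IMAGE, path="/var/lib/ceph"):
    # --rm gives the same create/start/attach/died/remove lifecycle seen in
    # the records above, with the container deleted once it exits.
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", path],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    uid, gid = (int(x) for x in out.split())
    return uid, gid

if __name__ == "__main__":
    print(probe_uid_gid())  # the log output "167 167" corresponds to (167, 167)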
Nov 23 04:53:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:53:09.735 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:53:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:53:09.736 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:53:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:53:09.736 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:53:10 localhost systemd[1]: var-lib-containers-storage-overlay-d326378618c40e4652cae88fe2a1dd8ad99480f0a844c56dbe43c91a97ed5e7c-merged.mount: Deactivated successfully. Nov 23 04:53:10 localhost podman[302446]: Nov 23 04:53:10 localhost podman[302446]: 2025-11-23 09:53:10.345109477 +0000 UTC m=+0.085288047 container create bfa57c21a8611e76ffec289f09ebf7d43e86a6268cf3dc7773a5b122d87d8a10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_driscoll, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 04:53:10 localhost systemd[1]: Started libpod-conmon-bfa57c21a8611e76ffec289f09ebf7d43e86a6268cf3dc7773a5b122d87d8a10.scope. Nov 23 04:53:10 localhost systemd[1]: Started libcrun container. Nov 23 04:53:10 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... 
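The oslo_concurrency.lockutils DEBUG triplets in this window (Acquiring lock / Lock acquired ... waited / Lock "released" ... held), emitted from lockutils' `inner` wrapper, appear both for nova's "compute_resources" lock and for the ovn_metadata_agent's "_check_child_processes" lock just above. A minimal sketch of producing that pattern with oslo.concurrency's `synchronized` decorator; the empty function body and the choice of the decorator (rather than the `lock()` context manager) are assumptions for illustration.

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # The decorator serializes callers on the named lock and is what emits the
    # Acquiring/acquired/released DEBUG lines with waited/held timings.
    pass

if __name__ == "__main__":
    check_child_processes()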
Nov 23 04:53:10 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:53:10 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:10 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:10 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:10 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:10 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:10 localhost podman[302446]: 2025-11-23 09:53:10.309716476 +0000 UTC m=+0.049895066 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:10 localhost podman[302446]: 2025-11-23 09:53:10.418616064 +0000 UTC m=+0.158794624 container init bfa57c21a8611e76ffec289f09ebf7d43e86a6268cf3dc7773a5b122d87d8a10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_driscoll, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, name=rhceph) Nov 23 04:53:10 localhost podman[302446]: 2025-11-23 09:53:10.427809215 +0000 UTC m=+0.167987775 container start bfa57c21a8611e76ffec289f09ebf7d43e86a6268cf3dc7773a5b122d87d8a10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_driscoll, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.buildah.version=1.33.12, io.openshift.expose-services=, 
CEPH_POINT_RELEASE=) Nov 23 04:53:10 localhost podman[302446]: 2025-11-23 09:53:10.428091464 +0000 UTC m=+0.168270034 container attach bfa57c21a8611e76ffec289f09ebf7d43e86a6268cf3dc7773a5b122d87d8a10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_driscoll, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main) Nov 23 04:53:10 localhost kind_driscoll[302462]: 167 167 Nov 23 04:53:10 localhost systemd[1]: libpod-bfa57c21a8611e76ffec289f09ebf7d43e86a6268cf3dc7773a5b122d87d8a10.scope: Deactivated successfully. Nov 23 04:53:10 localhost podman[302446]: 2025-11-23 09:53:10.432486268 +0000 UTC m=+0.172664868 container died bfa57c21a8611e76ffec289f09ebf7d43e86a6268cf3dc7773a5b122d87d8a10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_driscoll, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, release=553, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:53:10 localhost podman[302467]: 2025-11-23 09:53:10.91483485 +0000 UTC m=+0.472329417 container remove bfa57c21a8611e76ffec289f09ebf7d43e86a6268cf3dc7773a5b122d87d8a10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_driscoll, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume 
Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553) Nov 23 04:53:10 localhost systemd[1]: libpod-conmon-bfa57c21a8611e76ffec289f09ebf7d43e86a6268cf3dc7773a5b122d87d8a10.scope: Deactivated successfully. Nov 23 04:53:11 localhost systemd[1]: var-lib-containers-storage-overlay-a0db1d809b7b72db72039836fa4e68c40a88c871d4378a6cd902e12c07be50fc-merged.mount: Deactivated successfully. Nov 23 04:53:11 localhost nova_compute[280939]: 2025-11-23 09:53:11.408 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:11 localhost nova_compute[280939]: 2025-11-23 09:53:11.410 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:53:11 localhost ceph-mon[293353]: Added label _no_schedule to host np0005532583.localdomain Nov 23 04:53:11 localhost ceph-mon[293353]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005532583.localdomain Nov 23 04:53:11 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... 
Nov 23 04:53:11 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:53:11 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:11 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:11 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:53:11 localhost podman[302537]: Nov 23 04:53:11 localhost podman[302537]: 2025-11-23 09:53:11.672587857 +0000 UTC m=+0.083011728 container create e6c692983c53d07c501f3ca845c34042c98d77799ac696e2c56a3e8cef00ef4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_ptolemy, com.redhat.component=rhceph-container, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public) Nov 23 04:53:11 localhost systemd[1]: Started libpod-conmon-e6c692983c53d07c501f3ca845c34042c98d77799ac696e2c56a3e8cef00ef4b.scope. Nov 23 04:53:11 localhost systemd[1]: Started libcrun container. 
Nov 23 04:53:11 localhost podman[302537]: 2025-11-23 09:53:11.639767444 +0000 UTC m=+0.050191355 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:11 localhost podman[302537]: 2025-11-23 09:53:11.740893365 +0000 UTC m=+0.151317236 container init e6c692983c53d07c501f3ca845c34042c98d77799ac696e2c56a3e8cef00ef4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_ptolemy, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Nov 23 04:53:11 localhost podman[302537]: 2025-11-23 09:53:11.751501629 +0000 UTC m=+0.161925470 container start e6c692983c53d07c501f3ca845c34042c98d77799ac696e2c56a3e8cef00ef4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_ptolemy, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, name=rhceph, distribution-scope=public, ceph=True, architecture=x86_64, version=7, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:53:11 localhost podman[302537]: 2025-11-23 09:53:11.751694575 +0000 UTC m=+0.162118486 container attach e6c692983c53d07c501f3ca845c34042c98d77799ac696e2c56a3e8cef00ef4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_ptolemy, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, distribution-scope=public, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, release=553, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container) Nov 23 04:53:11 localhost quizzical_ptolemy[302551]: 167 167 Nov 23 04:53:11 localhost systemd[1]: libpod-e6c692983c53d07c501f3ca845c34042c98d77799ac696e2c56a3e8cef00ef4b.scope: Deactivated successfully. Nov 23 04:53:11 localhost podman[302537]: 2025-11-23 09:53:11.758274487 +0000 UTC m=+0.168698388 container died e6c692983c53d07c501f3ca845c34042c98d77799ac696e2c56a3e8cef00ef4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_ptolemy, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, build-date=2025-09-24T08:57:55, distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_CLEAN=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=) Nov 23 04:53:11 localhost podman[302556]: 2025-11-23 09:53:11.846856743 +0000 UTC m=+0.077913302 container remove e6c692983c53d07c501f3ca845c34042c98d77799ac696e2c56a3e8cef00ef4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_ptolemy, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_CLEAN=True, RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux ) Nov 23 04:53:11 localhost systemd[1]: libpod-conmon-e6c692983c53d07c501f3ca845c34042c98d77799ac696e2c56a3e8cef00ef4b.scope: Deactivated successfully. Nov 23 04:53:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. 
Nov 23 04:53:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:53:12 localhost systemd[1]: tmp-crun.g9Xa3u.mount: Deactivated successfully. Nov 23 04:53:12 localhost systemd[1]: var-lib-containers-storage-overlay-3b44036bc01468d3f3decb3dabadbb96e7eee4704cd7e2a53a4f1d6035b94622-merged.mount: Deactivated successfully. Nov 23 04:53:12 localhost ceph-mon[293353]: mon.np0005532584@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:53:12 localhost systemd[1]: tmp-crun.b0oqMT.mount: Deactivated successfully. Nov 23 04:53:12 localhost podman[302574]: 2025-11-23 09:53:12.177451947 +0000 UTC m=+0.095678355 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:53:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 04:53:12 localhost podman[302573]: 2025-11-23 09:53:12.225446353 +0000 UTC m=+0.143905318 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:53:12 localhost podman[302574]: 2025-11-23 09:53:12.2436451 +0000 UTC m=+0.161871498 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:53:12 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
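The health_status=healthy / exec_died pairs above for node_exporter and multipathd are the periodic checks that systemd triggers through the transient /usr/bin/podman healthcheck run <container-id> units started a few entries earlier; the test command and mount path come from the 'healthcheck' block in each container's config_data. A minimal sketch (assuming the podman CLI is on PATH and a container named node_exporter exists on this host) of reading the recorded health state back out of podman inspect:

import json
import subprocess

def container_health(name: str) -> str:
    """Return the health status podman has recorded for a container.

    Reads State.Health.Status from `podman inspect`; older podman
    releases expose the same data under State.Healthcheck.Status.
    """
    out = subprocess.run(
        ["podman", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

if __name__ == "__main__":
    # Prints "healthy" while the configured /openstack/healthcheck test passes.
    print(container_health("node_exporter"))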
Nov 23 04:53:12 localhost podman[302573]: 2025-11-23 09:53:12.269522661 +0000 UTC m=+0.187981596 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 04:53:12 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 04:53:12 localhost podman[302608]: 2025-11-23 09:53:12.344293506 +0000 UTC m=+0.123872076 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Nov 23 04:53:12 localhost podman[302608]: 2025-11-23 09:53:12.388448936 +0000 UTC m=+0.168027556 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true) Nov 23 04:53:12 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:53:12 localhost ceph-mon[293353]: Reconfiguring mon.np0005532584 (monmap changed)... 
Nov 23 04:53:12 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:53:12 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:12 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:12 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:12 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:12 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain"} : dispatch Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.583 12 
DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.586 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 
2025-11-23 09:53:12.586 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.586 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:53:12.586 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:53:13 localhost ceph-mon[293353]: Reconfiguring crash.np0005532585 (monmap changed)... Nov 23 04:53:13 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:53:13 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain"}]': finished Nov 23 04:53:13 localhost ceph-mon[293353]: Removed host np0005532583.localdomain Nov 23 04:53:13 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:13 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:13 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 23 04:53:14 localhost ceph-mon[293353]: Reconfiguring osd.0 (monmap changed)... Nov 23 04:53:14 localhost ceph-mon[293353]: Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:53:14 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:14 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:14 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 23 04:53:14 localhost ceph-mon[293353]: Reconfiguring osd.3 (monmap changed)... Nov 23 04:53:14 localhost ceph-mon[293353]: Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:53:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:15 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... 
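The run of ceilometer_agent_compute DEBUG entries above ("Skip pollster …, no resources found this cycle") is expected on a compute node with no running instances: discovery returns an empty resource list, so every libvirt and network pollster is skipped for that polling cycle. A small illustrative sketch of that poll-and-skip pattern (not ceilometer's actual implementation; the pollster names are copied from the log):

import logging

LOG = logging.getLogger("polling.manager")

# A few of the pollster names seen in the log above.
POLLSTERS = ["cpu", "memory.usage", "disk.device.read.bytes",
             "network.incoming.packets"]

def discover_resources() -> list:
    """Stand-in for instance discovery; empty when no VMs run on this node."""
    return []

def poll_and_notify() -> None:
    resources = discover_resources()
    for name in POLLSTERS:
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            continue
        # ...collect and publish samples for each discovered resource...

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    poll_and_notify()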
Nov 23 04:53:15 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:53:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:15 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:16 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... Nov 23 04:53:16 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:53:16 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:16 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:16 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:53:17 localhost podman[239764]: time="2025-11-23T09:53:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:53:17 localhost podman[239764]: @ - - [23/Nov/2025:09:53:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:53:17 localhost ceph-mon[293353]: mon.np0005532584@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:53:17 localhost podman[239764]: @ - - [23/Nov/2025:09:53:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18221 "" "Go-http-client/1.1" Nov 23 04:53:17 localhost ceph-mon[293353]: Reconfiguring mon.np0005532585 (monmap changed)... Nov 23 04:53:17 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:53:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:17 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:18 localhost ceph-mon[293353]: Reconfiguring crash.np0005532586 (monmap changed)... Nov 23 04:53:18 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain Nov 23 04:53:18 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:18 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:18 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 23 04:53:19 localhost ceph-mon[293353]: Reconfiguring osd.1 (monmap changed)... 
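Each "Reconfiguring daemon …" step above is the cephadm mgr module refreshing a daemon's keyring and config: the mon dispatches the JSON payloads shown, e.g. {"prefix": "auth get", "entity": "osd.1"} or the auth get-or-create commands with crash/mds/mgr cap profiles. The CLI equivalent is simply `ceph auth get osd.1`; a sketch of sending the same payload through the python-rados bindings (assumes python3-rados is installed and /etc/ceph/ceph.conf plus an admin keyring are readable):

import json

import rados  # python3-rados bindings

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Same payload shape the mgr dispatches in the log above.
    cmd = {"prefix": "auth get", "entity": "osd.1", "format": "json"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    if ret == 0:
        print(outbuf.decode())  # keyring and caps for osd.1
    else:
        print(f"mon_command failed: rc={ret} ({outs})")
finally:
    cluster.shutdown()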
Nov 23 04:53:19 localhost ceph-mon[293353]: Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:53:19 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:19 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:19 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 23 04:53:21 localhost ceph-mon[293353]: Reconfiguring osd.4 (monmap changed)... Nov 23 04:53:21 localhost ceph-mon[293353]: Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:53:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:21 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:22 localhost ceph-mon[293353]: mon.np0005532584@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:53:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:53:22 localhost podman[302656]: 2025-11-23 09:53:22.384924249 +0000 UTC m=+0.108698333 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 04:53:22 localhost podman[302656]: 2025-11-23 09:53:22.397464672 +0000 UTC m=+0.121238766 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41) Nov 23 04:53:22 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:53:22 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)... Nov 23 04:53:22 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain Nov 23 04:53:22 localhost ceph-mon[293353]: Saving service mon spec with placement label:mon Nov 23 04:53:22 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)... Nov 23 04:53:22 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain Nov 23 04:53:22 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:22 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:23 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:53:23 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:23 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55fbaf007600 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Nov 23 04:53:23 localhost ceph-mon[293353]: mon.np0005532584@1(peon) e16 my rank is now 0 (was 1) Nov 23 04:53:23 localhost ceph-mgr[286671]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Nov 23 04:53:23 localhost ceph-mgr[286671]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Nov 23 04:53:23 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55fbaf007080 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:53:24 localhost ceph-mon[293353]: paxos.0).electionLogic(60) init, last seen epoch 60 Nov 23 04:53:24 localhost ceph-mon[293353]: mon.np0005532584@0(electing) e16 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 is new leader, mons np0005532584,np0005532585 in quorum (ranks 0,1) Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log 
[DBG] : monmap epoch 16 Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : fsid 46550e70-79cb-5f55-bf6d-1204b97e083b Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T09:53:23.789795+0000 Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : created 2025-11-23T07:39:05.590972+0000 Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : election_strategy: 1 Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005532584 Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005532585 Nov 23 04:53:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005532586.mfohsb=up:active} 2 up:standby Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e33: np0005532585.gzafiw(active, since 71s), standbys: np0005532586.thmvqb, np0005532583.orhywt, np0005532584.naxwxy Nov 23 04:53:24 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : overall HEALTH_OK Nov 23 04:53:24 localhost ceph-mon[293353]: Remove daemons mon.np0005532586 Nov 23 04:53:24 localhost ceph-mon[293353]: Safe to remove mon.np0005532586: new quorum should be ['np0005532584', 'np0005532585'] (from ['np0005532584', 'np0005532585']) Nov 23 04:53:24 localhost ceph-mon[293353]: Removing monitor np0005532586 from monmap... 
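The election and monmap dump above are the expected fallout of shrinking the monitor set: host np0005532583.localdomain was removed earlier, mon.np0005532586 is being dropped here, and the surviving monitors (np0005532584, np0005532585) re-elect, with np0005532584 moving to rank 0. The result can be confirmed from any admin node with `ceph quorum_status`; a short sketch that parses its JSON output (assuming the ceph CLI and admin credentials are available):

import json
import subprocess

def quorum_names() -> list[str]:
    """Monitor names currently in quorum, per `ceph quorum_status`."""
    out = subprocess.run(
        ["ceph", "quorum_status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out).get("quorum_names", [])

if __name__ == "__main__":
    # Expected here once the removal completes: ['np0005532584', 'np0005532585']
    print(quorum_names())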
Nov 23 04:53:24 localhost ceph-mon[293353]: Removing daemon mon.np0005532586 from np0005532586.localdomain -- ports [] Nov 23 04:53:24 localhost ceph-mon[293353]: mon.np0005532584 calling monitor election Nov 23 04:53:24 localhost ceph-mon[293353]: mon.np0005532585 calling monitor election Nov 23 04:53:24 localhost ceph-mon[293353]: mon.np0005532584 is new leader, mons np0005532584,np0005532585 in quorum (ranks 0,1) Nov 23 04:53:24 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:53:24 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:53:25 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:53:25 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:53:25 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:53:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 04:53:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:53:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:53:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:53:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": 
"client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Nov 23 04:53:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:53:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:53:26 localhost podman[303098]: 2025-11-23 09:53:26.259505922 +0000 UTC m=+0.092199499 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:53:26 localhost podman[303098]: 2025-11-23 09:53:26.298548185 +0000 UTC m=+0.131241702 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:53:26 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:53:26 localhost podman[303099]: 2025-11-23 09:53:26.316951478 +0000 UTC m=+0.148324004 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:53:26 localhost podman[303099]: 2025-11-23 09:53:26.324645252 +0000 UTC m=+0.156017848 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:53:26 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
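A few entries further up, the leader mon is also handling a batch of config-key set commands for keys such as mgr/cephadm/host.np0005532584.localdomain and mgr/cephadm/osd_remove_queue (plus the earlier config-key del that removed host np0005532583.localdomain). That key/value store is where cephadm persists its host and device inventory; a stored value can be read back with `ceph config-key get`, sketched here (key name taken from the log; assumes admin credentials on the node):

import subprocess

def config_key_get(key: str) -> str:
    """Fetch a value from the monitors' config-key store via the ceph CLI."""
    return subprocess.run(
        ["ceph", "config-key", "get", key],
        check=True, capture_output=True, text=True,
    ).stdout

if __name__ == "__main__":
    # cephadm's cached metadata for this host, stored as JSON.
    print(config_key_get("mgr/cephadm/host.np0005532584.localdomain"))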
Nov 23 04:53:26 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:53:26 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:53:26 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:53:26 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:26 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:26 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:26 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:26 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:26 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:26 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:26 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:26 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:26 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:26 localhost podman[303171]: Nov 23 04:53:26 localhost podman[303171]: 2025-11-23 09:53:26.692689331 +0000 UTC m=+0.079627315 container create 78ead0c4d7825a2f711f800b30321ece7e6c355306afebd055e19e66f3d2dd07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kare, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, version=7, ceph=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:53:26 localhost systemd[1]: Started libpod-conmon-78ead0c4d7825a2f711f800b30321ece7e6c355306afebd055e19e66f3d2dd07.scope. Nov 23 04:53:26 localhost systemd[1]: Started libcrun container. 
Nov 23 04:53:26 localhost podman[303171]: 2025-11-23 09:53:26.661026753 +0000 UTC m=+0.047964737 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:26 localhost podman[303171]: 2025-11-23 09:53:26.761409051 +0000 UTC m=+0.148347025 container init 78ead0c4d7825a2f711f800b30321ece7e6c355306afebd055e19e66f3d2dd07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kare, RELEASE=main, build-date=2025-09-24T08:57:55, version=7, io.buildah.version=1.33.12, GIT_BRANCH=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, CEPH_POINT_RELEASE=) Nov 23 04:53:26 localhost podman[303171]: 2025-11-23 09:53:26.77085845 +0000 UTC m=+0.157796424 container start 78ead0c4d7825a2f711f800b30321ece7e6c355306afebd055e19e66f3d2dd07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kare, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-type=git, name=rhceph, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:53:26 localhost podman[303171]: 2025-11-23 09:53:26.771307484 +0000 UTC m=+0.158245498 container attach 78ead0c4d7825a2f711f800b30321ece7e6c355306afebd055e19e66f3d2dd07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kare, vendor=Red Hat, Inc., GIT_CLEAN=True, release=553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, architecture=x86_64, com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, ceph=True, 
io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main) Nov 23 04:53:26 localhost systemd[1]: libpod-78ead0c4d7825a2f711f800b30321ece7e6c355306afebd055e19e66f3d2dd07.scope: Deactivated successfully. Nov 23 04:53:26 localhost kind_kare[303186]: 167 167 Nov 23 04:53:26 localhost podman[303171]: 2025-11-23 09:53:26.774901483 +0000 UTC m=+0.161839507 container died 78ead0c4d7825a2f711f800b30321ece7e6c355306afebd055e19e66f3d2dd07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kare, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=553, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, name=rhceph, RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=) Nov 23 04:53:26 localhost podman[303191]: 2025-11-23 09:53:26.874775506 +0000 UTC m=+0.088077024 container remove 78ead0c4d7825a2f711f800b30321ece7e6c355306afebd055e19e66f3d2dd07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kare, version=7, ceph=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.openshift.expose-services=, RELEASE=main, release=553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc.) Nov 23 04:53:26 localhost systemd[1]: libpod-conmon-78ead0c4d7825a2f711f800b30321ece7e6c355306afebd055e19e66f3d2dd07.scope: Deactivated successfully. 
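The kind_kare container above (like quizzical_ptolemy earlier and kind_visvesvaraya just below) is a throwaway rhceph container that lives for roughly 100 ms, prints only "167 167", and is removed: cephadm spins these up to probe the image, and 167:167 is the uid:gid of the ceph user inside the Red Hat Ceph Storage image. A minimal reproduction of that probe pattern with a disposable `podman run --rm` (illustrative only; the exact command cephadm executes inside the container is an assumption here):

import subprocess

IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"

def ceph_uid_gid(image: str = IMAGE) -> str:
    """Start a short-lived container and report the ceph user's uid/gid.

    Mirrors the create/start/attach/died/remove sequence in the log;
    the `id` invocation is an assumed stand-in for the real probe.
    """
    return subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "bash", image,
         "-c", 'echo "$(id -u ceph) $(id -g ceph)"'],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

if __name__ == "__main__":
    print(ceph_uid_gid())  # expected output: 167 167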
Nov 23 04:53:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:53:27 localhost systemd[1]: var-lib-containers-storage-overlay-16548edeea66b3da2dd92aabde8857f1d5c4aeba0bd35b981db07b1b3c82de6b-merged.mount: Deactivated successfully. Nov 23 04:53:27 localhost ceph-mon[293353]: Reconfiguring crash.np0005532584 (monmap changed)... Nov 23 04:53:27 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:53:27 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:27 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:27 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:53:27 localhost podman[303261]: Nov 23 04:53:27 localhost podman[303261]: 2025-11-23 09:53:27.6216257 +0000 UTC m=+0.076315303 container create e3a7891db5e3f51b57d42447aa0fb7fbe92a31657192483d69d41caac04722ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_visvesvaraya, release=553, name=rhceph, architecture=x86_64, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:53:27 localhost systemd[1]: Started libpod-conmon-e3a7891db5e3f51b57d42447aa0fb7fbe92a31657192483d69d41caac04722ab.scope. Nov 23 04:53:27 localhost systemd[1]: Started libcrun container. 
Nov 23 04:53:27 localhost podman[303261]: 2025-11-23 09:53:27.589379355 +0000 UTC m=+0.044068988 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:27 localhost podman[303261]: 2025-11-23 09:53:27.686432761 +0000 UTC m=+0.141122354 container init e3a7891db5e3f51b57d42447aa0fb7fbe92a31657192483d69d41caac04722ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_visvesvaraya, distribution-scope=public, CEPH_POINT_RELEASE=, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, maintainer=Guillaume Abrioux , ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7) Nov 23 04:53:27 localhost podman[303261]: 2025-11-23 09:53:27.699660385 +0000 UTC m=+0.154349978 container start e3a7891db5e3f51b57d42447aa0fb7fbe92a31657192483d69d41caac04722ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_visvesvaraya, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64) Nov 23 04:53:27 localhost podman[303261]: 2025-11-23 09:53:27.699913093 +0000 UTC m=+0.154602696 container attach e3a7891db5e3f51b57d42447aa0fb7fbe92a31657192483d69d41caac04722ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_visvesvaraya, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, architecture=x86_64, distribution-scope=public, RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vcs-type=git) Nov 23 04:53:27 localhost kind_visvesvaraya[303276]: 167 167 Nov 23 04:53:27 localhost systemd[1]: libpod-e3a7891db5e3f51b57d42447aa0fb7fbe92a31657192483d69d41caac04722ab.scope: Deactivated successfully. Nov 23 04:53:27 localhost podman[303261]: 2025-11-23 09:53:27.702291956 +0000 UTC m=+0.156981579 container died e3a7891db5e3f51b57d42447aa0fb7fbe92a31657192483d69d41caac04722ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_visvesvaraya, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, ceph=True, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) Nov 23 04:53:27 localhost podman[303281]: 2025-11-23 09:53:27.801612721 +0000 UTC m=+0.086600308 container remove e3a7891db5e3f51b57d42447aa0fb7fbe92a31657192483d69d41caac04722ab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_visvesvaraya, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, version=7, io.buildah.version=1.33.12, GIT_BRANCH=main, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:53:27 localhost systemd[1]: libpod-conmon-e3a7891db5e3f51b57d42447aa0fb7fbe92a31657192483d69d41caac04722ab.scope: Deactivated successfully. 
Nov 23 04:53:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:27 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:28 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:28 localhost systemd[1]: var-lib-containers-storage-overlay-dd334f992212e34ee5069caddd7d9da674195e4fbd13393a567617f98d61d87f-merged.mount: Deactivated successfully. Nov 23 04:53:28 localhost ceph-mon[293353]: Reconfiguring osd.2 (monmap changed)... Nov 23 04:53:28 localhost ceph-mon[293353]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:53:28 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:28 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:28 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:53:28 localhost podman[303358]: Nov 23 04:53:28 localhost podman[303358]: 2025-11-23 09:53:28.600359722 +0000 UTC m=+0.077870321 container create d16c8132d5e47ac0ecb85eda1f2941ad25c6be696cf412318e21f54a3eae6690 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, release=553, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12) Nov 23 04:53:28 localhost systemd[1]: Started libpod-conmon-d16c8132d5e47ac0ecb85eda1f2941ad25c6be696cf412318e21f54a3eae6690.scope. Nov 23 04:53:28 localhost systemd[1]: Started libcrun container. 
Nov 23 04:53:28 localhost podman[303358]: 2025-11-23 09:53:28.659244462 +0000 UTC m=+0.136755061 container init d16c8132d5e47ac0ecb85eda1f2941ad25c6be696cf412318e21f54a3eae6690 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, architecture=x86_64, name=rhceph, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:53:28 localhost podman[303358]: 2025-11-23 09:53:28.569827279 +0000 UTC m=+0.047337888 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:28 localhost systemd[1]: tmp-crun.rJqHSJ.mount: Deactivated successfully. Nov 23 04:53:28 localhost podman[303358]: 2025-11-23 09:53:28.672900649 +0000 UTC m=+0.150411238 container start d16c8132d5e47ac0ecb85eda1f2941ad25c6be696cf412318e21f54a3eae6690 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=553, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, ceph=True, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, maintainer=Guillaume Abrioux ) Nov 23 04:53:28 localhost podman[303358]: 2025-11-23 09:53:28.673199558 +0000 UTC m=+0.150710197 container attach d16c8132d5e47ac0ecb85eda1f2941ad25c6be696cf412318e21f54a3eae6690 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, ceph=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, 
RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux ) Nov 23 04:53:28 localhost quirky_chatterjee[303373]: 167 167 Nov 23 04:53:28 localhost systemd[1]: libpod-d16c8132d5e47ac0ecb85eda1f2941ad25c6be696cf412318e21f54a3eae6690.scope: Deactivated successfully. Nov 23 04:53:28 localhost podman[303358]: 2025-11-23 09:53:28.676327084 +0000 UTC m=+0.153837673 container died d16c8132d5e47ac0ecb85eda1f2941ad25c6be696cf412318e21f54a3eae6690 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, RELEASE=main, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_CLEAN=True, version=7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7) Nov 23 04:53:28 localhost podman[303378]: 2025-11-23 09:53:28.746122167 +0000 UTC m=+0.062068818 container remove d16c8132d5e47ac0ecb85eda1f2941ad25c6be696cf412318e21f54a3eae6690 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, architecture=x86_64, version=7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:53:28 localhost systemd[1]: libpod-conmon-d16c8132d5e47ac0ecb85eda1f2941ad25c6be696cf412318e21f54a3eae6690.scope: Deactivated successfully. 
Nov 23 04:53:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:53:28 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:53:28 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 23 04:53:28 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:29 localhost systemd[1]: var-lib-containers-storage-overlay-39eb5bdd64cb22acef04f11284fcfb85375eddab098ad3696cc64478b20e8fcd-merged.mount: Deactivated successfully.
Nov 23 04:53:29 localhost ceph-mon[293353]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:53:29 localhost ceph-mon[293353]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:53:29 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:29 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:29 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:29 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:53:29 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:29 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:53:29 localhost podman[303454]: Nov 23 04:53:29 localhost podman[303454]: 2025-11-23 09:53:29.495678754 +0000 UTC m=+0.059379185 container create a1d5c734ff46489b7c3534ee53b8e724e4caf7bfb73601e6d48eb30ea6eaadde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_noether, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, build-date=2025-09-24T08:57:55, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:53:29 localhost systemd[1]: Started libpod-conmon-a1d5c734ff46489b7c3534ee53b8e724e4caf7bfb73601e6d48eb30ea6eaadde.scope. Nov 23 04:53:29 localhost systemd[1]: Started libcrun container. 
Nov 23 04:53:29 localhost podman[303454]: 2025-11-23 09:53:29.468737251 +0000 UTC m=+0.032437652 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:29 localhost podman[303454]: 2025-11-23 09:53:29.578505156 +0000 UTC m=+0.142205577 container init a1d5c734ff46489b7c3534ee53b8e724e4caf7bfb73601e6d48eb30ea6eaadde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_noether, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, version=7, vcs-type=git, com.redhat.component=rhceph-container) Nov 23 04:53:29 localhost angry_noether[303469]: 167 167 Nov 23 04:53:29 localhost podman[303454]: 2025-11-23 09:53:29.587552942 +0000 UTC m=+0.151253373 container start a1d5c734ff46489b7c3534ee53b8e724e4caf7bfb73601e6d48eb30ea6eaadde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_noether, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_BRANCH=main, release=553, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, name=rhceph) Nov 23 04:53:29 localhost podman[303454]: 2025-11-23 09:53:29.588027207 +0000 UTC m=+0.151727628 container attach a1d5c734ff46489b7c3534ee53b8e724e4caf7bfb73601e6d48eb30ea6eaadde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_noether, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, 
ceph=True, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:53:29 localhost systemd[1]: libpod-a1d5c734ff46489b7c3534ee53b8e724e4caf7bfb73601e6d48eb30ea6eaadde.scope: Deactivated successfully. Nov 23 04:53:29 localhost podman[303454]: 2025-11-23 09:53:29.590364138 +0000 UTC m=+0.154064589 container died a1d5c734ff46489b7c3534ee53b8e724e4caf7bfb73601e6d48eb30ea6eaadde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_noether, distribution-scope=public, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vcs-type=git, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True) Nov 23 04:53:29 localhost podman[303474]: 2025-11-23 09:53:29.668210067 +0000 UTC m=+0.068684681 container remove a1d5c734ff46489b7c3534ee53b8e724e4caf7bfb73601e6d48eb30ea6eaadde (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_noether, com.redhat.component=rhceph-container, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , RELEASE=main, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, release=553, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64) Nov 23 04:53:29 localhost systemd[1]: libpod-conmon-a1d5c734ff46489b7c3534ee53b8e724e4caf7bfb73601e6d48eb30ea6eaadde.scope: Deactivated successfully. 
Nov 23 04:53:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Nov 23 04:53:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:30 localhost systemd[1]: var-lib-containers-storage-overlay-a5b5f61d7212e5169dfeaae8b92cfa5ec38402d7a5bad2d0f5c47227ec8afdf8-merged.mount: Deactivated successfully. Nov 23 04:53:30 localhost podman[303544]: Nov 23 04:53:30 localhost podman[303544]: 2025-11-23 09:53:30.346006022 +0000 UTC m=+0.072717023 container create 1a0959af6e16f13c7d32f46f970456f036fe2f7f7d0874a19061fce44c041016 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_proskuriakova, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7) Nov 23 04:53:30 localhost systemd[1]: Started libpod-conmon-1a0959af6e16f13c7d32f46f970456f036fe2f7f7d0874a19061fce44c041016.scope. Nov 23 04:53:30 localhost systemd[1]: Started libcrun container. 
Nov 23 04:53:30 localhost podman[303544]: 2025-11-23 09:53:30.408206793 +0000 UTC m=+0.134917784 container init 1a0959af6e16f13c7d32f46f970456f036fe2f7f7d0874a19061fce44c041016 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_proskuriakova, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , version=7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:53:30 localhost podman[303544]: 2025-11-23 09:53:30.316421237 +0000 UTC m=+0.043132258 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:30 localhost podman[303544]: 2025-11-23 09:53:30.418220679 +0000 UTC m=+0.144931670 container start 1a0959af6e16f13c7d32f46f970456f036fe2f7f7d0874a19061fce44c041016 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_proskuriakova, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_CLEAN=True, version=7, architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux , release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vcs-type=git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Nov 23 04:53:30 localhost podman[303544]: 2025-11-23 09:53:30.418458277 +0000 UTC m=+0.145169288 container attach 1a0959af6e16f13c7d32f46f970456f036fe2f7f7d0874a19061fce44c041016 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_proskuriakova, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, version=7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:53:30 localhost relaxed_proskuriakova[303560]: 167 167 Nov 23 04:53:30 localhost systemd[1]: libpod-1a0959af6e16f13c7d32f46f970456f036fe2f7f7d0874a19061fce44c041016.scope: Deactivated successfully. Nov 23 04:53:30 localhost podman[303544]: 2025-11-23 09:53:30.420128377 +0000 UTC m=+0.146839398 container died 1a0959af6e16f13c7d32f46f970456f036fe2f7f7d0874a19061fce44c041016 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_proskuriakova, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git) Nov 23 04:53:30 localhost podman[303565]: 2025-11-23 09:53:30.507034853 +0000 UTC m=+0.077506099 container remove 1a0959af6e16f13c7d32f46f970456f036fe2f7f7d0874a19061fce44c041016 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_proskuriakova, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 04:53:30 localhost systemd[1]: libpod-conmon-1a0959af6e16f13c7d32f46f970456f036fe2f7f7d0874a19061fce44c041016.scope: Deactivated successfully. 
Nov 23 04:53:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:53:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:53:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 04:53:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 04:53:30 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:30 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:30 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:30 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:53:30 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:30 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:53:30 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:30 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:30 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:30 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:31 localhost systemd[1]: var-lib-containers-storage-overlay-6e4c0725ac1e25a983529a65644112056ebba03d2254291fd5100a36af9ea3ba-merged.mount: Deactivated successfully.
Nov 23 04:53:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:31 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:31 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:31 localhost ceph-mon[293353]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:53:31 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:53:31 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:31 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:31 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:31 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:53:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:53:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:32 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:32 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:33 localhost ceph-mon[293353]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:53:33 localhost ceph-mon[293353]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:53:33 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:33 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:33 localhost ceph-mon[293353]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:53:33 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:53:33 localhost ceph-mon[293353]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:53:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:33 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:33 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 23 04:53:33 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:34 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:34 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 23 04:53:34 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:34 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:34 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:34 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:34 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:34 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:34 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:34 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:34 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 04:53:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:35 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)...
Nov 23 04:53:35 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain
Nov 23 04:53:35 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)...
Nov 23 04:53:35 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:53:35 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:35 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:35 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:35 localhost ceph-mon[293353]: Reconfiguring crash.np0005532586 (monmap changed)...
Nov 23 04:53:35 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:35 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain
Nov 23 04:53:36 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:53:36 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:36 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:53:36 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:36 localhost openstack_network_exporter[241732]: ERROR 09:53:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:53:36 localhost openstack_network_exporter[241732]: ERROR 09:53:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:53:36 localhost openstack_network_exporter[241732]: ERROR 09:53:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:53:36 localhost openstack_network_exporter[241732]:
Nov 23 04:53:36 localhost openstack_network_exporter[241732]: ERROR 09:53:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:53:36 localhost openstack_network_exporter[241732]: ERROR 09:53:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:53:36 localhost openstack_network_exporter[241732]:
Nov 23 04:53:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.
Nov 23 04:53:36 localhost podman[303582]: 2025-11-23 09:53:36.901016236 +0000 UTC m=+0.084323268 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true) Nov 23 04:53:36 localhost podman[303582]: 2025-11-23 09:53:36.936314704 +0000 UTC m=+0.119621706 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 23 04:53:36 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully.
Nov 23 04:53:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 04:53:37 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:53:37 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:53:37 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:37 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:37 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:37 localhost ceph-mon[293353]: Reconfiguring osd.1 (monmap changed)...
Nov 23 04:53:37 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:53:37 localhost ceph-mon[293353]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:53:37 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:37 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:53:37 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:37 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw'
Nov 23 04:53:37 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:53:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:53:38 localhost ceph-mon[293353]: Deploying daemon mon.np0005532586 on np0005532586.localdomain
Nov 23 04:53:38 localhost ceph-mon[293353]: Reconfiguring osd.4 (monmap changed)...
Nov 23 04:53:38 localhost ceph-mon[293353]: Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:53:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:53:39 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:53:39 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Nov 23 04:53:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Nov 23 04:53:40 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:40 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:53:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:53:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Nov 23 04:53:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Nov 23 04:53:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader).monmap v16 adding/updating np0005532586 at [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to monitor cluster Nov 23 04:53:40 localhost ceph-mgr[286671]: ms_deliver_dispatch: unhandled message 0x55fbaf006f20 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 23 04:53:40 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 calling monitor election Nov 23 04:53:40 localhost ceph-mon[293353]: paxos.0).electionLogic(62) init, last seen epoch 62 Nov 23 04:53:40 localhost ceph-mon[293353]: mon.np0005532584@0(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:53:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:53:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
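Each "Started /usr/bin/podman healthcheck run <id>" line above is a transient systemd unit named after the full 64-hex-digit container ID, and the matching "<id>.service: Deactivated successfully." lines that follow mark the end of one healthcheck run. A small sketch, assuming only that such units are loaded on the host, that lists them by that naming pattern:

import re
import subprocess

# Podman healthcheck runs show up as transient units whose names are the
# 64-hex-digit container ID, e.g. 8f44a4d4...f58.service in the entries above.
HEX_UNIT = re.compile(r"^[0-9a-f]{64}\.(service|timer)$")

out = subprocess.run(
    ["systemctl", "list-units", "--all", "--no-legend", "--plain"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    for token in line.split():
        if HEX_UNIT.match(token):
            print("podman healthcheck unit:", token)
            break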
Nov 23 04:53:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:53:42 localhost systemd[1]: tmp-crun.52GNvh.mount: Deactivated successfully. Nov 23 04:53:42 localhost podman[303600]: 2025-11-23 09:53:42.911485245 +0000 UTC m=+0.096427308 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:53:42 localhost podman[303600]: 2025-11-23 09:53:42.950508147 +0000 UTC m=+0.135450220 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:53:42 localhost podman[303599]: 2025-11-23 09:53:42.962253306 +0000 UTC m=+0.150163240 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, 
health_status=healthy, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:53:42 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:53:42 localhost podman[303599]: 2025-11-23 09:53:42.996234314 +0000 UTC m=+0.184144208 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Nov 23 04:53:43 localhost systemd[1]: 
7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:53:43 localhost podman[303601]: 2025-11-23 09:53:43.009394217 +0000 UTC m=+0.189346637 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Nov 23 04:53:43 localhost podman[303601]: 2025-11-23 09:53:43.049405139 +0000 UTC m=+0.229357579 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:53:43 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
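The health_status / exec_died pairs above (ovn_metadata_agent, node_exporter, multipathd, ovn_controller) all share the shape "container health_status <id> (image=..., name=..., health_status=..., ...)", so per-container results can be tallied straight from the journal text. A rough sketch, assuming lines in exactly this format are read from stdin; the regex is inferred from the entries above, not from podman documentation:

import re
import sys
from collections import Counter

# Matches the podman journal events shown above, e.g.
# "... container health_status <64-hex id> (image=..., name=ovn_controller, health_status=healthy, ...)"
EVENT = re.compile(
    r"container health_status [0-9a-f]{64} \(.*?name=([^,]+),.*?health_status=([^,)]+)"
)

results = Counter()
for line in sys.stdin:
    m = EVENT.search(line)
    if m:
        results[(m.group(1), m.group(2))] += 1

for (name, status), count in sorted(results.items()):
    print(f"{name}: {status} x{count}")

Fed with, for example, "journalctl -t podman | python3 tally_health.py", this prints one line per container and status seen in the journal.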
Nov 23 04:53:45 localhost ceph-mon[293353]: paxos.0).electionLogic(63) init, last seen epoch 63, mid-election, bumping Nov 23 04:53:45 localhost ceph-mon[293353]: mon.np0005532584@0(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : mon.np0005532584 is new leader, mons np0005532584,np0005532585,np0005532586 in quorum (ranks 0,1,2) Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : monmap epoch 17 Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : fsid 46550e70-79cb-5f55-bf6d-1204b97e083b Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T09:53:40.507961+0000 Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : created 2025-11-23T07:39:05.590972+0000 Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : election_strategy: 1 Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005532584 Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005532585 Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005532586 Nov 23 04:53:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005532586.mfohsb=up:active} 2 up:standby Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e33: np0005532585.gzafiw(active, since 92s), standbys: np0005532586.thmvqb, np0005532583.orhywt, np0005532584.naxwxy Nov 23 04:53:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : overall HEALTH_OK Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:45 localhost ceph-mon[293353]: mon.np0005532584 calling monitor election Nov 23 04:53:45 localhost ceph-mon[293353]: 
mon.np0005532585 calling monitor election Nov 23 04:53:45 localhost ceph-mon[293353]: mon.np0005532586 calling monitor election Nov 23 04:53:45 localhost ceph-mon[293353]: mon.np0005532584 is new leader, mons np0005532584,np0005532585,np0005532586 in quorum (ranks 0,1,2) Nov 23 04:53:45 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 04:53:45 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:45 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e34: np0005532585.gzafiw(active, since 93s), standbys: np0005532586.thmvqb, np0005532583.orhywt, np0005532584.naxwxy Nov 23 04:53:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:53:46 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:53:46 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:46 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)... Nov 23 04:53:46 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:46 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:46 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain Nov 23 04:53:46 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:46 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:47 localhost podman[239764]: time="2025-11-23T09:53:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:53:47 localhost podman[239764]: @ - - [23/Nov/2025:09:53:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:53:47 localhost podman[239764]: @ - - [23/Nov/2025:09:53:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18225 "" "Go-http-client/1.1" Nov 23 04:53:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:53:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:53:48 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:53:48 localhost 
ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:49 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:49 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:49 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:53:49 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:53:49 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:53:49 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:53:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:53:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:53:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:53:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Nov 23 04:53:50 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:50 localhost ceph-mon[293353]: Updating 
np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:53:50 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:53:50 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:53:50 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:50 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:50 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:50 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:50 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:50 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:50 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:50 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:50 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:50 localhost podman[304055]: Nov 23 04:53:50 localhost podman[304055]: 2025-11-23 09:53:50.701069159 +0000 UTC m=+0.069623339 container create d4b138989f86cc9286c97227c1d68375991986fd4538c3cdc554213c600647fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_gould, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, ceph=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, release=553, io.openshift.expose-services=) Nov 23 04:53:50 localhost systemd[1]: Started libpod-conmon-d4b138989f86cc9286c97227c1d68375991986fd4538c3cdc554213c600647fa.scope. Nov 23 04:53:50 localhost systemd[1]: Started libcrun container. 
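The election messages a little further up end with mon.np0005532584 as leader and np0005532584, np0005532585, np0005532586 in quorum (ranks 0, 1, 2) on monmap epoch 17. For cross-checking that state outside the log, a minimal sketch using the standard ceph CLI; nothing here is specific to this deployment beyond the three monitor names taken from the log:

import json
import subprocess

# 'ceph quorum_status' reports the current quorum membership as JSON.
status = json.loads(
    subprocess.run(
        ["ceph", "quorum_status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
)

expected = {"np0005532584", "np0005532585", "np0005532586"}
in_quorum = set(status["quorum_names"])

print("leader:", status.get("quorum_leader_name"))
print("in quorum:", sorted(in_quorum))
if in_quorum != expected:
    print("WARNING: quorum differs from the three monitors seen in the log")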
Nov 23 04:53:50 localhost podman[304055]: 2025-11-23 09:53:50.76852367 +0000 UTC m=+0.137077850 container init d4b138989f86cc9286c97227c1d68375991986fd4538c3cdc554213c600647fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_gould, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , ceph=True, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:53:50 localhost podman[304055]: 2025-11-23 09:53:50.67297534 +0000 UTC m=+0.041529490 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:50 localhost podman[304055]: 2025-11-23 09:53:50.779675631 +0000 UTC m=+0.148229811 container start d4b138989f86cc9286c97227c1d68375991986fd4538c3cdc554213c600647fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_gould, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, ceph=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.33.12, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Nov 23 04:53:50 localhost podman[304055]: 2025-11-23 09:53:50.779936669 +0000 UTC m=+0.148490839 container attach d4b138989f86cc9286c97227c1d68375991986fd4538c3cdc554213c600647fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_gould, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, name=rhceph, RELEASE=main, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, ceph=True, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:53:50 localhost keen_gould[304070]: 167 167 Nov 23 04:53:50 localhost systemd[1]: libpod-d4b138989f86cc9286c97227c1d68375991986fd4538c3cdc554213c600647fa.scope: Deactivated successfully. Nov 23 04:53:50 localhost podman[304055]: 2025-11-23 09:53:50.782554439 +0000 UTC m=+0.151108609 container died d4b138989f86cc9286c97227c1d68375991986fd4538c3cdc554213c600647fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_gould, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container) Nov 23 04:53:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 04:53:50 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:50 localhost podman[304075]: 2025-11-23 09:53:50.894792489 +0000 UTC m=+0.097957635 container remove d4b138989f86cc9286c97227c1d68375991986fd4538c3cdc554213c600647fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_gould, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, version=7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.33.12, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:53:50 localhost systemd[1]: libpod-conmon-d4b138989f86cc9286c97227c1d68375991986fd4538c3cdc554213c600647fa.scope: 
Deactivated successfully. Nov 23 04:53:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:50 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:50 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:51 localhost ceph-mon[293353]: Reconfiguring crash.np0005532584 (monmap changed)... Nov 23 04:53:51 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain Nov 23 04:53:51 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:51 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:51 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:51 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:53:51 localhost podman[304144]: Nov 23 04:53:51 localhost podman[304144]: 2025-11-23 09:53:51.595270237 +0000 UTC m=+0.075891540 container create 596d59d66a7d02bbf495c918933e1bd82adbb3e5c7e687fe29abae4dba8944b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_feynman, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7, release=553, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True) Nov 23 04:53:51 localhost systemd[1]: Started libpod-conmon-596d59d66a7d02bbf495c918933e1bd82adbb3e5c7e687fe29abae4dba8944b1.scope. Nov 23 04:53:51 localhost systemd[1]: Started libcrun container. 
Nov 23 04:53:51 localhost podman[304144]: 2025-11-23 09:53:51.658947123 +0000 UTC m=+0.139568426 container init 596d59d66a7d02bbf495c918933e1bd82adbb3e5c7e687fe29abae4dba8944b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_feynman, name=rhceph, GIT_CLEAN=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:53:51 localhost podman[304144]: 2025-11-23 09:53:51.565164327 +0000 UTC m=+0.045785650 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:51 localhost podman[304144]: 2025-11-23 09:53:51.667720191 +0000 UTC m=+0.148341494 container start 596d59d66a7d02bbf495c918933e1bd82adbb3e5c7e687fe29abae4dba8944b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_feynman, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 04:53:51 localhost podman[304144]: 2025-11-23 09:53:51.668075082 +0000 UTC m=+0.148696425 container attach 596d59d66a7d02bbf495c918933e1bd82adbb3e5c7e687fe29abae4dba8944b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_feynman, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, name=rhceph, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vendor=Red Hat, Inc., ceph=True) Nov 23 04:53:51 localhost magical_feynman[304160]: 167 167 Nov 23 04:53:51 localhost systemd[1]: libpod-596d59d66a7d02bbf495c918933e1bd82adbb3e5c7e687fe29abae4dba8944b1.scope: Deactivated successfully. Nov 23 04:53:51 localhost podman[304144]: 2025-11-23 09:53:51.670475336 +0000 UTC m=+0.151096709 container died 596d59d66a7d02bbf495c918933e1bd82adbb3e5c7e687fe29abae4dba8944b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_feynman, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, architecture=x86_64, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, RELEASE=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:53:51 localhost systemd[1]: var-lib-containers-storage-overlay-30e3cdad5f7cc056adeac8b72f6d1000b0c63bdeda4e3e98e8895ce94c880caa-merged.mount: Deactivated successfully. Nov 23 04:53:51 localhost systemd[1]: var-lib-containers-storage-overlay-bd364777c8bc12209228dacd292b5302c039a0d417f6d51e375e708b4ee925ba-merged.mount: Deactivated successfully. 
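The keen_gould and magical_feynman containers above are one-shot runs of the rhceph-7-rhel9 image that print "167 167" and exit within a fraction of a second; 167:167 is the ceph user and group ID inside Red Hat's ceph containers, and cephadm routinely launches throwaway containers like these to probe such facts before deploying or reconfiguring daemons. The exact command is not recorded in the log; purely as an illustration of that kind of probe, with the image name taken from the log and the stat invocation an assumption:

import subprocess

IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"

# Hypothetical probe: report the numeric owner of /var/lib/ceph inside the image,
# which is the ceph uid/gid ("167 167" in the entries above).
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat",
     IMAGE, "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()

uid, gid = out.split()
print(f"ceph uid/gid inside {IMAGE}: {uid}:{gid}")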
Nov 23 04:53:51 localhost podman[304165]: 2025-11-23 09:53:51.768723628 +0000 UTC m=+0.088802405 container remove 596d59d66a7d02bbf495c918933e1bd82adbb3e5c7e687fe29abae4dba8944b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_feynman, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:53:51 localhost systemd[1]: libpod-conmon-596d59d66a7d02bbf495c918933e1bd82adbb3e5c7e687fe29abae4dba8944b1.scope: Deactivated successfully. Nov 23 04:53:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:51 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:51 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:53:52 localhost ceph-mon[293353]: Reconfiguring osd.2 (monmap changed)... Nov 23 04:53:52 localhost ceph-mon[293353]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:53:52 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:52 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:52 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. 
Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.463862) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632463912, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2800, "num_deletes": 252, "total_data_size": 4789313, "memory_usage": 5006560, "flush_reason": "Manual Compaction"} Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632481607, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 3238097, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17393, "largest_seqno": 20188, "table_properties": {"data_size": 3226774, "index_size": 6804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 31155, "raw_average_key_size": 22, "raw_value_size": 3201173, "raw_average_value_size": 2350, "num_data_blocks": 298, "num_entries": 1362, "num_filter_entries": 1362, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891559, "oldest_key_time": 1763891559, "file_creation_time": 1763891632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 17799 microseconds, and 9291 cpu microseconds. Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
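The ceph-mon output embeds RocksDB's structured event log: every "rocksdb: EVENT_LOG_v1 {...}" line above and below carries a JSON document (flush_started, table_file_creation, flush_finished, and the compaction events that follow). A small sketch for pulling those documents back out of a saved log, assuming lines in exactly the format shown here:

import json
import re
import sys

# Everything after the EVENT_LOG_v1 marker on a line is a JSON object.
MARKER = re.compile(r"rocksdb: EVENT_LOG_v1 (\{.*\})")

for line in sys.stdin:
    m = MARKER.search(line)
    if not m:
        continue
    event = json.loads(m.group(1))
    kind = event.get("event", "?")            # e.g. "flush_finished", "compaction_finished"
    size = event.get("file_size") or event.get("total_output_size")
    print(event.get("job"), kind, size if size is not None else "")

Something like "journalctl -t ceph-mon | python3 rocksdb_events.py" would feed it the same entries shown here.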
Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.481661) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 3238097 bytes OK Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.481686) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.483498) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.483518) EVENT_LOG_v1 {"time_micros": 1763891632483512, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.483540) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 4775869, prev total WAL file size 4775869, number of live WAL files 2. Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.485704) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. '7061786F73003131353437' seq:0, type:0; will stop at (end) Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(3162KB)], [27(16MB)] Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632485757, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 20142432, "oldest_snapshot_seqno": -1} Nov 23 04:53:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 11903 keys, 17224754 bytes, temperature: kUnknown Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632557940, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 17224754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17154447, "index_size": 39486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29765, "raw_key_size": 318287, "raw_average_key_size": 26, "raw_value_size": 16949063, "raw_average_value_size": 1423, "num_data_blocks": 1513, "num_entries": 11903, "num_filter_entries": 11903, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.558360) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 17224754 bytes Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.560230) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 278.7 rd, 238.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 16.1 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(11.5) write-amplify(5.3) OK, records in: 12445, records dropped: 542 output_compression: NoCompression Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.560266) EVENT_LOG_v1 {"time_micros": 1763891632560251, "job": 14, "event": "compaction_finished", "compaction_time_micros": 72280, "compaction_time_cpu_micros": 41920, "output_level": 6, "num_output_files": 1, "total_output_size": 17224754, "num_input_records": 12445, "num_output_records": 11903, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632560824, "job": 14, "event": "table_file_deletion", "file_number": 29} Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632563401, "job": 14, "event": "table_file_deletion", "file_number": 27} Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.484528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.563649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.563658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.563662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.563825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:53:52 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:53:52.563830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:53:52 localhost podman[304240]: 2025-11-23 09:53:52.63294764 +0000 UTC m=+0.111998824 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, 
architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 23 04:53:52 localhost podman[304248]: Nov 23 04:53:52 localhost podman[304240]: 2025-11-23 09:53:52.676354807 +0000 UTC m=+0.155405961 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6) Nov 23 04:53:52 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
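The JOB 14 compaction summary above reports read-write-amplify(11.5) and write-amplify(5.3), and both ratios follow from the MB in/out figures logged on the same record: 3.1 MB came from the L0 input, 16.1 MB from the existing L6 file, and 16.4 MB were written back to L6. A rough reconstruction of the arithmetic (RocksDB divides raw byte counts internally, so rounding can differ in the last digit):

    # Figures from "MB in(3.1, 16.1 +0.0 blob) out(16.4 +0.0 blob)" above.
    mb_in_start_level = 3.1    # read from the compaction's start level (L0)
    mb_in_output_level = 16.1  # read from the pre-existing L6 data
    mb_out = 16.4              # written back out to L6

    # Bytes written per byte ingested at the start level.
    write_amplify = mb_out / mb_in_start_level

    # Total bytes moved (read + written) per byte ingested.
    read_write_amplify = (mb_in_start_level + mb_in_output_level + mb_out) / mb_in_start_level

    print(round(write_amplify, 1), round(read_write_amplify, 1))  # 5.3 11.5

The interpretation of which terms enter each ratio is inferred from these logged values rather than taken from the RocksDB source.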
Nov 23 04:53:52 localhost podman[304248]: 2025-11-23 09:53:52.693679616 +0000 UTC m=+0.149502899 container create 204d6548dd31277078d6ce75d54b732049d995660daf321bf46fb5b77c5404ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_torvalds, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, io.buildah.version=1.33.12, version=7, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph) Nov 23 04:53:52 localhost systemd[1]: Started libpod-conmon-204d6548dd31277078d6ce75d54b732049d995660daf321bf46fb5b77c5404ff.scope. Nov 23 04:53:52 localhost systemd[1]: Started libcrun container. Nov 23 04:53:52 localhost podman[304248]: 2025-11-23 09:53:52.756072903 +0000 UTC m=+0.211896176 container init 204d6548dd31277078d6ce75d54b732049d995660daf321bf46fb5b77c5404ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_torvalds, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, architecture=x86_64, version=7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 23 04:53:52 localhost podman[304248]: 2025-11-23 09:53:52.657630965 +0000 UTC m=+0.113454278 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:52 localhost podman[304248]: 2025-11-23 09:53:52.765800751 +0000 UTC m=+0.221624024 container start 204d6548dd31277078d6ce75d54b732049d995660daf321bf46fb5b77c5404ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_torvalds, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, ceph=True, vcs-type=git, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , version=7, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, architecture=x86_64, description=Red Hat Ceph Storage 7) Nov 23 04:53:52 localhost podman[304248]: 2025-11-23 09:53:52.76609889 +0000 UTC m=+0.221922163 container attach 204d6548dd31277078d6ce75d54b732049d995660daf321bf46fb5b77c5404ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_torvalds, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=553, distribution-scope=public, io.openshift.expose-services=, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-09-24T08:57:55, name=rhceph) Nov 23 04:53:52 localhost silly_torvalds[304275]: 167 167 Nov 23 04:53:52 localhost systemd[1]: libpod-204d6548dd31277078d6ce75d54b732049d995660daf321bf46fb5b77c5404ff.scope: Deactivated successfully. 
Nov 23 04:53:52 localhost podman[304248]: 2025-11-23 09:53:52.770352589 +0000 UTC m=+0.226175932 container died 204d6548dd31277078d6ce75d54b732049d995660daf321bf46fb5b77c5404ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_torvalds, GIT_CLEAN=True, io.buildah.version=1.33.12, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553) Nov 23 04:53:52 localhost podman[304280]: 2025-11-23 09:53:52.851342234 +0000 UTC m=+0.071549037 container remove 204d6548dd31277078d6ce75d54b732049d995660daf321bf46fb5b77c5404ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_torvalds, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, release=553, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=) Nov 23 04:53:52 localhost systemd[1]: libpod-conmon-204d6548dd31277078d6ce75d54b732049d995660daf321bf46fb5b77c5404ff.scope: Deactivated successfully. 
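The create, init, start, attach, died and remove events above all refer to one short-lived container (name=silly_torvalds) launched from registry.redhat.io/rhceph/rhceph-7-rhel9:latest; it printed "167 167" (most likely the ceph uid and gid cephadm checks on this host) and was gone again in well under a second. That turnaround can be measured straight from the podman journal events. A sketch under the assumption of one journal record per line, with a regex mirroring the "<timestamp> +0000 UTC m=+... container <event> <id>" layout seen here:

    import re
    from datetime import datetime

    EVENT_RE = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.(\d+) \+0000 UTC "
        r".*? container ([a-z_]+) ([0-9a-f]{64})"
    )

    def container_events(lines):
        """Yield (timestamp, event, container_id) from podman journal lines."""
        for line in lines:
            for m in EVENT_RE.finditer(line):
                ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
                # podman logs nanoseconds; keep the first six digits as microseconds
                ts = ts.replace(microsecond=int(m.group(2)[:6].ljust(6, "0")))
                yield ts, m.group(3), m.group(4)

    def lifetimes(lines):
        """Seconds between each container's create and remove events."""
        created, spans = {}, {}
        for ts, event, cid in container_events(lines):
            if event == "create":
                created[cid] = ts
            elif event == "remove" and cid in created:
                spans[cid] = (ts - created[cid]).total_seconds()
        return spans

For the container above, create at 09:53:52.693 and remove at 09:53:52.851 give a lifetime of roughly 0.16 seconds.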
Nov 23 04:53:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Nov 23 04:53:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:53 localhost ceph-mon[293353]: Reconfiguring osd.5 (monmap changed)... Nov 23 04:53:53 localhost ceph-mon[293353]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:53:53 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:53 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:53 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:53 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)... 
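The auth get-or-create command just above asks the monitors to create or return the key for mds.mds.np0005532584.aoxjmw with the capability set cephadm requests for MDS daemons: mon "profile mds", osd "allow rw tag cephfs *=*", mds "allow"; the same pattern repeats further down for mgr and crash entities. Because each audit record carries the command as JSON after "cmd=", the requested entities and caps can be tabulated from the journal. A sketch, again assuming one record per line (the auth payloads here never nest braces, which is why the lazy match up to "} : dispatch" is enough):

    import json
    import re

    CMD_RE = re.compile(r"cmd=(\{.*?\}) : dispatch")

    def auth_requests(lines):
        """Map entity name to requested caps for 'auth get-or-create' audits."""
        requested = {}
        for line in lines:
            for payload in CMD_RE.findall(line):
                cmd = json.loads(payload)
                if cmd.get("prefix") == "auth get-or-create":
                    requested[cmd["entity"]] = cmd.get("caps", [])
        return requested

Run over this section it would map mds.mds.np0005532584.aoxjmw, mgr.np0005532584.naxwxy, client.crash.np0005532585.localdomain and the later entities to the caps lists shown in their dispatch records.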
Nov 23 04:53:53 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:53 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain Nov 23 04:53:53 localhost podman[304356]: Nov 23 04:53:53 localhost podman[304356]: 2025-11-23 09:53:53.672366426 +0000 UTC m=+0.073608190 container create c0f48fc9b3d706adf8b97bb8a460900f60dd7656962e275e0593265aa158f201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_jang, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, release=553, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.expose-services=, ceph=True) Nov 23 04:53:53 localhost systemd[1]: Started libpod-conmon-c0f48fc9b3d706adf8b97bb8a460900f60dd7656962e275e0593265aa158f201.scope. Nov 23 04:53:53 localhost systemd[1]: Started libcrun container. Nov 23 04:53:53 localhost podman[304356]: 2025-11-23 09:53:53.735565928 +0000 UTC m=+0.136807712 container init c0f48fc9b3d706adf8b97bb8a460900f60dd7656962e275e0593265aa158f201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_jang, vendor=Red Hat, Inc., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, ceph=True, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:53:53 localhost systemd[1]: var-lib-containers-storage-overlay-70cab6aa758874a2628bb0d72a24d624fc02477e4dd2551454011b52ac2f424f-merged.mount: Deactivated successfully. 
Nov 23 04:53:53 localhost podman[304356]: 2025-11-23 09:53:53.643049211 +0000 UTC m=+0.044291005 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:53 localhost podman[304356]: 2025-11-23 09:53:53.744820581 +0000 UTC m=+0.146062345 container start c0f48fc9b3d706adf8b97bb8a460900f60dd7656962e275e0593265aa158f201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_jang, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, version=7, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, name=rhceph) Nov 23 04:53:53 localhost podman[304356]: 2025-11-23 09:53:53.745073608 +0000 UTC m=+0.146315382 container attach c0f48fc9b3d706adf8b97bb8a460900f60dd7656962e275e0593265aa158f201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_jang, ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph) Nov 23 04:53:53 localhost dazzling_jang[304371]: 167 167 Nov 23 04:53:53 localhost systemd[1]: libpod-c0f48fc9b3d706adf8b97bb8a460900f60dd7656962e275e0593265aa158f201.scope: Deactivated successfully. 
Nov 23 04:53:53 localhost podman[304356]: 2025-11-23 09:53:53.747772801 +0000 UTC m=+0.149014565 container died c0f48fc9b3d706adf8b97bb8a460900f60dd7656962e275e0593265aa158f201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_jang, GIT_CLEAN=True, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, ceph=True, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=553, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:53:53 localhost systemd[1]: var-lib-containers-storage-overlay-5852fa642ade221d2db8201b19b80e91774fa3385bdd7ee3de116baf8d71506f-merged.mount: Deactivated successfully. Nov 23 04:53:53 localhost podman[304376]: 2025-11-23 09:53:53.841272029 +0000 UTC m=+0.084379050 container remove c0f48fc9b3d706adf8b97bb8a460900f60dd7656962e275e0593265aa158f201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_jang, version=7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main) Nov 23 04:53:53 localhost systemd[1]: libpod-conmon-c0f48fc9b3d706adf8b97bb8a460900f60dd7656962e275e0593265aa158f201.scope: Deactivated successfully. 
Nov 23 04:53:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Nov 23 04:53:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:54 localhost podman[304447]: Nov 23 04:53:54 localhost podman[304447]: 2025-11-23 09:53:54.548635017 +0000 UTC m=+0.087275198 container create bf0ccdeeeda13044d7df6b02617e32594e1e0de8ce22a23f5b1c29422d509374 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_noyce, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, ceph=True, GIT_CLEAN=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7) Nov 23 04:53:54 localhost systemd[1]: Started libpod-conmon-bf0ccdeeeda13044d7df6b02617e32594e1e0de8ce22a23f5b1c29422d509374.scope. Nov 23 04:53:54 localhost systemd[1]: Started libcrun container. 
Nov 23 04:53:54 localhost podman[304447]: 2025-11-23 09:53:54.510645446 +0000 UTC m=+0.049285617 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:53:54 localhost podman[304447]: 2025-11-23 09:53:54.612179259 +0000 UTC m=+0.150819430 container init bf0ccdeeeda13044d7df6b02617e32594e1e0de8ce22a23f5b1c29422d509374 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_noyce, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:53:54 localhost podman[304447]: 2025-11-23 09:53:54.621134703 +0000 UTC m=+0.159774874 container start bf0ccdeeeda13044d7df6b02617e32594e1e0de8ce22a23f5b1c29422d509374 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_noyce, distribution-scope=public, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:53:54 localhost sad_noyce[304462]: 167 167 Nov 23 04:53:54 localhost podman[304447]: 2025-11-23 09:53:54.623091503 +0000 UTC m=+0.161731674 container attach bf0ccdeeeda13044d7df6b02617e32594e1e0de8ce22a23f5b1c29422d509374 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_noyce, GIT_CLEAN=True, name=rhceph, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 23 04:53:54 localhost systemd[1]: libpod-bf0ccdeeeda13044d7df6b02617e32594e1e0de8ce22a23f5b1c29422d509374.scope: Deactivated successfully. Nov 23 04:53:54 localhost podman[304447]: 2025-11-23 09:53:54.626717223 +0000 UTC m=+0.165357424 container died bf0ccdeeeda13044d7df6b02617e32594e1e0de8ce22a23f5b1c29422d509374 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_noyce, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, com.redhat.component=rhceph-container, ceph=True) Nov 23 04:53:54 localhost podman[304468]: 2025-11-23 09:53:54.750721723 +0000 UTC m=+0.117981607 container remove bf0ccdeeeda13044d7df6b02617e32594e1e0de8ce22a23f5b1c29422d509374 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_noyce, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, version=7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , architecture=x86_64) Nov 23 04:53:54 localhost systemd[1]: libpod-conmon-bf0ccdeeeda13044d7df6b02617e32594e1e0de8ce22a23f5b1c29422d509374.scope: Deactivated successfully. 
Nov 23 04:53:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:54 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:54 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Nov 23 04:53:54 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:54 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:54 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:54 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:54 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)... Nov 23 04:53:54 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:54 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain Nov 23 04:53:54 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:54 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:54 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:54 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:53:55 localhost ceph-mon[293353]: Reconfiguring 
crash.np0005532585 (monmap changed)... Nov 23 04:53:55 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
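The "Started /usr/bin/podman healthcheck run <id>" units above are the transient systemd services podman schedules to drive container healthchecks; the health_status=healthy events that follow a few seconds later are the results of those probes. The same probe can be triggered by hand. A minimal sketch, treating a zero exit status as healthy (which matches podman's documented behaviour, but worth verifying against the podman version deployed here):

    import subprocess

    def container_healthy(container_id: str) -> bool:
        """Run the container's configured healthcheck once, as the transient
        systemd units in this journal do, and report whether it passed."""
        result = subprocess.run(
            ["podman", "healthcheck", "run", container_id],
            capture_output=True,
            text=True,
            check=False,
        )
        return result.returncode == 0

Passing the ceilometer_agent_compute container ID from the events below would exercise its "/openstack/healthcheck compute" test.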
Nov 23 04:53:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost podman[304484]: 2025-11-23 09:53:56.913309755 +0000 UTC m=+0.095753007 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:53:56 localhost podman[304485]: 2025-11-23 09:53:56.961062384 +0000 UTC m=+0.143668821 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:53:56 localhost ceph-mon[293353]: Reconfig service osd.default_drive_group Nov 23 04:53:56 localhost ceph-mon[293353]: Reconfiguring osd.0 (monmap changed)... Nov 23 04:53:56 localhost ceph-mon[293353]: Reconfiguring daemon osd.0 on np0005532585.localdomain Nov 23 04:53:56 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:56 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 23 04:53:56 localhost podman[304485]: 2025-11-23 09:53:56.971803223 +0000 UTC m=+0.154409640 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:53:56 localhost podman[304484]: 2025-11-23 09:53:56.974944988 +0000 UTC m=+0.157388250 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 
'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:53:56 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:53:57 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:53:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:53:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Nov 23 04:53:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:58 localhost ceph-mon[293353]: Reconfiguring osd.3 (monmap changed)... 
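The health_status records for ceilometer_agent_compute and podman_exporter above carry the complete EDPM container definition in a config_data=... label written as a Python dict literal (single quotes, nested dicts and lists), so it can be recovered with ast.literal_eval once the balanced {...} span is cut out of the record. A sketch, with the brace matching written out because the dict nests; it expects the whole record on one line, as journalctl emits it:

    import ast

    def extract_config_data(line: str):
        """Return the config_data dict embedded in a podman journal record,
        or None if the record carries no config_data label."""
        key = "config_data="
        start = line.find(key)
        if start < 0:
            return None
        i = start + len(key)               # index of the opening '{'
        depth = 0
        # Assumes the literal's strings contain no brace characters,
        # which holds for the records in this journal.
        for j, ch in enumerate(line[i:], i):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # literal_eval is safe for plain literals like these labels
                    return ast.literal_eval(line[i:j + 1])
        return None                         # truncated or unbalanced record

For the ceilometer record, extract_config_data(record)['healthcheck']['test'] gives '/openstack/healthcheck compute', the probe the healthcheck units above are running.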
Nov 23 04:53:58 localhost ceph-mon[293353]: Reconfiguring daemon osd.3 on np0005532585.localdomain Nov 23 04:53:58 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:58 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:58 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:58 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:58 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:58 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:53:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:58 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:58 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Nov 23 04:53:58 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:59 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)... 
Nov 23 04:53:59 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain Nov 23 04:53:59 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:59 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:59 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:59 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 23 04:53:59 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:53:59 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:59 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:53:59 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:53:59 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Nov 23 04:53:59 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:54:00 localhost ceph-mon[293353]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)... Nov 23 04:54:00 localhost ceph-mon[293353]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain Nov 23 04:54:00 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:54:00 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:54:00 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:54:00 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1389503190' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1389503190' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mgr fail"} v 0) Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e85 do_prune osdmap full prune enabled Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : Activating manager daemon np0005532586.thmvqb Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 e86: 6 total, 6 up, 6 in Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e86: 6 total, 6 up, 6 in Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e35: np0005532586.thmvqb(active, starting, since 0.0355882s), standbys: np0005532583.orhywt, np0005532584.naxwxy Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : Manager daemon np0005532586.thmvqb is now available Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} v 0) Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch Nov 23 04:54:00 localhost systemd[1]: session-69.scope: Deactivated successfully. Nov 23 04:54:00 localhost systemd[1]: session-69.scope: Consumed 28.524s CPU time. Nov 23 04:54:00 localhost systemd-logind[760]: Session 69 logged out. Waiting for processes to exit. Nov 23 04:54:00 localhost systemd-logind[760]: Removed session 69. 
Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"}]': finished Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} v 0) Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"}]': finished Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/mirror_snapshot_schedule"} v 0) Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/mirror_snapshot_schedule"} : dispatch Nov 23 04:54:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/trash_purge_schedule"} v 0) Nov 23 04:54:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/trash_purge_schedule"} : dispatch Nov 23 04:54:00 localhost sshd[304524]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:54:01 localhost systemd-logind[760]: New session 70 of user ceph-admin. Nov 23 04:54:01 localhost systemd[1]: Started Session 70 of User ceph-admin. Nov 23 04:54:01 localhost nova_compute[280939]: 2025-11-23 09:54:01.160 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:54:01 localhost ceph-mon[293353]: Reconfiguring crash.np0005532586 (monmap changed)... Nov 23 04:54:01 localhost ceph-mon[293353]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: Activating manager daemon np0005532586.thmvqb Nov 23 04:54:01 localhost ceph-mon[293353]: from='client.? 172.18.0.200:0/3957521171' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 23 04:54:01 localhost ceph-mon[293353]: Manager daemon np0005532586.thmvqb is now available Nov 23 04:54:01 localhost ceph-mon[293353]: removing stray HostCache host record np0005532583.localdomain.devices.0 Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"}]': finished Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"}]': finished Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/mirror_snapshot_schedule"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/mirror_snapshot_schedule"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/trash_purge_schedule"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/trash_purge_schedule"} : dispatch Nov 23 04:54:01 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e36: np0005532586.thmvqb(active, since 1.05257s), standbys: np0005532583.orhywt, np0005532584.naxwxy Nov 23 04:54:02 localhost podman[304636]: 2025-11-23 09:54:02.118184022 +0000 UTC m=+0.091148516 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume 
Abrioux , release=553, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 23 04:54:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:02 localhost podman[304636]: 2025-11-23 09:54:02.254407736 +0000 UTC m=+0.227372180 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, release=553, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:54:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:54:02 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:54:02 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:54:02 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:54:02 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:54:02 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key 
set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:54:02 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:03 localhost nova_compute[280939]: 2025-11-23 09:54:03.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:54:03 localhost nova_compute[280939]: 2025-11-23 09:54:03.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:54:03 localhost nova_compute[280939]: 2025-11-23 09:54:03.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:54:03 localhost nova_compute[280939]: 2025-11-23 09:54:03.146 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:54:03 localhost nova_compute[280939]: 2025-11-23 09:54:03.146 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:54:03 localhost nova_compute[280939]: 2025-11-23 09:54:03.147 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:54:03 localhost nova_compute[280939]: 2025-11-23 09:54:03.147 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:54:03 localhost nova_compute[280939]: 2025-11-23 09:54:03.148 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:54:03 localhost ceph-mon[293353]: [23/Nov/2025:09:54:01] ENGINE Bus STARTING Nov 23 04:54:03 localhost ceph-mon[293353]: [23/Nov/2025:09:54:02] ENGINE Serving on http://172.18.0.108:8765 Nov 23 04:54:03 localhost ceph-mon[293353]: [23/Nov/2025:09:54:02] ENGINE Serving on https://172.18.0.108:7150 Nov 23 04:54:03 localhost ceph-mon[293353]: [23/Nov/2025:09:54:02] ENGINE Bus STARTED Nov 23 04:54:03 localhost ceph-mon[293353]: [23/Nov/2025:09:54:02] ENGINE Client ('172.18.0.108', 35224) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 23 04:54:03 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:03 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:03 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:03 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:03 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:03 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:03 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e37: np0005532586.thmvqb(active, since 3s), standbys: np0005532583.orhywt, np0005532584.naxwxy Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 
handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 
localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 04:54:04 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:54:05 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : Standby manager daemon np0005532585.gzafiw started Nov 23 04:54:05 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 04:54:05 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 04:54:05 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:54:05 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 04:54:05 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:54:05 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 04:54:05 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:54:05 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:54:05 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:54:06 localhost nova_compute[280939]: 2025-11-23 09:54:06.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running 
periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:54:06 localhost nova_compute[280939]: 2025-11-23 09:54:06.134 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:54:06 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e38: np0005532586.thmvqb(active, since 5s), standbys: np0005532583.orhywt, np0005532584.naxwxy, np0005532585.gzafiw Nov 23 04:54:06 localhost openstack_network_exporter[241732]: ERROR 09:54:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:54:06 localhost openstack_network_exporter[241732]: ERROR 09:54:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:54:06 localhost openstack_network_exporter[241732]: ERROR 09:54:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:54:06 localhost openstack_network_exporter[241732]: ERROR 09:54:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:54:06 localhost openstack_network_exporter[241732]: Nov 23 04:54:06 localhost openstack_network_exporter[241732]: ERROR 09:54:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:54:06 localhost openstack_network_exporter[241732]: Nov 23 04:54:06 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:54:06 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:54:06 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:54:06 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:54:06 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:54:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 04:54:07 localhost podman[305483]: 2025-11-23 09:54:07.064784229 +0000 UTC m=+0.080316616 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:54:07 localhost podman[305483]: 2025-11-23 09:54:07.073291699 +0000 UTC m=+0.088824066 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 23 04:54:07 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:54:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:54:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:54:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:54:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:54:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:54:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:54:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:54:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Nov 23 04:54:07 localhost ceph-mon[293353]: log_channel(cluster) log [WRN] : Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Nov 23 04:54:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:54:07 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/4225034591' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:54:07 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:54:07 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:54:07 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:54:07 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:54:07 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:07 localhost ceph-mon[293353]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Nov 23 04:54:07 localhost ceph-mon[293353]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Nov 23 04:54:07 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.156 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.156 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.156 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.157 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) 
update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.157 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:54:08 localhost podman[305606]: Nov 23 04:54:08 localhost podman[305606]: 2025-11-23 09:54:08.300947628 +0000 UTC m=+0.068966968 container create f377117c320331875980ba631647d89abfb39b598d6fb72b771d930fb88261fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_feistel, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:54:08 localhost systemd[1]: Started libpod-conmon-f377117c320331875980ba631647d89abfb39b598d6fb72b771d930fb88261fa.scope. Nov 23 04:54:08 localhost podman[305606]: 2025-11-23 09:54:08.275231652 +0000 UTC m=+0.043250942 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:54:08 localhost systemd[1]: Started libcrun container. 
Nov 23 04:54:08 localhost podman[305606]: 2025-11-23 09:54:08.392400153 +0000 UTC m=+0.160419453 container init f377117c320331875980ba631647d89abfb39b598d6fb72b771d930fb88261fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_feistel, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, maintainer=Guillaume Abrioux , io.openshift.expose-services=, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git) Nov 23 04:54:08 localhost podman[305606]: 2025-11-23 09:54:08.403142322 +0000 UTC m=+0.171161612 container start f377117c320331875980ba631647d89abfb39b598d6fb72b771d930fb88261fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_feistel, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_CLEAN=True) Nov 23 04:54:08 localhost podman[305606]: 2025-11-23 09:54:08.403377609 +0000 UTC m=+0.171396899 container attach f377117c320331875980ba631647d89abfb39b598d6fb72b771d930fb88261fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_feistel, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-09-24T08:57:55, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux , name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph 
Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.expose-services=, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7) Nov 23 04:54:08 localhost dazzling_feistel[305638]: 167 167 Nov 23 04:54:08 localhost systemd[1]: libpod-f377117c320331875980ba631647d89abfb39b598d6fb72b771d930fb88261fa.scope: Deactivated successfully. Nov 23 04:54:08 localhost podman[305606]: 2025-11-23 09:54:08.405743391 +0000 UTC m=+0.173762711 container died f377117c320331875980ba631647d89abfb39b598d6fb72b771d930fb88261fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_feistel, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, release=553, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:54:08 localhost podman[305643]: 2025-11-23 09:54:08.496637569 +0000 UTC m=+0.079244853 container remove f377117c320331875980ba631647d89abfb39b598d6fb72b771d930fb88261fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_feistel, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, release=553, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git) Nov 23 04:54:08 localhost systemd[1]: libpod-conmon-f377117c320331875980ba631647d89abfb39b598d6fb72b771d930fb88261fa.scope: Deactivated successfully. Nov 23 04:54:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:54:08 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2315736127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.572 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:54:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:54:08 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:54:08 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:54:08 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:54:08 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.762 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.764 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12297MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.764 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.765 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.871 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.872 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:54:08 localhost nova_compute[280939]: 2025-11-23 09:54:08.895 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:54:08 localhost ceph-mon[293353]: Reconfiguring daemon osd.2 on np0005532584.localdomain Nov 23 04:54:08 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:08 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:08 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:08 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 23 04:54:08 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:54:09 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3629266017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:54:09 localhost systemd[1]: var-lib-containers-storage-overlay-a4f140aba12f5ee99db821ad93497eb0839b086c73b159f41c5ab6c678293619-merged.mount: Deactivated successfully. 
Nov 23 04:54:09 localhost nova_compute[280939]: 2025-11-23 09:54:09.310 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:54:09 localhost nova_compute[280939]: 2025-11-23 09:54:09.316 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:54:09 localhost nova_compute[280939]: 2025-11-23 09:54:09.334 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:54:09 localhost nova_compute[280939]: 2025-11-23 09:54:09.336 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:54:09 localhost podman[305742]: Nov 23 04:54:09 localhost nova_compute[280939]: 2025-11-23 09:54:09.336 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:54:09 localhost podman[305742]: 2025-11-23 09:54:09.34740449 +0000 UTC m=+0.077084137 container create 46d5ce376c87a76ae53baa3e971e0272d725f78bce3ee5131324ebd633db6a86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_gagarin, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, release=553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_CLEAN=True) Nov 23 04:54:09 localhost systemd[1]: Started 
libpod-conmon-46d5ce376c87a76ae53baa3e971e0272d725f78bce3ee5131324ebd633db6a86.scope. Nov 23 04:54:09 localhost systemd[1]: Started libcrun container. Nov 23 04:54:09 localhost podman[305742]: 2025-11-23 09:54:09.405871377 +0000 UTC m=+0.135551024 container init 46d5ce376c87a76ae53baa3e971e0272d725f78bce3ee5131324ebd633db6a86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_gagarin, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 23 04:54:09 localhost podman[305742]: 2025-11-23 09:54:09.414400117 +0000 UTC m=+0.144079774 container start 46d5ce376c87a76ae53baa3e971e0272d725f78bce3ee5131324ebd633db6a86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_gagarin, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:54:09 localhost podman[305742]: 2025-11-23 09:54:09.414644695 +0000 UTC m=+0.144324342 container attach 46d5ce376c87a76ae53baa3e971e0272d725f78bce3ee5131324ebd633db6a86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_gagarin, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_CLEAN=True, summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 23 04:54:09 localhost podman[305742]: 2025-11-23 09:54:09.316214707 +0000 UTC m=+0.045894394 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:54:09 localhost ecstatic_gagarin[305759]: 167 167 Nov 23 04:54:09 localhost systemd[1]: libpod-46d5ce376c87a76ae53baa3e971e0272d725f78bce3ee5131324ebd633db6a86.scope: Deactivated successfully. Nov 23 04:54:09 localhost podman[305742]: 2025-11-23 09:54:09.419633068 +0000 UTC m=+0.149312735 container died 46d5ce376c87a76ae53baa3e971e0272d725f78bce3ee5131324ebd633db6a86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_gagarin, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.33.12, distribution-scope=public, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container) Nov 23 04:54:09 localhost podman[305764]: 2025-11-23 09:54:09.51362927 +0000 UTC m=+0.082000047 container remove 46d5ce376c87a76ae53baa3e971e0272d725f78bce3ee5131324ebd633db6a86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_gagarin, CEPH_POINT_RELEASE=, release=553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, version=7, GIT_BRANCH=main, architecture=x86_64, build-date=2025-09-24T08:57:55, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 23 04:54:09 localhost systemd[1]: libpod-conmon-46d5ce376c87a76ae53baa3e971e0272d725f78bce3ee5131324ebd633db6a86.scope: Deactivated successfully. 
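[editor's note] The ecstatic_gagarin container above is created, started and removed within roughly 100 ms and prints only "167 167", which is consistent with cephadm probing the rhceph image for a UID/GID pair. A hypothetical one-shot probe of the same image (not necessarily the exact command cephadm runs; the stat target is illustrative only):

    import subprocess

    # One-shot, auto-removed container against the image pulled above.
    out = subprocess.run(
        ["podman", "run", "--rm",
         "registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
         "stat", "-c", "%u %g", "/var/lib/ceph"],   # assumed probe target
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    uid, gid = out.split()
    print(f"image ceph uid/gid: {uid}/{gid}")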
Nov 23 04:54:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:54:09 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:54:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:54:09.736 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:54:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:54:09.737 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:54:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:54:09.737 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:54:09 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:54:09 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:54:09 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:09 localhost ceph-mon[293353]: Reconfiguring daemon osd.5 on np0005532584.localdomain Nov 23 04:54:09 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:09 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:09 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:09 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:09 localhost ceph-mon[293353]: Reconfiguring osd.1 (monmap changed)... Nov 23 04:54:09 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 23 04:54:09 localhost ceph-mon[293353]: Reconfiguring daemon osd.1 on np0005532586.localdomain Nov 23 04:54:10 localhost systemd[1]: var-lib-containers-storage-overlay-94d216b2e10da7c733cfbfe4bff0e5d54f4acd22c8ea1f73103b593afa870290-merged.mount: Deactivated successfully. 
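[editor's note] The "handle_command mon_command(...)" and "cmd={\"prefix\": \"auth get\", \"entity\": \"osd.1\"} : dispatch" entries above show the leader monitor processing structured commands sent by the active mgr. The same command path can be exercised from Python through the rados bindings; a minimal sketch, assuming python3-rados and a usable client.admin keyring are present on the host:

    import json
    import rados

    # Connect using the same cluster config referenced elsewhere in this log.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()

    # Same JSON command the mgr dispatches above for osd.1.
    cmd = json.dumps({"prefix": "auth get", "entity": "osd.1"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode(errors="replace"), errs)
    cluster.shutdown()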
Nov 23 04:54:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:54:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 04:54:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:54:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:54:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:54:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:11 localhost nova_compute[280939]: 2025-11-23 09:54:11.337 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:54:11 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:11 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:11 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:11 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:11 localhost ceph-mon[293353]: Reconfiguring osd.4 (monmap changed)... 
Nov 23 04:54:11 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 23 04:54:11 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:11 localhost ceph-mon[293353]: Reconfiguring daemon osd.4 on np0005532586.localdomain Nov 23 04:54:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:54:11 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:54:11 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:54:11 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:54:11 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Nov 23 04:54:11 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:54:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:54:12 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:54:12 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:54:12 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:12 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:12 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:12 localhost ceph-mon[293353]: from='mgr.26494 ' 
entity='mgr.np0005532586.thmvqb' Nov 23 04:54:12 localhost ceph-mon[293353]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)... Nov 23 04:54:12 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:54:12 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:12 localhost ceph-mon[293353]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain Nov 23 04:54:12 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 23 04:54:12 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:12 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:12 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:54:12 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:54:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:54:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:54:13 localhost systemd[1]: tmp-crun.w44syb.mount: Deactivated successfully. 
Nov 23 04:54:13 localhost podman[305808]: 2025-11-23 09:54:13.91718407 +0000 UTC m=+0.091268119 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2) Nov 23 04:54:14 localhost podman[305807]: 2025-11-23 09:54:14.00323372 +0000 UTC m=+0.181332083 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:54:14 localhost podman[305807]: 2025-11-23 09:54:14.016150544 +0000 UTC m=+0.194248937 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 
'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:54:14 localhost podman[305806]: 2025-11-23 09:54:14.026105699 +0000 UTC m=+0.204088158 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:54:14 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 04:54:14 localhost podman[305806]: 2025-11-23 09:54:14.043341936 +0000 UTC m=+0.221324405 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2) Nov 23 04:54:14 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:54:14 localhost podman[305808]: 2025-11-23 09:54:14.067471904 +0000 UTC m=+0.241556013 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:54:14 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
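[editor's note] The transient "/usr/bin/podman healthcheck run <id>" units above are systemd-driven invocations of podman's built-in healthcheck, and each container reports health_status=healthy before its unit deactivates. The same check can be triggered by hand against the container names shown in the metadata; a small sketch, assuming a zero exit status means healthy:

    import subprocess

    # Container names taken from the health_status events above.
    for name in ("ovn_controller", "node_exporter", "multipathd"):
        r = subprocess.run(["podman", "healthcheck", "run", name],
                           capture_output=True, text=True)
        status = "healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})"
        print(f"{name}: {status} {r.stdout.strip()}")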
Nov 23 04:54:15 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Nov 23 04:54:15 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:15 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:54:15 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:15 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Nov 23 04:54:15 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:15 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 04:54:15 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:16 localhost podman[305943]: Nov 23 04:54:16 localhost podman[305943]: 2025-11-23 09:54:16.125582802 +0000 UTC m=+0.073463666 container create f15646241725cfc78c6dc2cbc7455dfee3a559767454dc9069000487609af0c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamarr, io.buildah.version=1.33.12, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc.) Nov 23 04:54:16 localhost systemd[1]: Started libpod-conmon-f15646241725cfc78c6dc2cbc7455dfee3a559767454dc9069000487609af0c2.scope. Nov 23 04:54:16 localhost systemd[1]: Started libcrun container. 
Nov 23 04:54:16 localhost podman[305943]: 2025-11-23 09:54:16.193512819 +0000 UTC m=+0.141393673 container init f15646241725cfc78c6dc2cbc7455dfee3a559767454dc9069000487609af0c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamarr, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12) Nov 23 04:54:16 localhost podman[305943]: 2025-11-23 09:54:16.098700981 +0000 UTC m=+0.046581865 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 23 04:54:16 localhost podman[305943]: 2025-11-23 09:54:16.204726502 +0000 UTC m=+0.152607356 container start f15646241725cfc78c6dc2cbc7455dfee3a559767454dc9069000487609af0c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamarr, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, release=553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, RELEASE=main, io.buildah.version=1.33.12, distribution-scope=public, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=) Nov 23 04:54:16 localhost podman[305943]: 2025-11-23 09:54:16.2049913 +0000 UTC m=+0.152872154 container attach f15646241725cfc78c6dc2cbc7455dfee3a559767454dc9069000487609af0c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamarr, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.33.12, RELEASE=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, release=553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in 
a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, version=7) Nov 23 04:54:16 localhost lucid_lamarr[305958]: 167 167 Nov 23 04:54:16 localhost systemd[1]: libpod-f15646241725cfc78c6dc2cbc7455dfee3a559767454dc9069000487609af0c2.scope: Deactivated successfully. Nov 23 04:54:16 localhost podman[305943]: 2025-11-23 09:54:16.209033693 +0000 UTC m=+0.156914557 container died f15646241725cfc78c6dc2cbc7455dfee3a559767454dc9069000487609af0c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamarr, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 23 04:54:16 localhost ceph-mon[293353]: Saving service mon spec with placement label:mon Nov 23 04:54:16 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:16 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:54:16 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:16 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:16 localhost ceph-mon[293353]: Reconfiguring mon.np0005532584 (monmap changed)... 
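[editor's note] "Saving service mon spec with placement label:mon" above records cephadm persisting a mon service specification whose placement is driven by a host label. The spec content itself is not in the log; a hypothetical spec consistent with that message, applied through `ceph orch apply -i`, could look like:

    import subprocess
    import tempfile

    # Illustrative spec only: schedule mons on every host carrying the "mon" label.
    spec = """\
    service_type: mon
    placement:
      label: mon
    """

    with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
        f.write(spec)
        path = f.name

    subprocess.run(["ceph", "orch", "apply", "-i", path], check=True)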
Nov 23 04:54:16 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:54:16 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain Nov 23 04:54:16 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:16 localhost podman[305963]: 2025-11-23 09:54:16.301275372 +0000 UTC m=+0.083302227 container remove f15646241725cfc78c6dc2cbc7455dfee3a559767454dc9069000487609af0c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamarr, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, version=7, maintainer=Guillaume Abrioux , name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 04:54:16 localhost systemd[1]: libpod-conmon-f15646241725cfc78c6dc2cbc7455dfee3a559767454dc9069000487609af0c2.scope: Deactivated successfully. Nov 23 04:54:16 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:54:16 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:16 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:54:16 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:17 localhost podman[239764]: time="2025-11-23T09:54:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:54:17 localhost podman[239764]: @ - - [23/Nov/2025:09:54:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:54:17 localhost systemd[1]: var-lib-containers-storage-overlay-953ea694284b36d3aae931dbe86ecfcef439fa774b7679fcc79e0e971425ec2f-merged.mount: Deactivated successfully. 
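[editor's note] The podman[239764] access-log entry above ("GET /v4.9.3/libpod/containers/json?all=true...") and the stats call that follows come from a client talking to the libpod REST API over the podman socket. A self-contained way to issue the same listing call, assuming the /run/podman/podman.sock path that the podman_exporter configuration later in this log points at:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP client over a unix socket (the libpod API is not on TCP here)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        # "Names" and "State" are the usual libpod list fields; guarded with .get().
        print(c.get("Names"), c.get("State"))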
Nov 23 04:54:17 localhost podman[239764]: @ - - [23/Nov/2025:09:54:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18223 "" "Go-http-client/1.1" Nov 23 04:54:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:17 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e39: np0005532586.thmvqb(active, since 16s), standbys: np0005532584.naxwxy, np0005532585.gzafiw Nov 23 04:54:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:54:17 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:54:17 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:17 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:17 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:17 localhost ceph-mon[293353]: Reconfiguring mon.np0005532585 (monmap changed)... Nov 23 04:54:17 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:54:17 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain Nov 23 04:54:17 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:17 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:17 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 23 04:54:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:54:18 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:54:18 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:18 localhost ceph-mon[293353]: Reconfiguring mon.np0005532586 (monmap changed)... Nov 23 04:54:18 localhost ceph-mon[293353]: Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain Nov 23 04:54:18 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:18 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:54:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
Nov 23 04:54:22 localhost podman[305980]: 2025-11-23 09:54:22.938534407 +0000 UTC m=+0.090648211 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64) Nov 23 04:54:22 localhost podman[305980]: 2025-11-23 09:54:22.951423451 +0000 UTC m=+0.103537305 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, name=ubi9-minimal, version=9.6, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is 
a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350) Nov 23 04:54:22 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:54:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. 
Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.203219) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667203251, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1526, "num_deletes": 255, "total_data_size": 3802618, "memory_usage": 4023720, "flush_reason": "Manual Compaction"} Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667222145, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 3362442, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20189, "largest_seqno": 21714, "table_properties": {"data_size": 3355375, "index_size": 3892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 18742, "raw_average_key_size": 22, "raw_value_size": 3339894, "raw_average_value_size": 4004, "num_data_blocks": 170, "num_entries": 834, "num_filter_entries": 834, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891633, "oldest_key_time": 1763891633, "file_creation_time": 1763891667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 18983 microseconds, and 7612 cpu microseconds. Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
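[editor's note] The ceph-mon store's RocksDB instance emits machine-readable EVENT_LOG_v1 records for the flush above (job 15 writes table #32) and for the manual compaction that follows (job 16 merges L0+L6 into table #33). Those embedded JSON payloads can be pulled back out of the journal text and summarized; a rough sketch operating on lines shaped like the ones shown here, with key names taken from these records:

    import json
    import re
    import sys

    # Matches the "EVENT_LOG_v1 {...}" payloads RocksDB writes into the mon log.
    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def iter_events(lines):
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    for ev in iter_events(sys.stdin):
        kind = ev.get("event")
        if kind == "table_file_creation":
            print(f"job {ev['job']}: wrote file #{ev['file_number']} "
                  f"({ev['file_size']} bytes)")
        elif kind == "compaction_finished":
            print(f"job {ev['job']}: compaction -> {ev['total_output_size']} bytes "
                  f"in {ev['compaction_time_micros']} us")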
Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.222197) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 3362442 bytes OK Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.222219) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.224326) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.224346) EVENT_LOG_v1 {"time_micros": 1763891667224340, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.224365) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3795161, prev total WAL file size 3795651, number of live WAL files 2. Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.225371) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373536' seq:72057594037927935, type:22 .. '6D6772737461740034303038' seq:0, type:0; will stop at (end) Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(3283KB)], [30(16MB)] Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667225417, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 20587196, "oldest_snapshot_seqno": -1} Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12209 keys, 18323962 bytes, temperature: kUnknown Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667312771, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 18323962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18254805, "index_size": 37568, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30533, "raw_key_size": 325798, "raw_average_key_size": 26, "raw_value_size": 18047362, "raw_average_value_size": 1478, "num_data_blocks": 1437, "num_entries": 12209, "num_filter_entries": 12209, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, 
"comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.313078) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 18323962 bytes Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.314933) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.5 rd, 209.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 16.4 +0.0 blob) out(17.5 +0.0 blob), read-write-amplify(11.6) write-amplify(5.4) OK, records in: 12737, records dropped: 528 output_compression: NoCompression Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.314964) EVENT_LOG_v1 {"time_micros": 1763891667314950, "job": 16, "event": "compaction_finished", "compaction_time_micros": 87430, "compaction_time_cpu_micros": 51833, "output_level": 6, "num_output_files": 1, "total_output_size": 18323962, "num_input_records": 12737, "num_output_records": 12209, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667315680, "job": 16, "event": "table_file_deletion", "file_number": 32} Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667318003, "job": 16, "event": "table_file_deletion", "file_number": 30} Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.225307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.318257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.318265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.318268) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.318271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:54:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:54:27.318275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:54:27 localhost podman[306000]: 2025-11-23 09:54:27.934170042 +0000 UTC m=+0.120535644 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute) Nov 23 04:54:27 localhost podman[306000]: 2025-11-23 09:54:27.974525915 +0000 UTC m=+0.160891477 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': 
{'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute) Nov 23 04:54:27 localhost systemd[1]: tmp-crun.ODJdzj.mount: Deactivated successfully. Nov 23 04:54:27 localhost podman[306001]: 2025-11-23 09:54:27.988619336 +0000 UTC m=+0.172463091 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:54:27 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:54:28 localhost podman[306001]: 2025-11-23 09:54:28.000307803 +0000 UTC m=+0.184151548 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:54:28 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:54:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:36 localhost openstack_network_exporter[241732]: ERROR 09:54:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:54:36 localhost openstack_network_exporter[241732]: ERROR 09:54:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:54:36 localhost openstack_network_exporter[241732]: ERROR 09:54:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:54:36 localhost openstack_network_exporter[241732]: ERROR 09:54:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:54:36 localhost openstack_network_exporter[241732]: Nov 23 04:54:36 localhost openstack_network_exporter[241732]: ERROR 09:54:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:54:36 localhost openstack_network_exporter[241732]: Nov 23 04:54:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:54:37 localhost podman[306042]: 2025-11-23 09:54:37.900552069 +0000 UTC m=+0.092166497 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Nov 23 04:54:37 localhost podman[306042]: 2025-11-23 09:54:37.936517819 +0000 UTC m=+0.128132277 container exec_died 
219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:54:37 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:54:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:54:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:54:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 04:54:44 localhost podman[306061]: 2025-11-23 09:54:44.90407015 +0000 UTC m=+0.085967768 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 04:54:44 localhost podman[306061]: 2025-11-23 09:54:44.944378452 +0000 UTC m=+0.126276020 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3) Nov 23 04:54:44 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:54:45 localhost podman[306062]: 2025-11-23 09:54:45.019660003 +0000 UTC m=+0.198155447 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:54:45 localhost podman[306062]: 2025-11-23 09:54:45.069151205 +0000 UTC m=+0.247646679 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:54:45 localhost systemd[1]: tmp-crun.2rBEd4.mount: Deactivated successfully. 
Nov 23 04:54:45 localhost podman[306063]: 2025-11-23 09:54:45.080892625 +0000 UTC m=+0.253401646 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 04:54:45 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:54:45 localhost podman[306063]: 2025-11-23 09:54:45.120581148 +0000 UTC m=+0.293090169 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 23 04:54:45 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:54:47 localhost podman[239764]: time="2025-11-23T09:54:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:54:47 localhost podman[239764]: @ - - [23/Nov/2025:09:54:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:54:47 localhost podman[239764]: @ - - [23/Nov/2025:09:54:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18225 "" "Go-http-client/1.1" Nov 23 04:54:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:47 localhost sshd[306123]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:54:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:54:53 localhost podman[306125]: 2025-11-23 09:54:53.897822813 +0000 UTC m=+0.083616177 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc.) Nov 23 04:54:53 localhost podman[306125]: 2025-11-23 09:54:53.911437219 +0000 UTC m=+0.097230583 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public) Nov 23 04:54:53 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 04:54:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:54:58 localhost podman[306145]: 2025-11-23 09:54:58.891023669 +0000 UTC m=+0.078335412 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm) Nov 23 04:54:58 localhost podman[306145]: 2025-11-23 09:54:58.930461028 +0000 UTC m=+0.117772741 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute) Nov 23 04:54:58 localhost systemd[1]: tmp-crun.z2gY0j.mount: Deactivated successfully. Nov 23 04:54:58 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:54:58 localhost podman[306146]: 2025-11-23 09:54:58.953183879 +0000 UTC m=+0.138101659 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:54:58 localhost podman[306146]: 2025-11-23 09:54:58.960415429 +0000 UTC m=+0.145333239 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:54:58 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:55:01 localhost nova_compute[280939]: 2025-11-23 09:55:01.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:55:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.239758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702239810, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 592, "num_deletes": 256, "total_data_size": 368103, "memory_usage": 379560, "flush_reason": "Manual Compaction"} Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702244849, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 353856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21715, "largest_seqno": 22306, "table_properties": {"data_size": 350917, "index_size": 922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6845, "raw_average_key_size": 18, "raw_value_size": 344937, "raw_average_value_size": 927, "num_data_blocks": 41, "num_entries": 372, "num_filter_entries": 372, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891667, "oldest_key_time": 1763891667, "file_creation_time": 1763891702, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5139 microseconds, and 1802 cpu microseconds. Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.244896) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 353856 bytes OK Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.244916) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.247684) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.247706) EVENT_LOG_v1 {"time_micros": 1763891702247699, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.247724) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 364867, prev total WAL file size 365191, number of live WAL files 2. Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.248396) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373733' seq:72057594037927935, type:22 .. '6C6F676D0034303235' seq:0, type:0; will stop at (end) Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(345KB)], [33(17MB)] Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702248435, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 18677818, "oldest_snapshot_seqno": -1} Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12056 keys, 18579156 bytes, temperature: kUnknown Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702328223, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 18579156, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18509899, "index_size": 38052, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 323560, "raw_average_key_size": 26, "raw_value_size": 18303949, "raw_average_value_size": 1518, "num_data_blocks": 1455, "num_entries": 12056, "num_filter_entries": 12056, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891702, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.328568) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 18579156 bytes Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.330360) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 233.8 rd, 232.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 17.5 +0.0 blob) out(17.7 +0.0 blob), read-write-amplify(105.3) write-amplify(52.5) OK, records in: 12581, records dropped: 525 output_compression: NoCompression Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.330389) EVENT_LOG_v1 {"time_micros": 1763891702330376, "job": 18, "event": "compaction_finished", "compaction_time_micros": 79884, "compaction_time_cpu_micros": 48753, "output_level": 6, "num_output_files": 1, "total_output_size": 18579156, "num_input_records": 12581, "num_output_records": 12056, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702330616, "job": 18, "event": "table_file_deletion", "file_number": 35} Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702333329, "job": 18, "event": "table_file_deletion", "file_number": 33} Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.248333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.333415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.333421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.333423) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.333425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:02.333427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:03 localhost nova_compute[280939]: 2025-11-23 09:55:03.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:55:04 localhost nova_compute[280939]: 2025-11-23 09:55:04.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:55:04 localhost nova_compute[280939]: 2025-11-23 09:55:04.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:55:04 localhost nova_compute[280939]: 2025-11-23 09:55:04.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:55:04 localhost nova_compute[280939]: 2025-11-23 09:55:04.156 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:55:05 localhost nova_compute[280939]: 2025-11-23 09:55:05.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:55:05 localhost nova_compute[280939]: 2025-11-23 09:55:05.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:55:05 localhost nova_compute[280939]: 2025-11-23 09:55:05.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:55:06 localhost nova_compute[280939]: 2025-11-23 09:55:06.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:55:06 localhost openstack_network_exporter[241732]: ERROR 09:55:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:55:06 localhost openstack_network_exporter[241732]: ERROR 09:55:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:55:06 localhost openstack_network_exporter[241732]: ERROR 09:55:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:55:06 localhost openstack_network_exporter[241732]: ERROR 09:55:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:55:06 localhost openstack_network_exporter[241732]: Nov 23 04:55:06 localhost openstack_network_exporter[241732]: ERROR 09:55:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:55:06 localhost openstack_network_exporter[241732]: Nov 23 04:55:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.148 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.170 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.170 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.170 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.171 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.171 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:55:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:55:08 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2747986739' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.610 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:55:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.816 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.818 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12330MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.818 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.819 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.895 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.895 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:55:08 localhost systemd[1]: tmp-crun.Z1Srjx.mount: Deactivated successfully. Nov 23 04:55:08 localhost podman[306210]: 2025-11-23 09:55:08.908325499 +0000 UTC m=+0.086779479 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:55:08 localhost podman[306210]: 2025-11-23 09:55:08.913803666 +0000 UTC m=+0.092257666 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent) Nov 23 04:55:08 localhost nova_compute[280939]: 2025-11-23 09:55:08.919 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:55:08 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:55:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:55:09 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2084422730' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:55:09 localhost nova_compute[280939]: 2025-11-23 09:55:09.354 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:55:09 localhost nova_compute[280939]: 2025-11-23 09:55:09.360 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:55:09 localhost nova_compute[280939]: 2025-11-23 09:55:09.380 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:55:09 localhost nova_compute[280939]: 2025-11-23 09:55:09.382 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:55:09 localhost nova_compute[280939]: 2025-11-23 09:55:09.382 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:55:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:55:09.737 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:55:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:55:09.738 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:55:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:55:09.738 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:55:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.576 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.577 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:55:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:55:13 localhost nova_compute[280939]: 2025-11-23 09:55:13.367 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:55:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:55:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:55:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:55:15 localhost systemd[1]: tmp-crun.vVLOmx.mount: Deactivated successfully. 
Nov 23 04:55:15 localhost podman[306250]: 2025-11-23 09:55:15.901060316 +0000 UTC m=+0.081272453 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 23 04:55:15 localhost podman[306250]: 2025-11-23 09:55:15.916423623 +0000 UTC m=+0.096635750 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true) Nov 23 04:55:15 localhost podman[306252]: 2025-11-23 09:55:15.919081604 +0000 UTC m=+0.090526744 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3) Nov 23 04:55:15 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:55:15 localhost podman[306251]: 2025-11-23 09:55:15.969228879 +0000 UTC m=+0.144363541 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:55:15 localhost podman[306251]: 2025-11-23 09:55:15.976975884 +0000 UTC m=+0.152110606 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, 
name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:55:15 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:55:16 localhost podman[306252]: 2025-11-23 09:55:16.049255622 +0000 UTC m=+0.220700742 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 04:55:16 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:55:17 localhost podman[239764]: time="2025-11-23T09:55:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:55:17 localhost podman[239764]: @ - - [23/Nov/2025:09:55:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:55:17 localhost podman[239764]: @ - - [23/Nov/2025:09:55:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18231 "" "Go-http-client/1.1" Nov 23 04:55:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:17 localhost sshd[306313]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:55:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:55:19 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:55:19 localhost ceph-mon[293353]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:55:19 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:55:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 04:55:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:55:21 localhost ceph-mon[293353]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' Nov 23 04:55:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mgr fail"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e86 do_prune osdmap full prune enabled Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : Activating manager daemon np0005532584.naxwxy Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 e87: 6 total, 6 up, 6 in Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr handle_mgr_map Activating! Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr handle_mgr_map I am now activating Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e87: 6 total, 6 up, 6 in Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e40: np0005532584.naxwxy(active, starting, since 0.0272642s), standbys: np0005532585.gzafiw Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532584"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mon metadata", "id": "np0005532584"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532586"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mon metadata", "id": "np0005532586"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005532585.jcltnl"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mds metadata", "who": "mds.np0005532585.jcltnl"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).mds e16 all = 0 Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005532584.aoxjmw"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mds metadata", "who": "mds.np0005532584.aoxjmw"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).mds e16 all = 0 Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005532586.mfohsb"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mds metadata", "who": "mds.np0005532586.mfohsb"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).mds e16 all = 0 Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005532584.naxwxy", "id": "np0005532584.naxwxy"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mgr metadata", "who": "np0005532584.naxwxy", "id": "np0005532584.naxwxy"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005532585.gzafiw", "id": "np0005532585.gzafiw"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 
172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mgr metadata", "who": "np0005532585.gzafiw", "id": "np0005532585.gzafiw"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd metadata", "id": 0} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd metadata", "id": 1} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd metadata", "id": 2} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd metadata", "id": 3} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd metadata", "id": 4} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd metadata", "id": 5} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mds metadata"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mds metadata"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).mds e16 all = 1 Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd metadata"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd metadata"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon metadata"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mon metadata"} : dispatch Nov 23 04:55:23 localhost ceph-mgr[286671]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: balancer Nov 23 04:55:23 localhost ceph-mgr[286671]: 
[cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : Manager daemon np0005532584.naxwxy is now available Nov 23 04:55:23 localhost ceph-mgr[286671]: [balancer INFO root] Starting Nov 23 04:55:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_09:55:23 Nov 23 04:55:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 04:55:23 localhost ceph-mgr[286671]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: cephadm Nov 23 04:55:23 localhost ceph-mgr[286671]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: crash Nov 23 04:55:23 localhost ceph-mgr[286671]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: devicehealth Nov 23 04:55:23 localhost ceph-mgr[286671]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: iostat Nov 23 04:55:23 localhost ceph-mgr[286671]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: nfs Nov 23 04:55:23 localhost ceph-mgr[286671]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: orchestrator Nov 23 04:55:23 localhost ceph-mgr[286671]: [devicehealth INFO root] Starting Nov 23 04:55:23 localhost ceph-mgr[286671]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: pg_autoscaler Nov 23 04:55:23 localhost ceph-mgr[286671]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: progress Nov 23 04:55:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 04:55:23 localhost systemd[1]: session-70.scope: Deactivated successfully. Nov 23 04:55:23 localhost systemd[1]: session-70.scope: Consumed 9.230s CPU time. Nov 23 04:55:23 localhost systemd-logind[760]: Session 70 logged out. Waiting for processes to exit. Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost systemd-logind[760]: Removed session 70. Nov 23 04:55:23 localhost ceph-mgr[286671]: [progress INFO root] Loading... Nov 23 04:55:23 localhost ceph-mgr[286671]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Nov 23 04:55:23 localhost ceph-mgr[286671]: [progress INFO root] Loaded OSDMap, ready. 
Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] recovery thread starting Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] starting setup Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: rbd_support Nov 23 04:55:23 localhost ceph-mgr[286671]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: restful Nov 23 04:55:23 localhost ceph-mgr[286671]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: status Nov 23 04:55:23 localhost ceph-mgr[286671]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: telemetry Nov 23 04:55:23 localhost ceph-mgr[286671]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/mirror_snapshot_schedule"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/mirror_snapshot_schedule"} : dispatch Nov 23 04:55:23 localhost ceph-mgr[286671]: [restful INFO root] server_addr: :: server_port: 8003 Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: [restful WARNING root] server not running: no certificate configured Nov 23 04:55:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 04:55:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 04:55:23 localhost ceph-mgr[286671]: mgr load Constructed class from module: volumes Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:55:23.345+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:55:23.345+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:55:23.345+0000 
7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:55:23.345+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:55:23.345+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:55:23.348+0000 7f9ccab0e640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:55:23.348+0000 7f9ccab0e640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:55:23.348+0000 7f9ccab0e640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:55:23.348+0000 7f9ccab0e640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T09:55:23.348+0000 7f9ccab0e640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] PerfHandler: starting Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_task_task: vms, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_task_task: volumes, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_task_task: images, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_task_task: backups, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TaskHandler: starting Nov 23 04:55:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/trash_purge_schedule"} v 0) Nov 23 04:55:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/trash_purge_schedule"} : dispatch Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 04:55:23 localhost 
ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Nov 23 04:55:23 localhost ceph-mgr[286671]: [rbd_support INFO root] setup complete Nov 23 04:55:23 localhost sshd[306539]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:55:23 localhost systemd-logind[760]: New session 71 of user ceph-admin. Nov 23 04:55:23 localhost systemd[1]: Started Session 71 of User ceph-admin. Nov 23 04:55:23 localhost ceph-mon[293353]: from='client.? 172.18.0.200:0/1136241170' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: Activating manager daemon np0005532584.naxwxy Nov 23 04:55:23 localhost ceph-mon[293353]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 23 04:55:23 localhost ceph-mon[293353]: Manager daemon np0005532584.naxwxy is now available Nov 23 04:55:23 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/mirror_snapshot_schedule"} : dispatch Nov 23 04:55:23 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/trash_purge_schedule"} : dispatch Nov 23 04:55:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e41: np0005532584.naxwxy(active, since 1.05703s), standbys: np0005532585.gzafiw Nov 23 04:55:24 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
Nov 23 04:55:24 localhost podman[306621]: 2025-11-23 09:55:24.411334394 +0000 UTC m=+0.094898677 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc.) 
Nov 23 04:55:24 localhost podman[306621]: 2025-11-23 09:55:24.428463795 +0000 UTC m=+0.112028038 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible) Nov 23 04:55:24 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:55:24 localhost systemd[1]: tmp-crun.Qp6gcP.mount: Deactivated successfully. 
Nov 23 04:55:24 localhost podman[306667]: 2025-11-23 09:55:24.702215818 +0000 UTC m=+0.104022813 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 23 04:55:24 localhost podman[306667]: 2025-11-23 09:55:24.812691827 +0000 UTC m=+0.214498832 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, vcs-type=git, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) 
Nov 23 04:55:25 localhost ceph-mgr[286671]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:55:25] ENGINE Bus STARTING Nov 23 04:55:25 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:55:25] ENGINE Bus STARTING Nov 23 04:55:25 localhost ceph-mgr[286671]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:55:25] ENGINE Serving on https://172.18.0.106:7150 Nov 23 04:55:25 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:55:25] ENGINE Serving on https://172.18.0.106:7150 Nov 23 04:55:25 localhost ceph-mgr[286671]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:55:25] ENGINE Client ('172.18.0.106', 60482) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 23 04:55:25 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:55:25] ENGINE Client ('172.18.0.106', 60482) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 23 04:55:25 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Nov 23 04:55:25 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Nov 23 04:55:25 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : Cluster is now healthy Nov 23 04:55:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:25 localhost ceph-mgr[286671]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:55:25] ENGINE Serving on http://172.18.0.106:8765 Nov 23 04:55:25 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:55:25] ENGINE Serving on http://172.18.0.106:8765 Nov 23 04:55:25 localhost ceph-mgr[286671]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:55:25] ENGINE Bus STARTED Nov 23 04:55:25 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:55:25] ENGINE Bus STARTED Nov 23 04:55:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:55:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:55:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:55:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:55:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:25 localhost 
ceph-mgr[286671]: [devicehealth INFO root] Check health Nov 23 04:55:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:55:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:55:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: [23/Nov/2025:09:55:25] ENGINE Bus STARTING Nov 23 04:55:26 localhost ceph-mon[293353]: [23/Nov/2025:09:55:25] ENGINE Serving on https://172.18.0.106:7150 Nov 23 04:55:26 localhost ceph-mon[293353]: [23/Nov/2025:09:55:25] ENGINE Client ('172.18.0.106', 60482) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 23 04:55:26 localhost ceph-mon[293353]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Nov 23 04:55:26 localhost ceph-mon[293353]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Nov 23 04:55:26 localhost ceph-mon[293353]: Cluster is now healthy Nov 23 04:55:26 localhost ceph-mon[293353]: [23/Nov/2025:09:55:25] ENGINE Serving on http://172.18.0.106:8765 Nov 23 04:55:26 localhost ceph-mon[293353]: [23/Nov/2025:09:55:25] ENGINE Bus STARTED Nov 23 04:55:26 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e42: np0005532584.naxwxy(active, since 3s), standbys: np0005532585.gzafiw Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command 
mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:55:26 localhost ceph-mgr[286671]: [cephadm INFO root] Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 04:55:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 
04:55:26 localhost ceph-mgr[286671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:55:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:55:26 localhost ceph-mgr[286671]: [cephadm INFO root] Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 04:55:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 23 04:55:26 localhost ceph-mgr[286671]: [cephadm INFO root] Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 04:55:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 23 04:55:26 localhost ceph-mgr[286671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 04:55:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 04:55:26 localhost ceph-mgr[286671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:55:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 04:55:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 04:55:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:55:26 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:55:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 
04:55:26 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:55:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:55:26 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:55:26 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:55:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0. Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.271535) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37 Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727271618, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 713, "num_deletes": 258, "total_data_size": 1353855, "memory_usage": 1369488, "flush_reason": "Manual Compaction"} Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727281479, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1315966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22307, "largest_seqno": 23019, "table_properties": {"data_size": 1312190, "index_size": 1503, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8763, "raw_average_key_size": 19, "raw_value_size": 1304214, "raw_average_value_size": 2829, "num_data_blocks": 62, "num_entries": 461, "num_filter_entries": 461, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891702, "oldest_key_time": 1763891702, "file_creation_time": 1763891727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}} Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 
9987 microseconds, and 4436 cpu microseconds. Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.281529) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1315966 bytes OK Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.281551) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.283449) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.283473) EVENT_LOG_v1 {"time_micros": 1763891727283464, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.283497) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1349974, prev total WAL file size 1349974, number of live WAL files 2. Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.284195) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353237' seq:72057594037927935, type:22 .. 
'6B760031373835' seq:0, type:0; will stop at (end) Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1285KB)], [36(17MB)] Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727284252, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 19895122, "oldest_snapshot_seqno": -1} Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 11970 keys, 18720546 bytes, temperature: kUnknown Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727374758, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 18720546, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18652493, "index_size": 37040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29957, "raw_key_size": 323438, "raw_average_key_size": 27, "raw_value_size": 18448500, "raw_average_value_size": 1541, "num_data_blocks": 1396, "num_entries": 11970, "num_filter_entries": 11970, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}} Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
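[Editor's note] The ceph-mon lines above carry RocksDB EVENT_LOG_v1 records as inline JSON (flush_started, table_file_creation, compaction_finished, table_file_deletion). The sketch below is an assumption-laden illustration, not Ceph or RocksDB tooling: it uses json.JSONDecoder.raw_decode, which parses a JSON object even when trailing log text follows it, to collect those payloads from merged log text and tally output sizes per event type.

    # Sketch: collect RocksDB EVENT_LOG_v1 payloads from ceph-mon log text.
    import json
    from collections import Counter

    def event_log_records(text: str):
        dec = json.JSONDecoder()
        marker = "EVENT_LOG_v1 "
        pos = text.find(marker)
        while pos != -1:
            obj, _ = dec.raw_decode(text, pos + len(marker))  # ignores trailing text
            yield obj
            pos = text.find(marker, pos + len(marker))

    # Example, assuming log_text holds the journal text above:
    # sizes = Counter()
    # for ev in event_log_records(log_text):
    #     sizes[ev["event"]] += ev.get("total_output_size", ev.get("file_size", 0))
    # print(sizes)
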
Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.375129) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 18720546 bytes Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.377087) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.6 rd, 206.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 17.7 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(29.3) write-amplify(14.2) OK, records in: 12517, records dropped: 547 output_compression: NoCompression Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.377122) EVENT_LOG_v1 {"time_micros": 1763891727377108, "job": 20, "event": "compaction_finished", "compaction_time_micros": 90613, "compaction_time_cpu_micros": 47829, "output_level": 6, "num_output_files": 1, "total_output_size": 18720546, "num_input_records": 12517, "num_output_records": 11970, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727377452, "job": 20, "event": "table_file_deletion", "file_number": 38} Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727380123, "job": 20, "event": "table_file_deletion", "file_number": 36} Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.284108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.380155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.380160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.380163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.380166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:27 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:27.380168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": 
"osd.2", "name": "osd_memory_target"} : dispatch Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 04:55:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:55:27 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:55:27 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:55:27 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:55:27 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:55:27 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:55:27 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:55:28 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:55:28 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:55:28 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : Standby manager daemon np0005532586.thmvqb started Nov 23 04:55:28 localhost ceph-mgr[286671]: mgr.server handle_open ignoring open from mgr.np0005532586.thmvqb 172.18.0.108:0/3834014831; not ready for session (expect reconnect) Nov 23 04:55:28 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:55:28 
localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:55:28 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:55:28 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:55:28 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 04:55:28 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:55:28 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 04:55:28 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 04:55:28 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 04:55:28 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 04:55:28 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf Nov 23 04:55:28 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf Nov 23 04:55:28 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf Nov 23 04:55:28 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:55:28 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:55:28 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf Nov 23 04:55:28 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:55:28 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:55:28 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:55:28 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:55:28 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e43: np0005532584.naxwxy(active, since 5s), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 04:55:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005532586.thmvqb", "id": "np0005532586.thmvqb"} v 0) Nov 23 04:55:28 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "mgr metadata", "who": "np0005532586.thmvqb", "id": "np0005532586.thmvqb"} : dispatch Nov 23 04:55:28 localhost ceph-mgr[286671]: [cephadm INFO cephadm.serve] Updating 
np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:55:28 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:55:29 localhost podman[307480]: 2025-11-23 09:55:29.111490584 +0000 UTC m=+0.092303478 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:55:29 localhost podman[307480]: 2025-11-23 09:55:29.12056304 +0000 UTC m=+0.101375914 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:55:29 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
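[Editor's note] On the osd_memory_target warnings recorded above: cephadm's autotuner tried to set 877246668 bytes (about 836.6 MiB, matching the "Adjusting ... to 836.6M" lines) on each host, and the mon rejected it because the value is below the option's minimum of 939524096 bytes (896 MiB), exactly as the WRN messages say. The snippet below is only a quick arithmetic check using the numbers taken from the log.

    # Quick check of the osd_memory_target warnings above (values from the log).
    proposed = 877_246_668   # bytes cephadm tried to set per OSD host
    minimum  = 939_524_096   # minimum reported in the error message

    print(f"proposed: {proposed / 2**20:.1f} MiB")   # ~836.6 MiB, matching "836.6M"
    print(f"minimum:  {minimum  / 2**20:.1f} MiB")   # 896.0 MiB
    print("below minimum:", proposed < minimum)      # True, hence the WRN lines
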
Nov 23 04:55:29 localhost podman[307479]: 2025-11-23 09:55:29.174983055 +0000 UTC m=+0.155068107 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118) Nov 23 04:55:29 localhost podman[307479]: 2025-11-23 09:55:29.216376823 +0000 UTC m=+0.196461865 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, 
container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 04:55:29 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:55:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:29 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:55:29 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:55:29 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 23 04:55:29 localhost ceph-mon[293353]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:55:29 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:29 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0. 
Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.679322) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40 Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729679379, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 363, "num_deletes": 251, "total_data_size": 428352, "memory_usage": 436120, "flush_reason": "Manual Compaction"} Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729683517, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 427045, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23020, "largest_seqno": 23382, "table_properties": {"data_size": 424623, "index_size": 533, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6885, "raw_average_key_size": 21, "raw_value_size": 419482, "raw_average_value_size": 1282, "num_data_blocks": 20, "num_entries": 327, "num_filter_entries": 327, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891727, "oldest_key_time": 1763891727, "file_creation_time": 1763891729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}} Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 4236 microseconds, and 1635 cpu microseconds. Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.683563) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 427045 bytes OK Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.683582) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.685425) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.685445) EVENT_LOG_v1 {"time_micros": 1763891729685439, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.685466) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 425836, prev total WAL file size 442014, number of live WAL files 2. Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.687603) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end) Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(417KB)], [39(17MB)] Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729687719, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 19147591, "oldest_snapshot_seqno": -1} Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:29 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev a5808113-9ec1-4116-8db5-705025bd3bf7 (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:55:29 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev a5808113-9ec1-4116-8db5-705025bd3bf7 (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:55:29 localhost ceph-mgr[286671]: [progress INFO root] Completed event a5808113-9ec1-4116-8db5-705025bd3bf7 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command 
mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 11778 keys, 16438862 bytes, temperature: kUnknown Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729769311, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 16438862, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16374222, "index_size": 34075, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29509, "raw_key_size": 320117, "raw_average_key_size": 27, "raw_value_size": 16175632, "raw_average_value_size": 1373, "num_data_blocks": 1268, "num_entries": 11778, "num_filter_entries": 11778, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}} Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
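[Editor's note] The handle_command entries above show the active mgr dispatching JSON mon commands such as {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}. The sketch below sends the same payload through the librados Python binding's mon_command(); the conffile and keyring paths are taken from earlier lines in this log but are still assumptions about the environment, and this is an illustration rather than what cephadm itself runs.

    # Sketch: dispatch the same mon_command seen above via the librados binding.
    import json
    import rados

    cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"], "format": "json"})

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(cmd, b"")   # returns (ret, outbuf, outs)
        destroyed = json.loads(outbuf) if ret == 0 else None
        print(ret, outs, destroyed)
    finally:
        cluster.shutdown()
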
Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.769615) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 16438862 bytes Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.771750) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.5 rd, 201.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 17.9 +0.0 blob) out(15.7 +0.0 blob), read-write-amplify(83.3) write-amplify(38.5) OK, records in: 12297, records dropped: 519 output_compression: NoCompression Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.771778) EVENT_LOG_v1 {"time_micros": 1763891729771766, "job": 22, "event": "compaction_finished", "compaction_time_micros": 81665, "compaction_time_cpu_micros": 46578, "output_level": 6, "num_output_files": 1, "total_output_size": 16438862, "num_input_records": 12297, "num_output_records": 11778, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729772011, "job": 22, "event": "table_file_deletion", "file_number": 41} Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729774475, "job": 22, "event": "table_file_deletion", "file_number": 39} Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.687551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.774508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.774514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.774517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.774520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:29 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:55:29.774523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 04:55:29 localhost 
ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:29 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 0b0c5cd7-e325-4c1e-90c4-61f4cdf71739 (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:55:29 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 0b0c5cd7-e325-4c1e-90c4-61f4cdf71739 (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:55:29 localhost ceph-mgr[286671]: [progress INFO root] Completed event 0b0c5cd7-e325-4c1e-90c4-61f4cdf71739 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 04:55:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 04:55:29 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 04:55:30 localhost ceph-mon[293353]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:55:30 localhost ceph-mon[293353]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring Nov 23 04:55:30 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:30 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:30 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:30 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:30 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:30 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:55:30 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s Nov 23 04:55:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s Nov 23 04:55:33 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:55:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 
handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 04:55:33 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:55:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Nov 23 04:55:36 localhost openstack_network_exporter[241732]: ERROR 09:55:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:55:36 localhost openstack_network_exporter[241732]: ERROR 09:55:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:55:36 localhost openstack_network_exporter[241732]: ERROR 09:55:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:55:36 localhost openstack_network_exporter[241732]: ERROR 09:55:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:55:36 localhost openstack_network_exporter[241732]: Nov 23 04:55:36 localhost openstack_network_exporter[241732]: ERROR 09:55:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:55:36 localhost openstack_network_exporter[241732]: Nov 23 04:55:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 23 04:55:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 23 04:55:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
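The [JOB 22] compaction summary above reports write-amplify(38.5) and read-write-amplify(83.3) next to "MB in(0.4, 17.9) out(15.7)". A minimal arithmetic sketch of how those ratios fall out of the logged byte counts, assuming RocksDB's usual definitions (write amplification = bytes written / bytes read from the compaction's start level; read-write amplification = (all bytes read + bytes written) / bytes read from the start level); the figures are copied from the log line, so the results only agree to within the 0.4 MB rounding:

# Rough check of the amplification ratios in the [JOB 22]
# compaction_finished summary, using the byte counts printed in the log.
MB = 1024 * 1024

bytes_written = 16438862            # total_output_size from the event log
read_start_level = 0.4 * MB         # "MB in(0.4, ...)": the L0 input file
read_output_level = 17.9 * MB       # "MB in(..., 17.9)": the existing L6 file

write_amplify = bytes_written / read_start_level
read_write_amplify = (read_start_level + read_output_level + bytes_written) / read_start_level

print(f"write-amplify      ~ {write_amplify:.1f}")       # ~39 (log: 38.5)
print(f"read-write-amplify ~ {read_write_amplify:.1f}")  # ~85 (log: 83.3)

The record counts (12297 in, 11778 out, 519 dropped) are independent of these ratios; the dropped entries are keys superseded between the two input files.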
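The openstack_network_exporter errors above ("no control socket files found for the ovs db server" / "for ovn-northd") mean its appctl calls cannot locate a daemon control socket. A minimal sketch that checks for *.ctl sockets under the host directories the exporter mounts (/var/run/openvswitch and /var/lib/openvswitch/ovn, per its volume list later in this log); the "<daemon>.<pid>.ctl" naming is the usual OVS/OVN convention and is an assumption here, not something stated in these messages:

import glob
import os

# Host-side directories backing the exporter's /run/openvswitch and /run/ovn
# mounts, taken from the container's volume list in this log.
for host_dir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
    ctl_files = sorted(glob.glob(os.path.join(host_dir, "*.ctl")))
    if ctl_files:
        for path in ctl_files:
            print(f"found control socket: {path}")
    else:
        # Matches the exporter's "no control socket files found" errors:
        # the daemon is not running, or writes its socket elsewhere.
        print(f"no *.ctl files under {host_dir}")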
Nov 23 04:55:39 localhost podman[307663]: 2025-11-23 09:55:39.900642971 +0000 UTC m=+0.079849328 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 04:55:39 localhost podman[307663]: 2025-11-23 09:55:39.931581301 +0000 UTC m=+0.110787608 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:55:39 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:55:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 23 04:55:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:55:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:55:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:55:46 localhost podman[307683]: 2025-11-23 09:55:46.888892792 +0000 UTC m=+0.069286518 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 04:55:46 localhost systemd[1]: tmp-crun.noGjZQ.mount: Deactivated successfully. 
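The podman health_status and exec_died records above share one shape: a timestamp, the event type, the 64-character container ID, then a parenthesised image/name/label list. A minimal parsing sketch written against exactly that shape (it pulls out name and health_status and deliberately skips the nested config_data dict; it is not a general podman event parser):

import re

# Matches the podman container-event lines seen above.
EVENT_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \+\d{4} UTC) "
    r"m=\+[\d.]+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) \((?P<attrs>.*)\)$"
)

def parse_event(line):
    m = EVENT_RE.match(line.strip())
    if m is None:
        return None
    # Good enough to pull out the short key=value attributes; the nested
    # config_data={...} dict is skipped rather than parsed.
    attrs = dict(
        part.split("=", 1)
        for part in m.group("attrs").split(", ")
        if "=" in part and "{" not in part
    )
    return {
        "time": m.group("ts"),
        "event": m.group("event"),
        "container_id": m.group("cid"),
        "name": attrs.get("name"),
        "health": attrs.get("health_status"),
    }

sample = ("2025-11-23 09:55:39.900642971 +0000 UTC m=+0.079849328 container "
          "health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 "
          "(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, "
          "name=ovn_metadata_agent, health_status=healthy)")
print(parse_event(sample))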
Nov 23 04:55:46 localhost podman[307683]: 2025-11-23 09:55:46.992298246 +0000 UTC m=+0.172691942 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller) Nov 23 04:55:47 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:55:47 localhost podman[307681]: 2025-11-23 09:55:46.969160702 +0000 UTC m=+0.153660102 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 23 04:55:47 localhost podman[307682]: 2025-11-23 09:55:46.998608298 +0000 UTC m=+0.180112657 container health_status 
8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:55:47 localhost podman[239764]: time="2025-11-23T09:55:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:55:47 localhost podman[307681]: 2025-11-23 09:55:47.103093715 +0000 UTC m=+0.287593105 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 23 04:55:47 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 04:55:47 localhost podman[307682]: 2025-11-23 09:55:47.132905281 +0000 UTC m=+0.314409650 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:55:47 localhost podman[239764]: @ - - [23/Nov/2025:09:55:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:55:47 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:55:47 localhost podman[239764]: @ - - [23/Nov/2025:09:55:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18226 "" "Go-http-client/1.1" Nov 23 04:55:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:55:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:55:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
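The "GET /v4.9.3/libpod/containers/json?..." entries above are a scraper polling podman's REST API; the socket path /run/podman/podman.sock appears later in the podman_exporter config. A minimal sketch of the same request using only the standard library; the socket path and API version are taken from the log, and the Names/State field names follow the libpod list-container schema (an assumption here):

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a unix-domain socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

# Same endpoint as the scrape in the log (podman system service, API v4.9.3).
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
containers = json.loads(resp.read())

for c in containers:
    # Names is a list of container names; State is e.g. "running" or "exited".
    print(c["Names"][0], c["State"])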
Nov 23 04:55:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:55:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:55:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:55:54 localhost podman[307746]: 2025-11-23 09:55:54.887951276 +0000 UTC m=+0.076326641 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, release=1755695350, version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter) Nov 23 04:55:54 localhost podman[307746]: 2025-11-23 09:55:54.929397167 +0000 UTC m=+0.117772532 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41) Nov 23 04:55:54 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:55:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:55:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:55:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:55:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:55:59 localhost podman[307766]: 2025-11-23 09:55:59.908097545 +0000 UTC m=+0.084207541 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:55:59 localhost podman[307766]: 2025-11-23 09:55:59.951299579 +0000 UTC m=+0.127409515 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base 
Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:55:59 localhost podman[307767]: 2025-11-23 09:55:59.959109006 +0000 UTC m=+0.131343214 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:55:59 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:55:59 localhost podman[307767]: 2025-11-23 09:55:59.970351707 +0000 UTC m=+0.142585965 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:55:59 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
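Three Prometheus-style exporters appear in these healthchecks: node_exporter on 9100, openstack_network_exporter on 9105 and podman_exporter on 9882 (the port mappings come from the 'ports' entries in the container configs above). A minimal liveness sketch that fetches each /metrics endpoint; /metrics is the conventional Prometheus path and is an assumption here, not something shown in the log:

import urllib.request

# host:port mappings taken from the container configs in this log.
EXPORTERS = {
    "node_exporter": "http://localhost:9100/metrics",
    "openstack_network_exporter": "http://localhost:9105/metrics",
    "podman_exporter": "http://localhost:9882/metrics",
}

for name, url in EXPORTERS.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status = resp.status
            body = resp.read().decode("utf-8", "replace")
        # Count non-comment exposition lines as a rough "is it alive" signal.
        samples = [line for line in body.splitlines() if line and not line.startswith("#")]
        print(f"{name}: HTTP {status}, {len(samples)} samples")
    except OSError as exc:
        print(f"{name}: scrape failed: {exc}")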
Nov 23 04:56:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:03 localhost nova_compute[280939]: 2025-11-23 09:56:03.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:56:03 localhost nova_compute[280939]: 2025-11-23 09:56:03.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:56:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:05 localhost nova_compute[280939]: 2025-11-23 09:56:05.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:56:05 localhost nova_compute[280939]: 2025-11-23 09:56:05.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:56:05 localhost nova_compute[280939]: 2025-11-23 09:56:05.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:56:05 localhost nova_compute[280939]: 2025-11-23 09:56:05.147 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:56:05 localhost nova_compute[280939]: 2025-11-23 09:56:05.148 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:56:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:06 localhost openstack_network_exporter[241732]: ERROR 09:56:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:56:06 localhost openstack_network_exporter[241732]: ERROR 09:56:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:56:06 localhost openstack_network_exporter[241732]: ERROR 09:56:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:56:06 localhost openstack_network_exporter[241732]: ERROR 09:56:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:56:06 localhost openstack_network_exporter[241732]: Nov 23 04:56:06 localhost openstack_network_exporter[241732]: ERROR 09:56:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:56:06 localhost openstack_network_exporter[241732]: Nov 23 04:56:07 localhost nova_compute[280939]: 2025-11-23 09:56:07.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:56:07 localhost nova_compute[280939]: 2025-11-23 09:56:07.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:56:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:08 localhost nova_compute[280939]: 2025-11-23 09:56:08.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.147 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.148 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.148 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.148 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.149 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:56:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:56:09 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1932412365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.561 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:56:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:09.738 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:56:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:09.739 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:56:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:09.739 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.758 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.760 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=12290MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", 
"numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.760 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.761 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.842 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.842 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:56:09 localhost nova_compute[280939]: 2025-11-23 09:56:09.872 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:56:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:56:10 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4143276001' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:56:10 localhost nova_compute[280939]: 2025-11-23 09:56:10.316 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:56:10 localhost nova_compute[280939]: 2025-11-23 09:56:10.322 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:56:10 localhost nova_compute[280939]: 2025-11-23 09:56:10.345 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:56:10 localhost nova_compute[280939]: 2025-11-23 09:56:10.347 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:56:10 localhost nova_compute[280939]: 2025-11-23 09:56:10.348 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:56:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
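nova-compute's resource audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" via oslo_concurrency.processutils, and the monitor logs the matching {"prefix": "df", "format": "json"} dispatch. A minimal sketch running the same command with the standard library and reading the cluster totals out of the JSON; the 'stats' keys follow the usual ceph df JSON output and are an assumption here:

import json
import subprocess

# Same command nova-compute runs during update_available_resource.
cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

df = json.loads(out)
stats = df["stats"]  # cluster-wide totals

gib = 1024 ** 3
print(f"total: {stats['total_bytes'] / gib:.1f} GiB")
print(f"used : {stats['total_used_bytes'] / gib:.1f} GiB")
print(f"avail: {stats['total_avail_bytes'] / gib:.1f} GiB")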
Nov 23 04:56:10 localhost podman[307851]: 2025-11-23 09:56:10.885253639 +0000 UTC m=+0.077055214 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:56:10 localhost podman[307851]: 2025-11-23 09:56:10.892277863 +0000 UTC m=+0.084079478 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:56:10 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:56:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:11 localhost nova_compute[280939]: 2025-11-23 09:56:11.344 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:56:12 localhost nova_compute[280939]: 2025-11-23 09:56:12.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:56:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:17 localhost podman[239764]: time="2025-11-23T09:56:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:56:17 localhost podman[239764]: @ - - [23/Nov/2025:09:56:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:56:17 localhost podman[239764]: @ - - [23/Nov/2025:09:56:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18221 "" "Go-http-client/1.1" Nov 23 04:56:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:56:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:56:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:56:17 localhost systemd[1]: tmp-crun.JMGKuo.mount: Deactivated successfully. 
Nov 23 04:56:17 localhost podman[307869]: 2025-11-23 09:56:17.901076018 +0000 UTC m=+0.080313803 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3) Nov 23 04:56:17 localhost podman[307869]: 2025-11-23 09:56:17.914832716 +0000 UTC m=+0.094070491 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:56:17 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:56:17 localhost podman[307871]: 2025-11-23 09:56:17.963152605 +0000 UTC m=+0.133195280 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:56:18 localhost podman[307871]: 2025-11-23 09:56:18.002340226 +0000 UTC m=+0.172382931 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller) Nov 23 04:56:18 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:56:18 localhost podman[307870]: 2025-11-23 09:56:18.00477585 +0000 UTC m=+0.179826117 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:56:18 localhost podman[307870]: 2025-11-23 09:56:18.088364052 +0000 UTC m=+0.263414299 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:56:18 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 04:56:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_09:56:23 Nov 23 04:56:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 04:56:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 04:56:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['images', '.mgr', 'manila_data', 'vms', 'volumes', 'backups', 'manila_metadata'] Nov 23 04:56:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:56:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16) Nov 23 04:56:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 23 04:56:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:56:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:56:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:56:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:56:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:56:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 04:56:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:56:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:56:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:56:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:56:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 04:56:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:56:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:56:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:56:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:56:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:56:25 localhost podman[307934]: 2025-11-23 09:56:25.887647342 +0000 UTC m=+0.078033463 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter) Nov 23 04:56:25 localhost podman[307934]: 2025-11-23 09:56:25.92340291 +0000 UTC m=+0.113789031 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6) Nov 23 04:56:25 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:56:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:56:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:56:30 localhost podman[307954]: 2025-11-23 09:56:30.317601096 +0000 UTC m=+0.090552074 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:56:30 
localhost podman[307954]: 2025-11-23 09:56:30.330455677 +0000 UTC m=+0.103406615 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:56:30 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
Nov 23 04:56:30 localhost podman[307955]: 2025-11-23 09:56:30.415705559 +0000 UTC m=+0.185874912 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:56:30 localhost podman[307955]: 2025-11-23 09:56:30.454430047 +0000 UTC m=+0.224599390 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:56:30 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:56:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 04:56:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 04:56:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 04:56:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 04:56:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 04:56:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 04:56:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 04:56:31 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 04:56:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 04:56:31 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:56:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:56:31 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:31 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev b773a8bc-0fd8-41ad-a840-662aff7b0482 (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:56:31 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev b773a8bc-0fd8-41ad-a840-662aff7b0482 (Updating node-proxy deployment (+3 
-> 3)) Nov 23 04:56:31 localhost ceph-mgr[286671]: [progress INFO root] Completed event b773a8bc-0fd8-41ad-a840-662aff7b0482 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 04:56:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 04:56:31 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 04:56:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:56:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:33 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:56:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 04:56:33 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:56:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:36 localhost openstack_network_exporter[241732]: ERROR 09:56:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:56:36 localhost openstack_network_exporter[241732]: ERROR 09:56:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:56:36 localhost openstack_network_exporter[241732]: Nov 23 04:56:36 localhost openstack_network_exporter[241732]: ERROR 09:56:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:56:36 localhost openstack_network_exporter[241732]: ERROR 09:56:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:56:36 localhost openstack_network_exporter[241732]: ERROR 09:56:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing 
datapath Nov 23 04:56:36 localhost openstack_network_exporter[241732]: Nov 23 04:56:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:56:41 localhost podman[308137]: 2025-11-23 09:56:41.899325723 +0000 UTC m=+0.083625524 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 04:56:41 localhost podman[308137]: 2025-11-23 09:56:41.929546241 +0000 UTC m=+0.113846082 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 04:56:41 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:56:42 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:42.106 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:56:42 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:42.107 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 04:56:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:44 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:44.401 262301 INFO oslo.privsep.daemon [None req-52b588b7-88de-408c-b6b2-e3172ba66264 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp_8qjwed5/privsep.sock']#033[00m Nov 23 04:56:44 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:44.994 262301 INFO oslo.privsep.daemon [None req-52b588b7-88de-408c-b6b2-e3172ba66264 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Nov 23 04:56:44 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:44.890 308160 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 23 04:56:44 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:44.894 308160 INFO 
oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 23 04:56:44 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:44.897 308160 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Nov 23 04:56:44 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:44.897 308160 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308160#033[00m Nov 23 04:56:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:45 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:45.490 262301 INFO oslo.privsep.daemon [None req-52b588b7-88de-408c-b6b2-e3172ba66264 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpam4rruwt/privsep.sock']#033[00m Nov 23 04:56:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:46.140 262301 INFO oslo.privsep.daemon [None req-52b588b7-88de-408c-b6b2-e3172ba66264 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Nov 23 04:56:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:46.040 308169 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 23 04:56:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:46.045 308169 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 23 04:56:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:46.049 308169 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Nov 23 04:56:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:46.049 308169 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308169#033[00m Nov 23 04:56:47 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:47.037 262301 INFO oslo.privsep.daemon [None req-52b588b7-88de-408c-b6b2-e3172ba66264 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpbyxitkyd/privsep.sock']#033[00m Nov 23 04:56:47 localhost podman[239764]: time="2025-11-23T09:56:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:56:47 localhost podman[239764]: @ - - [23/Nov/2025:09:56:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152673 "" "Go-http-client/1.1" Nov 23 04:56:47 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:47.109 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:56:47 localhost podman[239764]: @ - - [23/Nov/2025:09:56:47 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18236 "" "Go-http-client/1.1" Nov 23 04:56:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 23 04:56:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e87 do_prune osdmap full prune enabled Nov 23 04:56:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e88 e88: 6 total, 6 up, 6 in Nov 23 04:56:47 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e88: 6 total, 6 up, 6 in Nov 23 04:56:47 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:47.600 262301 INFO oslo.privsep.daemon [None req-52b588b7-88de-408c-b6b2-e3172ba66264 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Nov 23 04:56:47 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:47.492 308181 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 23 04:56:47 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:47.497 308181 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 23 04:56:47 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:47.500 308181 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Nov 23 04:56:47 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:47.501 308181 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308181#033[00m Nov 23 04:56:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:56:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:56:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 04:56:48 localhost podman[308191]: 2025-11-23 09:56:48.895583567 +0000 UTC m=+0.076012592 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 23 04:56:48 localhost podman[308191]: 2025-11-23 09:56:48.909546531 +0000 UTC m=+0.089975536 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:56:48 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:56:49 localhost podman[308193]: 2025-11-23 09:56:49.001673622 +0000 UTC m=+0.178849989 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Nov 23 04:56:49 localhost podman[308192]: 2025-11-23 09:56:49.051860688 +0000 UTC m=+0.228858179 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:56:49 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:49.057 262301 INFO neutron.agent.linux.ip_lib [None req-52b588b7-88de-408c-b6b2-e3172ba66264 - - - - - -] Device tap66b03f7f-a1 cannot be used as it has no MAC address#033[00m Nov 23 04:56:49 localhost podman[308193]: 
2025-11-23 09:56:49.078332413 +0000 UTC m=+0.255508760 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:56:49 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:56:49 localhost kernel: device tap66b03f7f-a1 entered promiscuous mode Nov 23 04:56:49 localhost ovn_controller[153771]: 2025-11-23T09:56:49Z|00025|binding|INFO|Claiming lport 66b03f7f-a1c8-402a-90fa-c809e4abff5b for this chassis. Nov 23 04:56:49 localhost NetworkManager[5966]: [1763891809.1133] manager: (tap66b03f7f-a1): new Generic device (/org/freedesktop/NetworkManager/Devices/13) Nov 23 04:56:49 localhost ovn_controller[153771]: 2025-11-23T09:56:49Z|00026|binding|INFO|66b03f7f-a1c8-402a-90fa-c809e4abff5b: Claiming unknown Nov 23 04:56:49 localhost systemd-udevd[308263]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.124 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.199.3/24', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-b5a85db0-5f88-4429-ad47-37b7f195cac4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5a85db0-5f88-4429-ad47-37b7f195cac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8fb5c988e2b4995853c12d0c4c7bb29', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79762df0-8622-4b35-b36f-1536e0cb7c2c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=66b03f7f-a1c8-402a-90fa-c809e4abff5b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.126 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 66b03f7f-a1c8-402a-90fa-c809e4abff5b in datapath b5a85db0-5f88-4429-ad47-37b7f195cac4 bound to our chassis#033[00m Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.129 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port 18d88e3d-4e37-4d33-ac27-dce16f0eea27 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.129 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5a85db0-5f88-4429-ad47-37b7f195cac4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.132 159415 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmptdmavuvs/privsep.sock']#033[00m Nov 23 04:56:49 localhost podman[308192]: 2025-11-23 09:56:49.134325345 +0000 UTC m=+0.311322786 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', 
'--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:56:49 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:56:49 localhost ovn_controller[153771]: 2025-11-23T09:56:49Z|00027|binding|INFO|Setting lport 66b03f7f-a1c8-402a-90fa-c809e4abff5b ovn-installed in OVS Nov 23 04:56:49 localhost ovn_controller[153771]: 2025-11-23T09:56:49Z|00028|binding|INFO|Setting lport 66b03f7f-a1c8-402a-90fa-c809e4abff5b up in Southbound Nov 23 04:56:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 129 MiB data, 627 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 2.5 MiB/s wr, 27 op/s Nov 23 04:56:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e88 do_prune osdmap full prune enabled Nov 23 04:56:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 e89: 6 total, 6 up, 6 in Nov 23 04:56:49 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.791 159415 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.792 159415 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmptdmavuvs/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.686 308301 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.690 308301 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.694 308301 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.694 308301 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308301#033[00m Nov 23 04:56:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:49.795 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[a50c7af9-0dc6-4426-bfe3-a00980007d76]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:56:50 localhost podman[308326]: Nov 23 04:56:50 localhost podman[308326]: 2025-11-23 09:56:50.046262754 +0000 UTC m=+0.086940525 container create b909abaca854fa319a1ef2da45a64e75313723d7d22eb50352068dc63f5e7975 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-b5a85db0-5f88-4429-ad47-37b7f195cac4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3) Nov 23 04:56:50 localhost systemd[1]: Started libpod-conmon-b909abaca854fa319a1ef2da45a64e75313723d7d22eb50352068dc63f5e7975.scope. Nov 23 04:56:50 localhost podman[308326]: 2025-11-23 09:56:50.002214874 +0000 UTC m=+0.042892685 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 04:56:50 localhost systemd[1]: Started libcrun container. Nov 23 04:56:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c418bfd060ede34f184f24fa78ceb22efbf522fc4f50e56bfc085dbfb5e542a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:56:50 localhost podman[308326]: 2025-11-23 09:56:50.124641187 +0000 UTC m=+0.165318968 container init b909abaca854fa319a1ef2da45a64e75313723d7d22eb50352068dc63f5e7975 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b5a85db0-5f88-4429-ad47-37b7f195cac4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:56:50 localhost podman[308326]: 2025-11-23 09:56:50.134333891 +0000 UTC m=+0.175011662 container start b909abaca854fa319a1ef2da45a64e75313723d7d22eb50352068dc63f5e7975 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b5a85db0-5f88-4429-ad47-37b7f195cac4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 04:56:50 localhost dnsmasq[308345]: started, version 2.85 cachesize 150 Nov 23 04:56:50 localhost dnsmasq[308345]: DNS service limited to local subnets Nov 23 04:56:50 localhost dnsmasq[308345]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 04:56:50 localhost dnsmasq[308345]: warning: no upstream servers configured Nov 23 04:56:50 localhost dnsmasq-dhcp[308345]: DHCP, static leases only on 192.168.199.0, lease time 1d Nov 23 04:56:50 localhost dnsmasq[308345]: read /var/lib/neutron/dhcp/b5a85db0-5f88-4429-ad47-37b7f195cac4/addn_hosts - 0 addresses Nov 23 04:56:50 localhost dnsmasq-dhcp[308345]: read /var/lib/neutron/dhcp/b5a85db0-5f88-4429-ad47-37b7f195cac4/host Nov 23 04:56:50 localhost dnsmasq-dhcp[308345]: read /var/lib/neutron/dhcp/b5a85db0-5f88-4429-ad47-37b7f195cac4/opts Nov 23 04:56:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:50.215 308301 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:56:50 
localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:50.215 308301 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:56:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:50.215 308301 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:56:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:56:50.308 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9f39ea-4320-4a57-88d8-d50375289a4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:56:50 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:56:50.622 262301 INFO neutron.agent.dhcp.agent [None req-68f3a399-2325-4d86-9582-f85ac85945d7 - - - - - -] DHCP configuration for ports {'4de3d25b-ed93-472e-b3c5-4959e036ba57'} is completed#033[00m Nov 23 04:56:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 145 MiB data, 660 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s Nov 23 04:56:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 145 MiB data, 660 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s Nov 23 04:56:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:56:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:56:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:56:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Nov 23 04:56:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 04:56:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:56:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Nov 23 04:56:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 04:56:54 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e44: np0005532584.naxwxy(active, since 91s), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 04:56:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v51: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s Nov 23 04:56:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:56:56 localhost systemd[1]: tmp-crun.85W13M.mount: Deactivated successfully. 
Nov 23 04:56:56 localhost podman[308346]: 2025-11-23 09:56:56.89854102 +0000 UTC m=+0.082830350 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, container_name=openstack_network_exporter) Nov 23 04:56:56 localhost podman[308346]: 2025-11-23 09:56:56.912391011 +0000 UTC m=+0.096680341 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:56:56 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:56:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v52: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 4.2 MiB/s wr, 39 op/s Nov 23 04:56:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:56:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 5.7 KiB/s rd, 1.6 MiB/s wr, 10 op/s Nov 23 04:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:57:00 localhost podman[308367]: 2025-11-23 09:57:00.905859904 +0000 UTC m=+0.090257205 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 04:57:00 localhost podman[308368]: 2025-11-23 09:57:00.946952664 +0000 UTC m=+0.128977473 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 04:57:00 localhost podman[308367]: 2025-11-23 09:57:00.970687315 +0000 UTC m=+0.155084646 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:57:00 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:57:01 localhost podman[308368]: 2025-11-23 09:57:01.027467492 +0000 UTC m=+0.209492301 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:57:01 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:57:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v54: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 2.8 KiB/s rd, 698 B/s wr, 4 op/s Nov 23 04:57:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:03 localhost nova_compute[280939]: 2025-11-23 09:57:03.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:57:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v55: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 682 B/s wr, 4 op/s Nov 23 04:57:03 localhost sshd[308407]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:57:05 localhost nova_compute[280939]: 2025-11-23 09:57:05.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:57:05 localhost nova_compute[280939]: 2025-11-23 09:57:05.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:57:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v56: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 682 B/s wr, 4 op/s Nov 23 04:57:06 localhost openstack_network_exporter[241732]: ERROR 09:57:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:57:06 localhost openstack_network_exporter[241732]: ERROR 09:57:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:57:06 localhost openstack_network_exporter[241732]: ERROR 09:57:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:57:06 localhost openstack_network_exporter[241732]: ERROR 09:57:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:57:06 localhost openstack_network_exporter[241732]: Nov 23 04:57:06 localhost openstack_network_exporter[241732]: ERROR 09:57:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:57:06 localhost openstack_network_exporter[241732]: Nov 23 04:57:07 localhost nova_compute[280939]: 2025-11-23 09:57:07.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:57:07 localhost nova_compute[280939]: 2025-11-23 09:57:07.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:57:07 localhost nova_compute[280939]: 2025-11-23 09:57:07.134 280943 DEBUG nova.compute.manager [None 
req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:57:07 localhost nova_compute[280939]: 2025-11-23 09:57:07.149 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:57:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:09 localhost nova_compute[280939]: 2025-11-23 09:57:09.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:57:09 localhost nova_compute[280939]: 2025-11-23 09:57:09.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:57:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v58: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:09.739 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:57:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:09.739 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:57:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:09.740 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:57:10 localhost nova_compute[280939]: 2025-11-23 09:57:10.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.152 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.153 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.153 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.153 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.154 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:57:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:57:11 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/48797289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.692 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.848 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.849 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11929MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.849 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.849 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.913 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.914 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:57:11 localhost nova_compute[280939]: 2025-11-23 09:57:11.950 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:57:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:57:12 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3901111112' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:57:12 localhost nova_compute[280939]: 2025-11-23 09:57:12.361 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:57:12 localhost nova_compute[280939]: 2025-11-23 09:57:12.367 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:57:12 localhost nova_compute[280939]: 2025-11-23 09:57:12.383 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:57:12 localhost nova_compute[280939]: 2025-11-23 09:57:12.386 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:57:12 localhost nova_compute[280939]: 2025-11-23 09:57:12.386 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.583 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.586 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:57:12.586 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:57:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 04:57:12 localhost podman[308452]: 2025-11-23 09:57:12.902906463 +0000 UTC m=+0.081745430 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:57:12 localhost podman[308452]: 2025-11-23 09:57:12.9324613 +0000 UTC m=+0.111300267 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118) Nov 23 04:57:12 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:57:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:13 localhost nova_compute[280939]: 2025-11-23 09:57:13.382 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:57:14 localhost nova_compute[280939]: 2025-11-23 09:57:14.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:57:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:16 localhost nova_compute[280939]: 2025-11-23 09:57:16.150 280943 DEBUG oslo_concurrency.processutils [None req-c59afccb-0433-451f-bf4e-2468817336dc 472d3d820207444ab349b892f7b3ca99 3ea2e7c182ae47f6964737a8d585a4e8 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:57:16 localhost nova_compute[280939]: 2025-11-23 09:57:16.167 280943 DEBUG oslo_concurrency.processutils [None req-c59afccb-0433-451f-bf4e-2468817336dc 472d3d820207444ab349b892f7b3ca99 3ea2e7c182ae47f6964737a8d585a4e8 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:57:17 localhost podman[239764]: time="2025-11-23T09:57:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:57:17 localhost podman[239764]: @ - - [23/Nov/2025:09:57:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 04:57:17 localhost podman[239764]: @ - - [23/Nov/2025:09:57:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18702 "" "Go-http-client/1.1" Nov 23 04:57:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:19 localhost ovn_controller[153771]: 2025-11-23T09:57:19Z|00029|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Nov 23 04:57:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. 
Nov 23 04:57:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:57:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:57:19 localhost podman[308472]: 2025-11-23 09:57:19.898442442 +0000 UTC m=+0.085711523 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:57:19 localhost podman[308472]: 2025-11-23 09:57:19.915425094 +0000 UTC m=+0.102694215 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd) Nov 23 04:57:19 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:57:20 localhost podman[308473]: 2025-11-23 09:57:20.00193814 +0000 UTC m=+0.185657191 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:57:20 localhost podman[308473]: 2025-11-23 09:57:20.015368303 +0000 UTC m=+0.199087344 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:57:20 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:57:20 localhost podman[308474]: 2025-11-23 09:57:20.103439476 +0000 UTC m=+0.283930848 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 04:57:20 localhost podman[308474]: 2025-11-23 09:57:20.179402058 +0000 UTC m=+0.359893450 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 23 04:57:20 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:57:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_09:57:23 Nov 23 04:57:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 04:57:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 04:57:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['manila_metadata', 'backups', 'images', 'manila_data', 'volumes', '.mgr', 'vms'] Nov 23 04:57:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 04:57:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:57:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Nov 23 04:57:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:57:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:57:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 23 04:57:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:57:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:57:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:57:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 04:57:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:57:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:57:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:57:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:57:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 04:57:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:57:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:57:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:57:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:57:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:57:27 localhost podman[308537]: 2025-11-23 09:57:27.887073534 +0000 UTC m=+0.076226863 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Nov 23 04:57:27 localhost podman[308537]: 2025-11-23 09:57:27.92441852 +0000 UTC m=+0.113571839 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:57:27 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:57:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:57:31 localhost podman[308557]: 2025-11-23 09:57:31.891134368 +0000 UTC m=+0.074745676 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:57:31 localhost podman[308557]: 2025-11-23 09:57:31.927465694 +0000 UTC m=+0.111077002 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:57:31 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:57:31 localhost podman[308556]: 2025-11-23 09:57:31.954592197 +0000 UTC m=+0.141060222 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:57:31 localhost podman[308556]: 2025-11-23 09:57:31.96740183 +0000 UTC m=+0.153869845 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 23 04:57:31 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:57:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:32.659 262301 INFO neutron.agent.linux.ip_lib [None req-bcde0f9f-e0e0-4564-8edd-0bc6a728f094 - - - - - -] Device tapa46670eb-74 cannot be used as it has no MAC address#033[00m Nov 23 04:57:32 localhost kernel: device tapa46670eb-74 entered promiscuous mode Nov 23 04:57:32 localhost NetworkManager[5966]: [1763891852.6870] manager: (tapa46670eb-74): new Generic device (/org/freedesktop/NetworkManager/Devices/14) Nov 23 04:57:32 localhost ovn_controller[153771]: 2025-11-23T09:57:32Z|00030|binding|INFO|Claiming lport a46670eb-74db-4098-8d00-3a08a57da283 for this chassis. Nov 23 04:57:32 localhost ovn_controller[153771]: 2025-11-23T09:57:32Z|00031|binding|INFO|a46670eb-74db-4098-8d00-3a08a57da283: Claiming unknown Nov 23 04:57:32 localhost systemd-udevd[308678]: Network interface NamePolicy= disabled on kernel command line. Nov 23 04:57:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:32.702 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-81348c6d-951a-4399-8703-476056b57fe9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81348c6d-951a-4399-8703-476056b57fe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1897b64f-0c37-45be-8353-f858f64309cd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a46670eb-74db-4098-8d00-3a08a57da283) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:57:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:32.704 159415 INFO neutron.agent.ovn.metadata.agent [-] Port a46670eb-74db-4098-8d00-3a08a57da283 in datapath 81348c6d-951a-4399-8703-476056b57fe9 bound to our chassis#033[00m Nov 23 04:57:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:32.705 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 81348c6d-951a-4399-8703-476056b57fe9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 04:57:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:32.708 308301 DEBUG oslo.privsep.daemon [-] 
privsep: reply[1aa13e4e-7f5e-4ef5-bf68-034adf43e3ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:57:32 localhost ovn_controller[153771]: 2025-11-23T09:57:32Z|00032|binding|INFO|Setting lport a46670eb-74db-4098-8d00-3a08a57da283 ovn-installed in OVS Nov 23 04:57:32 localhost ovn_controller[153771]: 2025-11-23T09:57:32Z|00033|binding|INFO|Setting lport a46670eb-74db-4098-8d00-3a08a57da283 up in Southbound Nov 23 04:57:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 04:57:32 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 04:57:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 04:57:32 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:57:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:57:32 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:57:32 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev e1f02c6b-ca71-42e4-966f-46269a3e71ea (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:57:32 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev e1f02c6b-ca71-42e4-966f-46269a3e71ea (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:57:32 localhost ceph-mgr[286671]: [progress INFO root] Completed event e1f02c6b-ca71-42e4-966f-46269a3e71ea (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 04:57:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 04:57:32 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 04:57:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:33 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:57:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 04:57:33 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:57:33 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:57:33 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:57:33 localhost podman[308751]: Nov 23 04:57:33 localhost podman[308751]: 2025-11-23 09:57:33.646911885 +0000 UTC m=+0.083068981 container create 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:57:33 localhost systemd[1]: Started libpod-conmon-8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447.scope. Nov 23 04:57:33 localhost podman[308751]: 2025-11-23 09:57:33.608776154 +0000 UTC m=+0.044933220 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 04:57:33 localhost systemd[1]: Started libcrun container. Nov 23 04:57:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e148735a585f418c075140eb26ac8b56c0b9707e535652ba24a9b590724b722/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:57:33 localhost podman[308751]: 2025-11-23 09:57:33.722257209 +0000 UTC m=+0.158414315 container init 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 04:57:33 localhost podman[308751]: 2025-11-23 09:57:33.732898905 +0000 UTC m=+0.169056001 container start 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:57:33 localhost dnsmasq[308770]: started, version 2.85 cachesize 150 Nov 23 04:57:33 localhost dnsmasq[308770]: DNS service limited to local subnets Nov 23 04:57:33 localhost dnsmasq[308770]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 04:57:33 localhost dnsmasq[308770]: warning: no upstream servers configured Nov 23 04:57:33 localhost dnsmasq-dhcp[308770]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 04:57:33 localhost dnsmasq[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/addn_hosts - 0 addresses Nov 23 04:57:33 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/host Nov 23 04:57:33 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/opts Nov 23 04:57:33 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:33.882 262301 INFO neutron.agent.dhcp.agent [None req-f220949f-b7ea-44bc-be7a-c57e1f05814f - - - - - -] DHCP configuration for ports 
{'bb526e17-a505-43fd-a1af-511960f787ee'} is completed#033[00m Nov 23 04:57:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:57:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:35 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:35.319 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:35Z, description=, device_id=b378c727-9d77-4582-a9ca-944830efc847, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0dc2e7d1-6618-4cc2-89f9-62372076e927, ip_allocation=immediate, mac_address=fa:16:3e:6c:0f:72, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:57:31Z, description=, dns_domain=, id=81348c6d-951a-4399-8703-476056b57fe9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1707444454-network, port_security_enabled=True, project_id=a2148c18d8f24a6db12dc22c787e8b2e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10752, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=258, status=ACTIVE, subnets=['5ca4204f-7ec4-4ef6-aad3-d2fb61424a58'], tags=[], tenant_id=a2148c18d8f24a6db12dc22c787e8b2e, updated_at=2025-11-23T09:57:31Z, vlan_transparent=None, network_id=81348c6d-951a-4399-8703-476056b57fe9, port_security_enabled=False, project_id=a2148c18d8f24a6db12dc22c787e8b2e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=277, status=DOWN, tags=[], tenant_id=a2148c18d8f24a6db12dc22c787e8b2e, updated_at=2025-11-23T09:57:35Z on network 81348c6d-951a-4399-8703-476056b57fe9#033[00m Nov 23 04:57:35 localhost dnsmasq[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/addn_hosts - 1 addresses Nov 23 04:57:35 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/host Nov 23 04:57:35 localhost podman[308787]: 2025-11-23 09:57:35.53682888 +0000 UTC m=+0.056962330 container kill 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Nov 23 04:57:35 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/opts Nov 23 04:57:35 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:35.767 262301 INFO neutron.agent.dhcp.agent [None req-6e9ad1a6-6c96-4bab-b9f7-c88fbf48abcf - - - - - -] DHCP configuration for ports {'0dc2e7d1-6618-4cc2-89f9-62372076e927'} is completed#033[00m Nov 23 04:57:36 localhost openstack_network_exporter[241732]: ERROR 09:57:36 appctl.go:131: Failed to prepare call 
to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:57:36 localhost openstack_network_exporter[241732]: ERROR 09:57:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:57:36 localhost openstack_network_exporter[241732]: Nov 23 04:57:36 localhost openstack_network_exporter[241732]: ERROR 09:57:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:57:36 localhost openstack_network_exporter[241732]: ERROR 09:57:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:57:36 localhost openstack_network_exporter[241732]: ERROR 09:57:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:57:36 localhost openstack_network_exporter[241732]: Nov 23 04:57:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:36.790 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:35Z, description=, device_id=b378c727-9d77-4582-a9ca-944830efc847, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0dc2e7d1-6618-4cc2-89f9-62372076e927, ip_allocation=immediate, mac_address=fa:16:3e:6c:0f:72, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:57:31Z, description=, dns_domain=, id=81348c6d-951a-4399-8703-476056b57fe9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1707444454-network, port_security_enabled=True, project_id=a2148c18d8f24a6db12dc22c787e8b2e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10752, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=258, status=ACTIVE, subnets=['5ca4204f-7ec4-4ef6-aad3-d2fb61424a58'], tags=[], tenant_id=a2148c18d8f24a6db12dc22c787e8b2e, updated_at=2025-11-23T09:57:31Z, vlan_transparent=None, network_id=81348c6d-951a-4399-8703-476056b57fe9, port_security_enabled=False, project_id=a2148c18d8f24a6db12dc22c787e8b2e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=277, status=DOWN, tags=[], tenant_id=a2148c18d8f24a6db12dc22c787e8b2e, updated_at=2025-11-23T09:57:35Z on network 81348c6d-951a-4399-8703-476056b57fe9#033[00m Nov 23 04:57:37 localhost dnsmasq[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/addn_hosts - 1 addresses Nov 23 04:57:37 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/host Nov 23 04:57:37 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/opts Nov 23 04:57:37 localhost podman[308827]: 2025-11-23 09:57:37.003828601 +0000 UTC m=+0.056198987 container kill 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:57:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:37.373 262301 INFO neutron.agent.dhcp.agent [None req-5211227e-df65-4981-99bb-8c6609de6257 - - - - - -] DHCP configuration for ports {'0dc2e7d1-6618-4cc2-89f9-62372076e927'} is completed#033[00m Nov 23 04:57:38 localhost ovn_controller[153771]: 2025-11-23T09:57:38Z|00034|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 04:57:38 localhost ovn_controller[153771]: 2025-11-23T09:57:38Z|00035|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0 Nov 23 04:57:38 localhost ovn_controller[153771]: 2025-11-23T09:57:38Z|00036|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 04:57:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:42 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:42.770 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:57:42 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:42.772 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 04:57:43 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:43.001 262301 INFO neutron.agent.linux.ip_lib [None req-ed48650b-4b6a-45ba-94e2-4122ec9dcacf - - - - - -] Device tapacca5347-96 cannot be used as it has no MAC address#033[00m Nov 23 04:57:43 localhost kernel: device tapacca5347-96 entered promiscuous mode Nov 23 04:57:43 localhost NetworkManager[5966]: [1763891863.0315] manager: (tapacca5347-96): new Generic device (/org/freedesktop/NetworkManager/Devices/15) Nov 23 04:57:43 localhost ovn_controller[153771]: 2025-11-23T09:57:43Z|00037|binding|INFO|Claiming lport acca5347-96e1-4029-9ae5-32051ef01ae8 for this chassis. Nov 23 04:57:43 localhost ovn_controller[153771]: 2025-11-23T09:57:43Z|00038|binding|INFO|acca5347-96e1-4029-9ae5-32051ef01ae8: Claiming unknown Nov 23 04:57:43 localhost systemd-udevd[308861]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 04:57:43 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:43.054 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-549f38a9-abf8-434a-9d69-4d818ecbd4f9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-549f38a9-abf8-434a-9d69-4d818ecbd4f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e0eb6249a0548c0ad772871741f0b5d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f5efb98-6ecb-4dff-9988-46a21664bf5f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=acca5347-96e1-4029-9ae5-32051ef01ae8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:57:43 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:43.056 159415 INFO neutron.agent.ovn.metadata.agent [-] Port acca5347-96e1-4029-9ae5-32051ef01ae8 in datapath 549f38a9-abf8-434a-9d69-4d818ecbd4f9 bound to our chassis#033[00m Nov 23 04:57:43 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:43.057 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 549f38a9-abf8-434a-9d69-4d818ecbd4f9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 04:57:43 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:43.058 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[e14cdb21-1ec9-4de9-a85e-d9ac412e369d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:57:43 localhost journal[229336]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Nov 23 04:57:43 localhost journal[229336]: hostname: np0005532584.localdomain Nov 23 04:57:43 localhost journal[229336]: ethtool ioctl error on tapacca5347-96: No such device Nov 23 04:57:43 localhost ovn_controller[153771]: 2025-11-23T09:57:43Z|00039|binding|INFO|Setting lport acca5347-96e1-4029-9ae5-32051ef01ae8 ovn-installed in OVS Nov 23 04:57:43 localhost journal[229336]: ethtool ioctl error on tapacca5347-96: No such device Nov 23 04:57:43 localhost ovn_controller[153771]: 2025-11-23T09:57:43Z|00040|binding|INFO|Setting lport acca5347-96e1-4029-9ae5-32051ef01ae8 up in Southbound Nov 23 04:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 04:57:43 localhost journal[229336]: ethtool ioctl error on tapacca5347-96: No such device Nov 23 04:57:43 localhost journal[229336]: ethtool ioctl error on tapacca5347-96: No such device Nov 23 04:57:43 localhost journal[229336]: ethtool ioctl error on tapacca5347-96: No such device Nov 23 04:57:43 localhost journal[229336]: ethtool ioctl error on tapacca5347-96: No such device Nov 23 04:57:43 localhost journal[229336]: ethtool ioctl error on tapacca5347-96: No such device Nov 23 04:57:43 localhost journal[229336]: ethtool ioctl error on tapacca5347-96: No such device Nov 23 04:57:43 localhost podman[308870]: 2025-11-23 09:57:43.178109607 +0000 UTC m=+0.096775163 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Nov 23 04:57:43 localhost podman[308870]: 2025-11-23 09:57:43.206809008 +0000 UTC m=+0.125474514 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:57:43 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:57:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:43 localhost podman[308950]: Nov 23 04:57:43 localhost podman[308950]: 2025-11-23 09:57:43.975920382 +0000 UTC m=+0.064596444 container create a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:57:44 localhost systemd[1]: Started libpod-conmon-a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9.scope. Nov 23 04:57:44 localhost systemd[1]: Started libcrun container. 
Nov 23 04:57:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e89bb96670cdb05f6dfc2b3bccd242d493787fe55d0247dbc7ec6db9780e52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:57:44 localhost podman[308950]: 2025-11-23 09:57:44.043484996 +0000 UTC m=+0.132161078 container init a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:57:44 localhost podman[308950]: 2025-11-23 09:57:43.944196507 +0000 UTC m=+0.032872599 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 04:57:44 localhost podman[308950]: 2025-11-23 09:57:44.052967717 +0000 UTC m=+0.141643809 container start a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:57:44 localhost dnsmasq[308969]: started, version 2.85 cachesize 150 Nov 23 04:57:44 localhost dnsmasq[308969]: DNS service limited to local subnets Nov 23 04:57:44 localhost dnsmasq[308969]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 04:57:44 localhost dnsmasq[308969]: warning: no upstream servers configured Nov 23 04:57:44 localhost dnsmasq-dhcp[308969]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 04:57:44 localhost dnsmasq[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/addn_hosts - 0 addresses Nov 23 04:57:44 localhost dnsmasq-dhcp[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/host Nov 23 04:57:44 localhost dnsmasq-dhcp[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/opts Nov 23 04:57:44 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:44.175 262301 INFO neutron.agent.dhcp.agent [None req-326a5eb8-a54f-4e87-bc68-f95d9efe1955 - - - - - -] DHCP configuration for ports {'ca595ff2-5001-4804-9597-c7c1662762ce'} is completed#033[00m Nov 23 04:57:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:47 localhost podman[239764]: time="2025-11-23T09:57:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:57:47 localhost podman[239764]: @ - - [23/Nov/2025:09:57:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158147 "" "Go-http-client/1.1" Nov 23 04:57:47 localhost podman[239764]: @ - - [23/Nov/2025:09:57:47 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19648 "" "Go-http-client/1.1" Nov 23 04:57:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:48 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:48.764 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:47Z, description=, device_id=d7833880-0891-4456-a71b-58120feefed8, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=80d0229b-c00b-48e5-958d-d34b5f25a550, ip_allocation=immediate, mac_address=fa:16:3e:5e:c6:6b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:57:40Z, description=, dns_domain=, id=549f38a9-abf8-434a-9d69-4d818ecbd4f9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-739237220-network, port_security_enabled=True, project_id=9e0eb6249a0548c0ad772871741f0b5d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=22836, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=343, status=ACTIVE, subnets=['67e20cd0-01c1-4359-bbf6-d24231a1089d'], tags=[], tenant_id=9e0eb6249a0548c0ad772871741f0b5d, updated_at=2025-11-23T09:57:41Z, vlan_transparent=None, network_id=549f38a9-abf8-434a-9d69-4d818ecbd4f9, port_security_enabled=False, project_id=9e0eb6249a0548c0ad772871741f0b5d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=368, status=DOWN, tags=[], tenant_id=9e0eb6249a0548c0ad772871741f0b5d, updated_at=2025-11-23T09:57:48Z on network 549f38a9-abf8-434a-9d69-4d818ecbd4f9#033[00m Nov 23 04:57:49 localhost dnsmasq[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/addn_hosts - 1 addresses Nov 23 04:57:49 localhost dnsmasq-dhcp[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/host Nov 23 04:57:49 localhost dnsmasq-dhcp[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/opts Nov 23 04:57:49 localhost podman[308987]: 2025-11-23 09:57:49.009148214 +0000 UTC m=+0.070406983 container kill a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:57:49 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:49.243 262301 INFO neutron.agent.dhcp.agent [None req-bcf0708a-7c80-4252-b8a4-cfb2d9541783 - - - - - -] DHCP configuration for ports {'80d0229b-c00b-48e5-958d-d34b5f25a550'} is completed#033[00m Nov 
23 04:57:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:50 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:50.106 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:47Z, description=, device_id=d7833880-0891-4456-a71b-58120feefed8, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=80d0229b-c00b-48e5-958d-d34b5f25a550, ip_allocation=immediate, mac_address=fa:16:3e:5e:c6:6b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:57:40Z, description=, dns_domain=, id=549f38a9-abf8-434a-9d69-4d818ecbd4f9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-739237220-network, port_security_enabled=True, project_id=9e0eb6249a0548c0ad772871741f0b5d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=22836, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=343, status=ACTIVE, subnets=['67e20cd0-01c1-4359-bbf6-d24231a1089d'], tags=[], tenant_id=9e0eb6249a0548c0ad772871741f0b5d, updated_at=2025-11-23T09:57:41Z, vlan_transparent=None, network_id=549f38a9-abf8-434a-9d69-4d818ecbd4f9, port_security_enabled=False, project_id=9e0eb6249a0548c0ad772871741f0b5d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=368, status=DOWN, tags=[], tenant_id=9e0eb6249a0548c0ad772871741f0b5d, updated_at=2025-11-23T09:57:48Z on network 549f38a9-abf8-434a-9d69-4d818ecbd4f9#033[00m Nov 23 04:57:50 localhost dnsmasq[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/addn_hosts - 1 addresses Nov 23 04:57:50 localhost dnsmasq-dhcp[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/host Nov 23 04:57:50 localhost dnsmasq-dhcp[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/opts Nov 23 04:57:50 localhost podman[309027]: 2025-11-23 09:57:50.336928681 +0000 UTC m=+0.047865841 container kill a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 04:57:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:57:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:57:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
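In the events above, the DHCP agent's reload_allocations ends with podman reporting a "container kill" against the per-network dnsmasq container, immediately after which the same dnsmasq process (pid 308969) re-reads its addn_hosts/host/opts files, so the kill is a reload-style signal rather than a termination. A minimal sketch of that step, assuming direct podman access and a HUP signal; the real agent drives this through its own privileged helpers rather than a helper like this:

    import subprocess

    def reload_dnsmasq(container_name):
        """Send SIGHUP to a containerized dnsmasq so it re-reads its
        addn_hosts/host/opts files (illustrative only)."""
        subprocess.run(
            ["podman", "kill", "--signal", "HUP", container_name],
            check=True,
        )

    # e.g. the per-network container named in the log above:
    # reload_dnsmasq("neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9")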
Nov 23 04:57:50 localhost podman[309043]: 2025-11-23 09:57:50.482399757 +0000 UTC m=+0.097621247 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:57:50 localhost podman[309043]: 2025-11-23 09:57:50.515214075 +0000 UTC m=+0.130435555 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:57:50 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
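The node_exporter pair of events above shows the healthcheck pattern that repeats throughout this capture: systemd starts a transient <container-id>.service wrapping "podman healthcheck run <id>", podman executes the healthcheck test command stored with the container ('/openstack/healthcheck node_exporter' here), logs health_status and exec_died events, and the unit deactivates. A minimal sketch of invoking the same check by hand, assuming only that podman is on PATH; exit status 0 means healthy:

    import subprocess

    def is_healthy(container):
        """Run the container's built-in healthcheck once and report the
        result based on the exit status."""
        proc = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True,
            text=True,
        )
        return proc.returncode == 0

    # e.g. is_healthy("node_exporter") for the container above.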
Nov 23 04:57:50 localhost podman[309041]: 2025-11-23 09:57:50.531131314 +0000 UTC m=+0.151744660 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 04:57:50 localhost podman[309041]: 2025-11-23 09:57:50.548377073 +0000 UTC m=+0.168990409 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 04:57:50 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:57:50 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:50.595 262301 INFO neutron.agent.dhcp.agent [None req-73f11f9e-66b3-46dd-b5e6-883b2cabc749 - - - - - -] DHCP configuration for ports {'80d0229b-c00b-48e5-958d-d34b5f25a550'} is completed#033[00m Nov 23 04:57:50 localhost podman[309049]: 2025-11-23 09:57:50.632833776 +0000 UTC m=+0.240569047 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:57:50 localhost podman[309049]: 2025-11-23 09:57:50.709326944 +0000 UTC m=+0.317062245 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:57:50 localhost systemd[1]: 
900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:57:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:52 localhost ovn_metadata_agent[159410]: 2025-11-23 09:57:52.774 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:57:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:57:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:57:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:57:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:57:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:57:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:57:54 localhost neutron_sriov_agent[255165]: 2025-11-23 09:57:54.040 2 INFO neutron.agent.securitygroups_rpc [None req-c244d218-e6b7-4260-9702-7bb508c7ef68 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Security group member updated ['ff44a28d-1e1f-4163-b206-fdf77022bf0b']#033[00m Nov 23 04:57:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:54.076 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:53Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=27d340a7-60a4-4a73-9f16-bae5ab3411da, ip_allocation=immediate, mac_address=fa:16:3e:fe:c3:5c, name=tempest-parent-2092561411, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:57:31Z, description=, dns_domain=, id=81348c6d-951a-4399-8703-476056b57fe9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1707444454-network, port_security_enabled=True, project_id=a2148c18d8f24a6db12dc22c787e8b2e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10752, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=258, status=ACTIVE, subnets=['5ca4204f-7ec4-4ef6-aad3-d2fb61424a58'], tags=[], tenant_id=a2148c18d8f24a6db12dc22c787e8b2e, updated_at=2025-11-23T09:57:31Z, vlan_transparent=None, network_id=81348c6d-951a-4399-8703-476056b57fe9, port_security_enabled=True, project_id=a2148c18d8f24a6db12dc22c787e8b2e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, 
security_groups=['ff44a28d-1e1f-4163-b206-fdf77022bf0b'], standard_attr_id=404, status=DOWN, tags=[], tenant_id=a2148c18d8f24a6db12dc22c787e8b2e, updated_at=2025-11-23T09:57:53Z on network 81348c6d-951a-4399-8703-476056b57fe9#033[00m Nov 23 04:57:54 localhost podman[309130]: 2025-11-23 09:57:54.326178399 +0000 UTC m=+0.063244072 container kill 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:57:54 localhost dnsmasq[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/addn_hosts - 2 addresses Nov 23 04:57:54 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/host Nov 23 04:57:54 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/opts Nov 23 04:57:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:57:54.547 262301 INFO neutron.agent.dhcp.agent [None req-20e9c875-ede8-44ed-9ff8-9ad06d57212f - - - - - -] DHCP configuration for ports {'27d340a7-60a4-4a73-9f16-bae5ab3411da'} is completed#033[00m Nov 23 04:57:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e89 do_prune osdmap full prune enabled Nov 23 04:57:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e90 e90: 6 total, 6 up, 6 in Nov 23 04:57:55 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e90: 6 total, 6 up, 6 in Nov 23 04:57:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 23 04:57:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:57:58 localhost neutron_sriov_agent[255165]: 2025-11-23 09:57:58.476 2 INFO neutron.agent.securitygroups_rpc [None req-d68d9a06-c0e6-4bcf-9e27-a376d467ec2a 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Security group member updated ['ff44a28d-1e1f-4163-b206-fdf77022bf0b']#033[00m Nov 23 04:57:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
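The ceph-mgr pgmap lines repeated through this capture carry the cluster's live capacity figures (177 PGs active+clean, 693 MiB used out of 42 GiB at this point). When reviewing a capture like this one, those numbers can be pulled out mechanically; the regex below is written against the exact wording seen above and is illustrative, not a stable Ceph interface:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs:.*?"
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
    )

    def parse_pgmap(line):
        """Extract version, PG count and capacity fields from a ceph-mgr
        pgmap debug line; returns None if the line does not match."""
        m = PGMAP_RE.search(line)
        return m.groupdict() if m else None

    # parse_pgmap("... pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, "
    #             "693 MiB used, 41 GiB / 42 GiB avail")
    # -> {'version': '82', 'pgs': '177', 'data': '145 MiB', 'used': '693 MiB',
    #     'avail': '41 GiB', 'total': '42 GiB'}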
Nov 23 04:57:58 localhost podman[309150]: 2025-11-23 09:57:58.897892122 +0000 UTC m=+0.083136533 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 23 04:57:58 localhost podman[309150]: 2025-11-23 09:57:58.915566615 +0000 UTC m=+0.100811006 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 23 04:57:58 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 04:57:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s Nov 23 04:58:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e90 do_prune osdmap full prune enabled Nov 23 04:58:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e91 e91: 6 total, 6 up, 6 in Nov 23 04:58:00 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e91: 6 total, 6 up, 6 in Nov 23 04:58:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s Nov 23 04:58:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:58:02 localhost systemd[1]: tmp-crun.V78RLo.mount: Deactivated successfully. 
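Everything printed in parentheses in these podman events (config_data, config_id, managed_by and the image labels) travels as container labels, so the same deployment metadata can be read back from the host. A short sketch, assuming local podman access and label names exactly as they appear above:

    import json
    import subprocess

    def container_labels(name):
        """Return the label map of a container via 'podman inspect'."""
        out = subprocess.run(
            ["podman", "inspect", name],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)[0]["Config"]["Labels"]

    labels = container_labels("openstack_network_exporter")
    print(labels.get("config_id"), labels.get("managed_by"))
    # The config_data label prints as a Python literal in this log; if it is
    # stored in that form, ast.literal_eval() can decode it into a dict.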
Nov 23 04:58:02 localhost podman[309171]: 2025-11-23 09:58:02.907415384 +0000 UTC m=+0.092204812 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:58:02 localhost podman[309171]: 2025-11-23 09:58:02.91606754 +0000 UTC m=+0.100856968 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm) Nov 23 04:58:02 localhost podman[309172]: 2025-11-23 09:58:02.951799487 +0000 UTC m=+0.131576131 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:58:02 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:58:02 localhost podman[309172]: 2025-11-23 09:58:02.984642605 +0000 UTC m=+0.164419299 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:58:02 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
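Each "<container-id>.service: Deactivated successfully." message above is systemd tearing down the transient unit it created for a single healthcheck run. Whether a given run ended cleanly can also be read back from systemd while the unit record still exists; a brief sketch, assuming systemctl access on the host and a unit name copied exactly as it appears in the log:

    import subprocess

    def unit_result(unit):
        """Query systemd for the outcome of a (possibly finished) unit."""
        out = subprocess.run(
            ["systemctl", "show", unit, "--property=ActiveState,SubState,Result"],
            capture_output=True, text=True, check=True,
        ).stdout
        return dict(line.split("=", 1) for line in out.splitlines() if "=" in line)

    # unit_result("a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service")
    # -> e.g. {'ActiveState': 'inactive', 'SubState': 'dead', 'Result': 'success'}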
Nov 23 04:58:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v87: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s Nov 23 04:58:05 localhost nova_compute[280939]: 2025-11-23 09:58:05.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 3.0 KiB/s wr, 39 op/s Nov 23 04:58:05 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:05.857 262301 INFO neutron.agent.linux.ip_lib [None req-fac0b047-8786-4795-8c3f-f9adc4235017 - - - - - -] Device tap3f6ffc5e-50 cannot be used as it has no MAC address#033[00m Nov 23 04:58:05 localhost kernel: device tap3f6ffc5e-50 entered promiscuous mode Nov 23 04:58:05 localhost ovn_controller[153771]: 2025-11-23T09:58:05Z|00041|binding|INFO|Claiming lport 3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac for this chassis. Nov 23 04:58:05 localhost ovn_controller[153771]: 2025-11-23T09:58:05Z|00042|binding|INFO|3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac: Claiming unknown Nov 23 04:58:05 localhost NetworkManager[5966]: [1763891885.9321] manager: (tap3f6ffc5e-50): new Generic device (/org/freedesktop/NetworkManager/Devices/16) Nov 23 04:58:05 localhost systemd-udevd[309222]: Network interface NamePolicy= disabled on kernel command line. Nov 23 04:58:05 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:05.940 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-d679e465-8656-4403-afa0-724657d33ec4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d679e465-8656-4403-afa0-724657d33ec4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '253c88568a634476a6c1284eed6a9464', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a90e812-f218-49cd-a3ab-6bc1317ad730, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:05 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:05.942 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac in datapath d679e465-8656-4403-afa0-724657d33ec4 bound to our chassis#033[00m Nov 23 04:58:05 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:05.945 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port a4ae488d-6e50-4466-beba-eaab4efb551d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] 
_get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 04:58:05 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:05.945 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d679e465-8656-4403-afa0-724657d33ec4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:58:05 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:05.946 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[6008dfad-c407-4d54-8913-cb51a66b7848]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:05 localhost journal[229336]: ethtool ioctl error on tap3f6ffc5e-50: No such device Nov 23 04:58:05 localhost ovn_controller[153771]: 2025-11-23T09:58:05Z|00043|binding|INFO|Setting lport 3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac ovn-installed in OVS Nov 23 04:58:05 localhost ovn_controller[153771]: 2025-11-23T09:58:05Z|00044|binding|INFO|Setting lport 3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac up in Southbound Nov 23 04:58:05 localhost journal[229336]: ethtool ioctl error on tap3f6ffc5e-50: No such device Nov 23 04:58:05 localhost journal[229336]: ethtool ioctl error on tap3f6ffc5e-50: No such device Nov 23 04:58:05 localhost journal[229336]: ethtool ioctl error on tap3f6ffc5e-50: No such device Nov 23 04:58:05 localhost journal[229336]: ethtool ioctl error on tap3f6ffc5e-50: No such device Nov 23 04:58:05 localhost journal[229336]: ethtool ioctl error on tap3f6ffc5e-50: No such device Nov 23 04:58:05 localhost journal[229336]: ethtool ioctl error on tap3f6ffc5e-50: No such device Nov 23 04:58:05 localhost journal[229336]: ethtool ioctl error on tap3f6ffc5e-50: No such device Nov 23 04:58:06 localhost nova_compute[280939]: 2025-11-23 09:58:06.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:06 localhost nova_compute[280939]: 2025-11-23 09:58:06.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:06 localhost nova_compute[280939]: 2025-11-23 09:58:06.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 23 04:58:06 localhost nova_compute[280939]: 2025-11-23 09:58:06.164 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 23 04:58:06 localhost openstack_network_exporter[241732]: ERROR 09:58:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:58:06 localhost openstack_network_exporter[241732]: ERROR 09:58:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:58:06 localhost openstack_network_exporter[241732]: ERROR 09:58:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd 
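The ovn_metadata_agent entries above show its OVSDB event loop matching a Port_Binding update (the PortBindingUpdatedEvent with events=('update',), table='Port_Binding') and concluding that port 3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac is bound to this chassis. A condensed sketch of what such an ovsdbapp event class looks like; the registration against the agent's IDL connection is omitted, and the real agent's implementation is more involved:

    from ovsdbapp.backend.ovs_idl import event as row_event


    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire on Port_Binding row updates, mirroring the event matched
        in the "Matched UPDATE: PortBindingUpdatedEvent(...)" line above."""

        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            # (events, table, conditions): the same triple printed in the log.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked by the IDL notify handler once the row matches; the
            # agent then checks whether the port landed on our chassis.
            requested = row.options.get('requested-chassis', '')
            if requested == self.chassis_name:
                print('Port %s bound to our chassis' % row.logical_port)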
Nov 23 04:58:06 localhost openstack_network_exporter[241732]: ERROR 09:58:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:58:06 localhost openstack_network_exporter[241732]: Nov 23 04:58:06 localhost openstack_network_exporter[241732]: ERROR 09:58:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:58:06 localhost openstack_network_exporter[241732]: Nov 23 04:58:06 localhost podman[309295]: Nov 23 04:58:06 localhost podman[309295]: 2025-11-23 09:58:06.925178589 +0000 UTC m=+0.095054259 container create 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:58:06 localhost systemd[1]: Started libpod-conmon-5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473.scope. Nov 23 04:58:06 localhost podman[309295]: 2025-11-23 09:58:06.878589938 +0000 UTC m=+0.048465628 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 04:58:06 localhost systemd[1]: Started libcrun container. Nov 23 04:58:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc87de287b85c9211b7becc819793dcba1621a8b3227ff3622d2ce5fb11b42c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:58:07 localhost podman[309295]: 2025-11-23 09:58:07.00503523 +0000 UTC m=+0.174910900 container init 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 04:58:07 localhost podman[309295]: 2025-11-23 09:58:07.018902036 +0000 UTC m=+0.188777706 container start 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 23 04:58:07 localhost dnsmasq[309314]: started, version 2.85 cachesize 150 Nov 23 04:58:07 localhost dnsmasq[309314]: DNS service limited to local subnets Nov 23 04:58:07 localhost dnsmasq[309314]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 04:58:07 localhost dnsmasq[309314]: warning: no upstream servers configured Nov 23 04:58:07 
localhost dnsmasq-dhcp[309314]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 04:58:07 localhost dnsmasq[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/addn_hosts - 0 addresses Nov 23 04:58:07 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/host Nov 23 04:58:07 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/opts Nov 23 04:58:07 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:07.160 262301 INFO neutron.agent.dhcp.agent [None req-417a0dac-24b5-4ad9-8abc-0d4535d16243 - - - - - -] DHCP configuration for ports {'9b50ca15-3b72-42c0-998b-33441ea57460'} is completed#033[00m Nov 23 04:58:07 localhost nova_compute[280939]: 2025-11-23 09:58:07.165 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v89: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 3.0 KiB/s wr, 39 op/s Nov 23 04:58:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e91 do_prune osdmap full prune enabled Nov 23 04:58:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 e92: 6 total, 6 up, 6 in Nov 23 04:58:07 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e92: 6 total, 6 up, 6 in Nov 23 04:58:07 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:07.678 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:07Z, description=, device_id=a142a3f2-a258-4397-9ea5-84e0cd42ff93, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4dba18fe-34a7-4408-95ce-a6c3c8755c68, ip_allocation=immediate, mac_address=fa:16:3e:63:4d:97, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:58:02Z, description=, dns_domain=, id=d679e465-8656-4403-afa0-724657d33ec4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-49202206-network, port_security_enabled=True, project_id=253c88568a634476a6c1284eed6a9464, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53014, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=483, status=ACTIVE, subnets=['bb39aec6-4f19-4dfe-a775-a545c3d0f74a'], tags=[], tenant_id=253c88568a634476a6c1284eed6a9464, updated_at=2025-11-23T09:58:03Z, vlan_transparent=None, network_id=d679e465-8656-4403-afa0-724657d33ec4, port_security_enabled=False, project_id=253c88568a634476a6c1284eed6a9464, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=497, status=DOWN, tags=[], tenant_id=253c88568a634476a6c1284eed6a9464, updated_at=2025-11-23T09:58:07Z on network d679e465-8656-4403-afa0-724657d33ec4#033[00m Nov 23 
04:58:07 localhost dnsmasq[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/addn_hosts - 1 addresses Nov 23 04:58:07 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/host Nov 23 04:58:07 localhost podman[309331]: 2025-11-23 09:58:07.903923328 +0000 UTC m=+0.064504251 container kill 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:58:07 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/opts Nov 23 04:58:07 localhost systemd[1]: tmp-crun.KdSiqt.mount: Deactivated successfully. Nov 23 04:58:08 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:08.107 262301 INFO neutron.agent.dhcp.agent [None req-f5c960ab-dc27-489c-b849-2a79eb5b58ed - - - - - -] DHCP configuration for ports {'4dba18fe-34a7-4408-95ce-a6c3c8755c68'} is completed#033[00m Nov 23 04:58:08 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:08.521 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005532586.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:53Z, description=, device_id=76d6f171-13c9-4730-8ed3-ab467ef6831a, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-liveautoblockmigrationv225test-server-151326874, extra_dhcp_opts=[], fixed_ips=[], id=27d340a7-60a4-4a73-9f16-bae5ab3411da, ip_allocation=immediate, mac_address=fa:16:3e:fe:c3:5c, name=tempest-parent-2092561411, network_id=81348c6d-951a-4399-8703-476056b57fe9, port_security_enabled=True, project_id=a2148c18d8f24a6db12dc22c787e8b2e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['ff44a28d-1e1f-4163-b206-fdf77022bf0b'], standard_attr_id=404, status=DOWN, tags=[], tenant_id=a2148c18d8f24a6db12dc22c787e8b2e, trunk_details=sub_ports=[], trunk_id=c096332d-2835-45dd-944d-79d0f9cdb00a, updated_at=2025-11-23T09:58:08Z on network 81348c6d-951a-4399-8703-476056b57fe9#033[00m Nov 23 04:58:08 localhost podman[309369]: 2025-11-23 09:58:08.744374942 +0000 UTC m=+0.052094530 container kill 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:58:08 localhost dnsmasq[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/addn_hosts - 2 addresses Nov 23 04:58:08 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/host Nov 23 04:58:08 localhost 
dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/opts Nov 23 04:58:08 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:08.959 262301 INFO neutron.agent.dhcp.agent [None req-4f3b7656-de18-4673-9985-d3d5d09a0f43 - - - - - -] DHCP configuration for ports {'27d340a7-60a4-4a73-9f16-bae5ab3411da'} is completed#033[00m Nov 23 04:58:09 localhost nova_compute[280939]: 2025-11-23 09:58:09.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:09 localhost nova_compute[280939]: 2025-11-23 09:58:09.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:58:09 localhost nova_compute[280939]: 2025-11-23 09:58:09.135 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:58:09 localhost nova_compute[280939]: 2025-11-23 09:58:09.152 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:58:09 localhost nova_compute[280939]: 2025-11-23 09:58:09.153 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:09 localhost nova_compute[280939]: 2025-11-23 09:58:09.153 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:58:09 localhost nova_compute[280939]: 2025-11-23 09:58:09.154 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v91: 177 pgs: 177 active+clean; 170 MiB data, 705 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 628 KiB/s wr, 57 op/s Nov 23 04:58:09 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:09.697 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:07Z, description=, device_id=a142a3f2-a258-4397-9ea5-84e0cd42ff93, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4dba18fe-34a7-4408-95ce-a6c3c8755c68, ip_allocation=immediate, mac_address=fa:16:3e:63:4d:97, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:58:02Z, description=, dns_domain=, id=d679e465-8656-4403-afa0-724657d33ec4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-49202206-network, port_security_enabled=True, project_id=253c88568a634476a6c1284eed6a9464, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53014, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=483, status=ACTIVE, subnets=['bb39aec6-4f19-4dfe-a775-a545c3d0f74a'], tags=[], tenant_id=253c88568a634476a6c1284eed6a9464, updated_at=2025-11-23T09:58:03Z, vlan_transparent=None, network_id=d679e465-8656-4403-afa0-724657d33ec4, port_security_enabled=False, project_id=253c88568a634476a6c1284eed6a9464, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=497, status=DOWN, tags=[], tenant_id=253c88568a634476a6c1284eed6a9464, updated_at=2025-11-23T09:58:07Z on network d679e465-8656-4403-afa0-724657d33ec4#033[00m Nov 23 04:58:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:09.740 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:09.741 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:09.741 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:09 localhost dnsmasq[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/addn_hosts - 1 addresses 
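The three "_check_child_processes" lines above (Acquiring lock / acquired / released) are oslo.concurrency's standard instrumentation around a named lock. That pattern is typically produced by the synchronized decorator or the lock() context manager; a minimal sketch, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils


    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Body runs with the named lock held; lockutils emits the
        # acquire/release debug lines seen in the log around it.
        pass


    def check_child_processes_ctx():
        # Equivalent explicit form using the context manager.
        with lockutils.lock('_check_child_processes'):
            pass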
Nov 23 04:58:09 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/host Nov 23 04:58:09 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/opts Nov 23 04:58:09 localhost systemd[1]: tmp-crun.8vsElX.mount: Deactivated successfully. Nov 23 04:58:09 localhost podman[309408]: 2025-11-23 09:58:09.908146704 +0000 UTC m=+0.062995546 container kill 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:10 localhost nova_compute[280939]: 2025-11-23 09:58:10.147 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:10 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:10.184 262301 INFO neutron.agent.dhcp.agent [None req-bd44a4fe-1467-4190-a9cf-1e3c05e72354 - - - - - -] DHCP configuration for ports {'4dba18fe-34a7-4408-95ce-a6c3c8755c68'} is completed#033[00m Nov 23 04:58:10 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:10.582 2 INFO neutron.agent.securitygroups_rpc [req-3532c496-51d7-40c7-b3da-c0e7be1692a4 req-d87dead5-02e8-46e7-bd25-42d652af07f6 b79ba98acc3c4b3580a3847feb119c9b 103398a293414a3081333eb24455a6bd - - default default] Security group rule updated ['280efa91-c004-412c-b87a-91a6eef9493c']#033[00m Nov 23 04:58:11 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:11.078 2 INFO neutron.agent.securitygroups_rpc [req-ce34b552-7369-4724-9e28-4e57bb3059bd req-8a77dff8-1df4-4326-b30f-4088438850bd b79ba98acc3c4b3580a3847feb119c9b 103398a293414a3081333eb24455a6bd - - default default] Security group rule updated ['280efa91-c004-412c-b87a-91a6eef9493c']#033[00m Nov 23 04:58:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v92: 177 pgs: 177 active+clean; 170 MiB data, 705 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 558 KiB/s wr, 51 op/s Nov 23 04:58:12 localhost nova_compute[280939]: 2025-11-23 09:58:12.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:12 localhost nova_compute[280939]: 2025-11-23 09:58:12.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:12 localhost nova_compute[280939]: 2025-11-23 09:58:12.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 23 04:58:12 localhost ceph-mon[293353]: 
mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.144 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.171 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.172 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.172 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.173 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.173 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v93: 177 pgs: 177 active+clean; 170 MiB data, 705 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 558 KiB/s wr, 51 op/s Nov 23 04:58:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:58:13 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2677225766' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.611 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.824 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.825 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11900MB free_disk=41.78293991088867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.825 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:13 localhost nova_compute[280939]: 2025-11-23 09:58:13.826 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:13 localhost podman[309452]: 2025-11-23 09:58:13.903180201 +0000 UTC m=+0.084892798 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:13 localhost podman[309452]: 2025-11-23 09:58:13.911405444 +0000 UTC m=+0.093118041 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:13 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.083 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.083 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.184 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.251 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.251 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.268 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.302 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: 
COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.328 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:14 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:58:14 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1097202376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.781 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.787 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.814 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.816 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:58:14 localhost nova_compute[280939]: 2025-11-23 09:58:14.817 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v94: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 52 op/s Nov 23 04:58:15 localhost nova_compute[280939]: 2025-11-23 09:58:15.806 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:58:17 localhost podman[239764]: time="2025-11-23T09:58:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:58:17 localhost podman[239764]: @ - - [23/Nov/2025:09:58:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159971 "" "Go-http-client/1.1" Nov 23 04:58:17 localhost podman[239764]: @ - - [23/Nov/2025:09:58:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20132 "" "Go-http-client/1.1" Nov 23 04:58:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 2.1 
MiB/s rd, 2.1 MiB/s wr, 52 op/s Nov 23 04:58:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:18 localhost ovn_controller[153771]: 2025-11-23T09:58:18Z|00045|memory|INFO|peak resident set size grew 52% in last 2263.1 seconds, from 12924 kB to 19680 kB Nov 23 04:58:18 localhost ovn_controller[153771]: 2025-11-23T09:58:18Z|00046|memory|INFO|idl-cells-OVN_Southbound:9671 idl-cells-Open_vSwitch:1041 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:152 lflow-cache-entries-cache-matches:219 lflow-cache-size-KB:561 local_datapath_usage-KB:2 ofctrl_desired_flow_usage-KB:286 ofctrl_installed_flow_usage-KB:209 ofctrl_sb_flow_ref_usage-KB:112 Nov 23 04:58:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 81 op/s Nov 23 04:58:19 localhost nova_compute[280939]: 2025-11-23 09:58:19.823 280943 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Creating tmpfile /var/lib/nova/instances/tmpbkkhnjeq to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Nov 23 04:58:20 localhost nova_compute[280939]: 2025-11-23 09:58:20.358 280943 DEBUG nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpbkkhnjeq',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Nov 23 04:58:20 localhost nova_compute[280939]: 2025-11-23 09:58:20.387 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:58:20 localhost nova_compute[280939]: 2025-11-23 09:58:20.387 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:58:20 localhost nova_compute[280939]: 2025-11-23 09:58:20.394 280943 INFO nova.compute.rpcapi [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Automatically selected compute 
RPC version 6.2 from minimum service version 66#033[00m Nov 23 04:58:20 localhost nova_compute[280939]: 2025-11-23 09:58:20.395 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:58:20 localhost podman[309492]: 2025-11-23 09:58:20.890133468 +0000 UTC m=+0.073774336 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2) Nov 23 04:58:20 localhost podman[309492]: 2025-11-23 09:58:20.903282222 +0000 UTC m=+0.086923100 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:20 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:58:20 localhost podman[309494]: 2025-11-23 09:58:20.959943631 +0000 UTC m=+0.135833082 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 04:58:21 localhost systemd[1]: tmp-crun.3Lv4vd.mount: Deactivated successfully. 
Nov 23 04:58:21 localhost podman[309493]: 2025-11-23 09:58:21.041203226 +0000 UTC m=+0.219485071 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:58:21 localhost podman[309494]: 2025-11-23 09:58:21.068425042 +0000 UTC m=+0.244314513 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:21 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:58:21 localhost podman[309493]: 2025-11-23 09:58:21.124308577 +0000 UTC m=+0.302590422 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:58:21 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:58:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 58 op/s Nov 23 04:58:21 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:21.558 2 INFO neutron.agent.securitygroups_rpc [None req-513d9ac5-08dd-4555-997f-809230181da7 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Security group rule updated ['2da1104f-77c5-475e-b21f-e52710edc8b5']#033[00m Nov 23 04:58:21 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:21.708 2 INFO neutron.agent.securitygroups_rpc [None req-7741ab62-6798-4d09-b205-555af43d015d 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Security group rule updated ['2da1104f-77c5-475e-b21f-e52710edc8b5']#033[00m Nov 23 04:58:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:22 localhost nova_compute[280939]: 2025-11-23 09:58:22.606 280943 DEBUG nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] pre_live_migration data is 
LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpbkkhnjeq',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76d6f171-13c9-4730-8ed3-ab467ef6831a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Nov 23 04:58:22 localhost nova_compute[280939]: 2025-11-23 09:58:22.632 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquiring lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:58:22 localhost nova_compute[280939]: 2025-11-23 09:58:22.633 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquired lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:58:22 localhost nova_compute[280939]: 2025-11-23 09:58:22.633 280943 DEBUG nova.network.neutron [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Nov 23 04:58:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_09:58:23 Nov 23 04:58:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 04:58:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 04:58:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['volumes', 'backups', 'manila_data', '.mgr', 'manila_metadata', 'images', 'vms'] Nov 23 04:58:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 04:58:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 58 op/s Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004817926437744277 of space, bias 1.0, pg target 0.9635852875488554 quantized to 32 (current 32) Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:58:23 localhost 
ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8570103846780196 quantized to 32 (current 32) Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:58:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001949853433835846 quantized to 16 (current 16) Nov 23 04:58:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:58:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:58:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:58:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:58:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 23 04:58:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:58:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 04:58:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:58:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:58:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:58:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:58:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 04:58:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:58:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:58:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:58:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.320 280943 DEBUG nova.network.neutron [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Updating instance_info_cache with network_info: [{"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.336 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Releasing lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.340 280943 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] migrate_data in pre_live_migration: 
LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpbkkhnjeq',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76d6f171-13c9-4730-8ed3-ab467ef6831a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.341 280943 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Creating instance directory: /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.342 280943 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Ensure instance console log exists: /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.342 280943 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Plugging VIFs using destination host port bindings before live migration. 
_pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.344 280943 DEBUG nova.virt.libvirt.vif [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T09:58:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-151326874',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005532586.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-151326874',id=6,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-11-23T09:58:16Z,launched_on='np0005532586.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005532586.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a2148c18d8f24a6db12dc22c787e8b2e',ramdisk_id='',reservation_id='r-6eghyq4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1734069518',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1734069518-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-11-23T09:58:16Z,user_data=None,user_id='9a28cb0574d148bf982a2a1a0b495020',uuid=76d6f171-13c9-4730-8ed3-ab467ef6831a,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, 
"meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.345 280943 DEBUG nova.network.os_vif_util [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Converting VIF {"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.346 280943 DEBUG nova.network.os_vif_util [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.347 280943 DEBUG os_vif [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.431 280943 DEBUG ovsdbapp.backend.ovs_idl [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.431 280943 DEBUG ovsdbapp.backend.ovs_idl [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 23 04:58:24 
localhost nova_compute[280939]: 2025-11-23 09:58:24.431 280943 DEBUG ovsdbapp.backend.ovs_idl [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.432 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.432 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [POLLOUT] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.433 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.433 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.435 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.438 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.456 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.456 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.457 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.458 280943 INFO oslo.privsep.daemon [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', 
'/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpz9q8el50/privsep.sock']#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.828 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Acquiring lock "1148b5a9-4da9-491f-8952-80c4a965fe6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.829 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.847 280943 DEBUG nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.908 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.908 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.913 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Nov 23 04:58:24 localhost nova_compute[280939]: 2025-11-23 09:58:24.914 280943 INFO nova.compute.claims [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Claim successful on node np0005532584.localdomain#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.028 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.115 280943 INFO oslo.privsep.daemon [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Spawned new privsep daemon via rootwrap#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.010 309561 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.016 309561 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.020 309561 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.020 309561 INFO oslo.privsep.daemon [-] privsep daemon running as pid 309561#033[00m Nov 23 04:58:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 85 op/s Nov 23 04:58:25 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:25.326 2 INFO neutron.agent.securitygroups_rpc [None req-ec9f2257-2897-484b-a0ca-c8a73a80ef4d 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Security group member updated ['a3350144-9b09-432b-a32e-ef84bb8bf494']#033[00m Nov 23 04:58:25 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:25.361 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:24Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=737e82a6-2634-47df-b8a7-ec21a927cc3f, ip_allocation=immediate, mac_address=fa:16:3e:da:21:74, name=tempest-parent-1925970765, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:58:02Z, description=, dns_domain=, id=d679e465-8656-4403-afa0-724657d33ec4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-49202206-network, port_security_enabled=True, project_id=253c88568a634476a6c1284eed6a9464, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53014, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=483, 
status=ACTIVE, subnets=['bb39aec6-4f19-4dfe-a775-a545c3d0f74a'], tags=[], tenant_id=253c88568a634476a6c1284eed6a9464, updated_at=2025-11-23T09:58:03Z, vlan_transparent=None, network_id=d679e465-8656-4403-afa0-724657d33ec4, port_security_enabled=True, project_id=253c88568a634476a6c1284eed6a9464, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a3350144-9b09-432b-a32e-ef84bb8bf494'], standard_attr_id=645, status=DOWN, tags=[], tenant_id=253c88568a634476a6c1284eed6a9464, updated_at=2025-11-23T09:58:25Z on network d679e465-8656-4403-afa0-724657d33ec4#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.402 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.403 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27d340a7-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.404 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27d340a7-60, col_values=(('external_ids', {'iface-id': '27d340a7-60a4-4a73-9f16-bae5ab3411da', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:c3:5c', 'vm-uuid': '76d6f171-13c9-4730-8ed3-ab467ef6831a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.406 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.412 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.414 280943 INFO os_vif [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60')#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.415 280943 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. 
pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.416 280943 DEBUG nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpbkkhnjeq',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76d6f171-13c9-4730-8ed3-ab467ef6831a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Nov 23 04:58:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:58:25 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1353440912' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.460 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.468 280943 DEBUG nova.compute.provider_tree [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.484 280943 DEBUG nova.scheduler.client.report [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.511 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.602s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.512 280943 DEBUG nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Nov 23 04:58:25 localhost podman[309606]: 2025-11-23 09:58:25.5370734 +0000 UTC m=+0.041184145 container kill 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:58:25 localhost dnsmasq[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/addn_hosts - 2 addresses Nov 23 04:58:25 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/host Nov 23 04:58:25 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/opts Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.560 280943 DEBUG nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.560 280943 DEBUG nova.network.neutron [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.574 280943 INFO nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.594 280943 DEBUG nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.705 280943 DEBUG nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Start spawning the instance on the hypervisor. 
_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.707 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.708 280943 INFO nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Creating image(s)#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.748 280943 DEBUG nova.storage.rbd_utils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] rbd image 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.786 280943 DEBUG nova.storage.rbd_utils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] rbd image 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:25 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:25.791 262301 INFO neutron.agent.dhcp.agent [None req-06b8bc03-1520-4745-9b42-17ca83be52e7 - - - - - -] DHCP configuration for ports {'737e82a6-2634-47df-b8a7-ec21a927cc3f'} is completed#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.824 280943 DEBUG nova.storage.rbd_utils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] rbd image 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.829 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Acquiring lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.830 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.872 280943 DEBUG nova.virt.libvirt.imagebackend [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Image locations are: [{'url': 'rbd://46550e70-79cb-5f55-bf6d-1204b97e083b/images/c5806483-57a8-4254-b41b-254b888c8606/snap', 'metadata': 
{'store': 'default_backend'}}, {'url': 'rbd://46550e70-79cb-5f55-bf6d-1204b97e083b/images/c5806483-57a8-4254-b41b-254b888c8606/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.899 280943 WARNING oslo_policy.policy [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.900 280943 WARNING oslo_policy.policy [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Nov 23 04:58:25 localhost nova_compute[280939]: 2025-11-23 09:58:25.904 280943 DEBUG nova.policy [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '492e2909a77a4032ab6c29a26d12fb14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0497de4959b2494e8036eb39226430d6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Nov 23 04:58:26 localhost nova_compute[280939]: 2025-11-23 09:58:26.459 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:26 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:26.560 2 INFO neutron.agent.securitygroups_rpc [req-dc62ce45-8668-47e6-9d5e-2f0b1764537e req-34d4dcd5-73f6-46e0-ba5e-aabbd18e768e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Security group member updated ['2da1104f-77c5-475e-b21f-e52710edc8b5']#033[00m Nov 23 04:58:26 localhost nova_compute[280939]: 2025-11-23 09:58:26.724 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:26 localhost nova_compute[280939]: 2025-11-23 09:58:26.793 280943 DEBUG oslo_concurrency.processutils [None 
req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.part --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:26 localhost nova_compute[280939]: 2025-11-23 09:58:26.795 280943 DEBUG nova.virt.images [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] c5806483-57a8-4254-b41b-254b888c8606 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m Nov 23 04:58:26 localhost nova_compute[280939]: 2025-11-23 09:58:26.797 280943 DEBUG nova.privsep.utils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Nov 23 04:58:26 localhost nova_compute[280939]: 2025-11-23 09:58:26.797 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.part /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:26 localhost nova_compute[280939]: 2025-11-23 09:58:26.994 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.part /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.converted" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:26 localhost nova_compute[280939]: 2025-11-23 09:58:26.998 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.067 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.converted --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.068 
280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 1.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.104 280943 DEBUG nova.storage.rbd_utils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] rbd image 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.109 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.225 280943 DEBUG nova.network.neutron [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Successfully created port: a1846659-6b91-4156-9939-085b30454143 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m Nov 23 04:58:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s Nov 23 04:58:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:27 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. 
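
The entries above trace the image-cache path for instance 1148b5a9-4da9-491f-8952-80c4a965fe6b: qemu-img info runs under oslo_concurrency.prlimit, image c5806483 is found to be qcow2 and converted to raw, and the resulting base file is imported into the Ceph "vms" pool with rbd import. The sketch below simply replays that command sequence with subprocess; the command strings are copied from the processutils entries, while the run() helper and variable names are illustrative only and not part of nova (the real logic lives in nova/virt/images.py and nova/virt/libvirt/imagebackend.py).

# Sketch: replay the image-preparation commands recorded above (assumptions noted in the lead-in).
import json
import subprocess

BASE = "/var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a"

def run(cmd):
    # processutils.execute() logs "Running cmd" / "returned: 0 in Ns"; here we just run it.
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Inspect the fetched image under resource limits (as in the prlimit entry).
info = json.loads(run([
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824", "--cpu=30", "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", BASE + ".part", "--force-share", "--output=json"]))

# 2. "was qcow2, converting to raw" -> qemu-img convert with host caching disabled.
if info.get("format") == "qcow2":
    run(["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
         BASE + ".part", BASE + ".converted"])

# 3. Import the raw base image into the 'vms' pool as <instance_uuid>_disk.
run(["rbd", "import", "--pool", "vms", BASE,
     "1148b5a9-4da9-491f-8952-80c4a965fe6b_disk",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
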
Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.696 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.743 280943 DEBUG nova.network.neutron [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Port 27d340a7-60a4-4a73-9f16-bae5ab3411da updated with migration profile {'migrating_to': 'np0005532584.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.745 280943 DEBUG nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpbkkhnjeq',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76d6f171-13c9-4730-8ed3-ab467ef6831a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.807 280943 DEBUG nova.storage.rbd_utils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] resizing rbd image 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Nov 23 04:58:27 localhost sshd[309788]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.960 280943 DEBUG nova.network.neutron [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Successfully updated port: a1846659-6b91-4156-9939-085b30454143 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.969 280943 DEBUG nova.objects.instance [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lazy-loading 'migration_context' on Instance uuid 1148b5a9-4da9-491f-8952-80c4a965fe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.976 280943 DEBUG 
oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Acquiring lock "refresh_cache-1148b5a9-4da9-491f-8952-80c4a965fe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.977 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Acquired lock "refresh_cache-1148b5a9-4da9-491f-8952-80c4a965fe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.978 280943 DEBUG nova.network.neutron [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.981 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.982 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Ensure instance console log exists: /var/lib/nova/instances/1148b5a9-4da9-491f-8952-80c4a965fe6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.983 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.983 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:27 localhost nova_compute[280939]: 2025-11-23 09:58:27.984 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:28 localhost systemd[1]: Created slice User Slice of UID 42436. Nov 23 04:58:28 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... 
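
The "Acquiring lock ... / Lock ... acquired ... waited Ns / Lock ... released ... held Ns" triples above (compute_resources, vgpu_resources, refresh_cache-<uuid>) are emitted by oslo.concurrency's lockutils. A minimal sketch of the two usage forms that produce them, assuming standard oslo.concurrency APIs; the function names below are placeholders, not nova's real methods:

# Sketch of the locking pattern behind the lockutils DEBUG entries above.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def claim_resources():
    # Runs only while the in-process 'compute_resources' lock is held; the
    # wrapper logs the acquired/waited/held timings seen in the journal.
    pass

def refresh_network_cache(instance_uuid):
    # The refresh_cache-<uuid> acquire/release pairs come from the
    # context-manager form of the same module.
    with lockutils.lock('refresh_cache-%s' % instance_uuid):
        pass

claim_resources()
refresh_network_cache('1148b5a9-4da9-491f-8952-80c4a965fe6b')
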
Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.027 280943 DEBUG nova.network.neutron [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Nov 23 04:58:28 localhost systemd-logind[760]: New session 72 of user nova. Nov 23 04:58:28 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. Nov 23 04:58:28 localhost systemd[1]: Starting User Manager for UID 42436... Nov 23 04:58:28 localhost systemd[309810]: Queued start job for default target Main User Target. Nov 23 04:58:28 localhost systemd[309810]: Created slice User Application Slice. Nov 23 04:58:28 localhost systemd[309810]: Started Mark boot as successful after the user session has run 2 minutes. Nov 23 04:58:28 localhost systemd[309810]: Started Daily Cleanup of User's Temporary Directories. Nov 23 04:58:28 localhost systemd[309810]: Reached target Paths. Nov 23 04:58:28 localhost systemd[309810]: Reached target Timers. Nov 23 04:58:28 localhost systemd[309810]: Starting D-Bus User Message Bus Socket... Nov 23 04:58:28 localhost systemd[309810]: Starting Create User's Volatile Files and Directories... Nov 23 04:58:28 localhost systemd[309810]: Listening on D-Bus User Message Bus Socket. Nov 23 04:58:28 localhost systemd[309810]: Reached target Sockets. Nov 23 04:58:28 localhost systemd[309810]: Finished Create User's Volatile Files and Directories. Nov 23 04:58:28 localhost systemd[309810]: Reached target Basic System. Nov 23 04:58:28 localhost systemd[309810]: Reached target Main User Target. Nov 23 04:58:28 localhost systemd[309810]: Startup finished in 154ms. Nov 23 04:58:28 localhost systemd[1]: Started User Manager for UID 42436. Nov 23 04:58:28 localhost systemd[1]: Started Session 72 of User nova. 
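
Two operations are interleaved through this stretch of the journal: the pre-live-migration of instance 76d6f171 (req-d94990a3-...) and the build of instance 1148b5a9 (req-dc62ce45-...). When reading such interleaved output it helps to filter on the oslo.log request id. The small parser below is keyed to the line format visible here; the regex and helper are mine, not part of any OpenStack library:

# Illustrative filter: follow one request id through oslo.log-formatted entries.
import re
import sys

OSLO = re.compile(
    r'(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ '
    r'(?P<level>[A-Z]+) (?P<logger>\S+) \[\S* (?P<req>req-[0-9a-f-]+)')

def follow(lines, request_id):
    for line in lines:
        m = OSLO.search(line)
        if m and m.group('req') == request_id:
            yield m.group('ts'), m.group('level'), m.group('logger')

if __name__ == '__main__':
    # e.g. pipe journal output for the nova_compute container into this script
    for ts, level, logger in follow(
            sys.stdin, 'req-dc62ce45-8668-47e6-9d5e-2f0b1764537e'):
        print(ts, level, logger)
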
Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.381 280943 DEBUG nova.network.neutron [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Updating instance_info_cache with network_info: [{"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:58:28 localhost systemd[1]: Started libvirt secret daemon. Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.411 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Releasing lock "refresh_cache-1148b5a9-4da9-491f-8952-80c4a965fe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.411 280943 DEBUG nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Instance network_info: |[{"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 
09:58:28.416 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Start _get_guest_xml network_info=[{"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T09:56:45Z,direct_url=,disk_format='qcow2',id=c5806483-57a8-4254-b41b-254b888c8606,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1915d3e5d4254231a0517e2dcf35848f',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-11-23T09:56:47Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'device_type': 'disk', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'size': 0, 'image_id': 'c5806483-57a8-4254-b41b-254b888c8606'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.422 280943 WARNING nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.425 280943 DEBUG nova.virt.libvirt.host [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Searching host: 'np0005532584.localdomain' for CPU controller through CGroups V1... 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.426 280943 DEBUG nova.virt.libvirt.host [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.428 280943 DEBUG nova.virt.libvirt.host [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Searching host: 'np0005532584.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.429 280943 DEBUG nova.virt.libvirt.host [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.430 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.430 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T09:56:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43b374b4-75d9-47f9-aa6b-ddb1a45f7c04',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T09:56:45Z,direct_url=,disk_format='qcow2',id=c5806483-57a8-4254-b41b-254b888c8606,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1915d3e5d4254231a0517e2dcf35848f',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-11-23T09:56:47Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.431 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.431 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m 
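
Before generating the guest XML, the driver probes the host for a usable CPU controller: the CGroups V1 check fails ("CPU controller missing on host") and the CGroups V2 check succeeds ("CPU controller found on host"). A rough approximation of the V2 probe, assuming the standard unified-hierarchy layout; nova's actual implementation lives in nova/virt/libvirt/host.py and this is only a sketch:

# Sketch: detect a cgroup v2 CPU controller via the unified hierarchy.
def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
    try:
        with open(path) as f:
            controllers = f.read().split()
    except FileNotFoundError:
        # No unified hierarchy mounted -> no cgroup v2 CPU controller.
        return False
    return 'cpu' in controllers

print(has_cgroupsv2_cpu_controller())
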
Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.432 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.432 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.433 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.433 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.434 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.434 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.435 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.435 280943 DEBUG nova.virt.hardware [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.442 280943 DEBUG nova.privsep.utils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Nov 23 04:58:28 localhost 
nova_compute[280939]: 2025-11-23 09:58:28.442 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:28 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Nov 23 04:58:28 localhost kernel: device tap27d340a7-60 entered promiscuous mode Nov 23 04:58:28 localhost NetworkManager[5966]: [1763891908.4984] manager: (tap27d340a7-60): new Tun device (/org/freedesktop/NetworkManager/Devices/17) Nov 23 04:58:28 localhost ovn_controller[153771]: 2025-11-23T09:58:28Z|00047|binding|INFO|Claiming lport 27d340a7-60a4-4a73-9f16-bae5ab3411da for this additional chassis. Nov 23 04:58:28 localhost ovn_controller[153771]: 2025-11-23T09:58:28Z|00048|binding|INFO|27d340a7-60a4-4a73-9f16-bae5ab3411da: Claiming fa:16:3e:fe:c3:5c 10.100.0.7 Nov 23 04:58:28 localhost ovn_controller[153771]: 2025-11-23T09:58:28Z|00049|binding|INFO|Claiming lport b779be61-5809-44a6-8395-bfdf8254b4cc for this additional chassis. Nov 23 04:58:28 localhost ovn_controller[153771]: 2025-11-23T09:58:28Z|00050|binding|INFO|b779be61-5809-44a6-8395-bfdf8254b4cc: Claiming fa:16:3e:e3:5d:7d 19.80.0.7 Nov 23 04:58:28 localhost systemd-udevd[309860]: Network interface NamePolicy= disabled on kernel command line. Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.510 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:28 localhost NetworkManager[5966]: [1763891908.5223] device (tap27d340a7-60): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Nov 23 04:58:28 localhost NetworkManager[5966]: [1763891908.5245] device (tap27d340a7-60): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Nov 23 04:58:28 localhost ovn_controller[153771]: 2025-11-23T09:58:28Z|00051|binding|INFO|Setting lport 27d340a7-60a4-4a73-9f16-bae5ab3411da ovn-installed in OVS Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.544 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:28 localhost systemd-machined[202731]: New machine qemu-1-instance-00000006. Nov 23 04:58:28 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000006. Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.613 280943 DEBUG nova.compute.manager [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Received event network-changed-a1846659-6b91-4156-9939-085b30454143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.615 280943 DEBUG nova.compute.manager [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Refreshing instance network info cache due to event network-changed-a1846659-6b91-4156-9939-085b30454143. 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.615 280943 DEBUG oslo_concurrency.lockutils [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "refresh_cache-1148b5a9-4da9-491f-8952-80c4a965fe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.616 280943 DEBUG oslo_concurrency.lockutils [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquired lock "refresh_cache-1148b5a9-4da9-491f-8952-80c4a965fe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.616 280943 DEBUG nova.network.neutron [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Refreshing network info cache for port a1846659-6b91-4156-9939-085b30454143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.871 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.872 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] VM Started (Lifecycle Event)#033[00m Nov 23 04:58:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 04:58:28 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2236451100' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.891 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.929 280943 DEBUG nova.storage.rbd_utils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] rbd image 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.934 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:28 localhost nova_compute[280939]: 2025-11-23 09:58:28.955 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.079 280943 DEBUG nova.network.neutron [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Updated VIF entry in instance network info cache for port a1846659-6b91-4156-9939-085b30454143. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.080 280943 DEBUG nova.network.neutron [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Updating instance_info_cache with network_info: [{"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.095 280943 DEBUG oslo_concurrency.lockutils [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Releasing lock "refresh_cache-1148b5a9-4da9-491f-8952-80c4a965fe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:58:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 238 MiB data, 838 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 106 op/s Nov 23 04:58:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 04:58:29 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2868171599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.331 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.332 280943 DEBUG nova.virt.libvirt.vif [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T09:58:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005532584.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=8,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7mkCBPEi7Dn/CBb8dKZmrfYWwMHpR6NvmRrgxeBvUuyX/aX8ONpvOK4sr/zvPyTz4T6NWXcMIu46JjJEnGSD+WDnEZHOWGkiVTo1TEgHUJg/fGAuwlF+wJ6Nu4MyBm5w==',key_name='tempest-keypair-974278285',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005532584.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532584.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0497de4959b2494e8036eb39226430d6',ramdisk_id='',reservation_id='r-cm4mi548',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1110187156',owner_user_name='tempest-ServersV294TestFqdnHostnames-1110187156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T09:58:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='492e2909a77a4032ab6c29a26d12fb14',uuid=1148b5a9-4da9-491f-8952-80c4a965fe6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.333 280943 DEBUG nova.network.os_vif_util [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Converting VIF {"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.334 280943 DEBUG nova.network.os_vif_util [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:90:40,bridge_name='br-int',has_traffic_filtering=True,id=a1846659-6b91-4156-9939-085b30454143,network=Network(c5d88dfa-0db8-489e-a45a-e843e31a3b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1846659-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.336 280943 DEBUG nova.objects.instance [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1148b5a9-4da9-491f-8952-80c4a965fe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.356 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 
1148b5a9-4da9-491f-8952-80c4a965fe6b] End _get_guest_xml xml= [libvirt domain XML omitted: the markup was stripped when this log was captured, leaving only scattered element values across repeated "Nov 23 04:58:29 localhost nova_compute[280939]:" prefixes. Recoverable fragments: uuid 1148b5a9-4da9-491f-8952-80c4a965fe6b, name instance-00000008, memory 131072, vcpus 1, nova display name guest-instance-1, creation time 2025-11-23 09:58:28, flavor values 128/1/0/0/1, owner tempest-ServersV294TestFqdnHostnames-1110187156-project-member, project tempest-ServersV294TestFqdnHostnames-1110187156, sysinfo RDO / OpenStack Compute / 27.5.2-0.20250829104910.6f8decf.el9, "Virtual Machine", os type hvm, RNG backend /dev/urandom.] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.358 280943 DEBUG nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Preparing to wait for external event
network-vif-plugged-a1846659-6b91-4156-9939-085b30454143 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.359 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Acquiring lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.360 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.360 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.362 280943 DEBUG nova.virt.libvirt.vif [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T09:58:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005532584.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=8,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7mkCBPEi7Dn/CBb8dKZmrfYWwMHpR6NvmRrgxeBvUuyX/aX8ONpvOK4sr/zvPyTz4T6NWXcMIu46JjJEnGSD+WDnEZHOWGkiVTo1TEgHUJg/fGAuwlF+wJ6Nu4MyBm5w==',key_name='tempest-keypair-974278285',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005532584.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532584.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0497de4959b2494e8036eb39226430d6',ramdisk_id='',reservation_id='r-cm4mi548',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1110187156',owner_user_name='tempest-ServersV294TestFqdnHostnames-1110187156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T09:58:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='492e2909a77a4032ab6c29a26d12fb14',uuid=1148b5a9-4da9-491f-8952-80c4a965fe6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.362 280943 DEBUG nova.network.os_vif_util [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Converting VIF {"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, 
"dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.364 280943 DEBUG nova.network.os_vif_util [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:90:40,bridge_name='br-int',has_traffic_filtering=True,id=a1846659-6b91-4156-9939-085b30454143,network=Network(c5d88dfa-0db8-489e-a45a-e843e31a3b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1846659-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.364 280943 DEBUG os_vif [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:90:40,bridge_name='br-int',has_traffic_filtering=True,id=a1846659-6b91-4156-9939-085b30454143,network=Network(c5d88dfa-0db8-489e-a45a-e843e31a3b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1846659-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.366 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.366 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.367 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.372 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.372 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1846659-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.373 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1846659-6b, col_values=(('external_ids', {'iface-id': 
'a1846659-6b91-4156-9939-085b30454143', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:90:40', 'vm-uuid': '1148b5a9-4da9-491f-8952-80c4a965fe6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.413 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.417 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.419 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.420 280943 INFO os_vif [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:90:40,bridge_name='br-int',has_traffic_filtering=True,id=a1846659-6b91-4156-9939-085b30454143,network=Network(c5d88dfa-0db8-489e-a45a-e843e31a3b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1846659-6b')#033[00m Nov 23 04:58:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.499 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.500 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] No BDM found with device name sda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.500 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] No VIF found with MAC fa:16:3e:da:90:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.501 280943 INFO nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Using config drive#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.539 280943 DEBUG nova.storage.rbd_utils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] rbd image 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:29 localhost systemd[1]: tmp-crun.dmPem1.mount: Deactivated successfully. Nov 23 04:58:29 localhost podman[309979]: 2025-11-23 09:58:29.565147742 +0000 UTC m=+0.104465219 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, release=1755695350, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=edpm, com.redhat.component=ubi9-minimal-container) Nov 23 04:58:29 localhost podman[309979]: 2025-11-23 09:58:29.610487294 +0000 UTC m=+0.149804761 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:58:29 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
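
The os_vif plug sequence logged above (AddBridgeCommand on br-int, AddPortCommand for tapa1846659-6b, then a DbSetCommand writing the Neutron port ID, MAC and instance UUID into the Interface's external_ids) can be reproduced with the same ovsdbapp library nova-compute is using here. A minimal sketch, assuming a local OVSDB socket at unix:/var/run/openvswitch/db.sock; the socket path and timeout are assumptions, while the bridge, port, MAC and IDs are copied from the log:

    # Sketch only: mirrors the idempotent AddBridge/AddPort/DbSet transactions above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB_CONN = 'unix:/var/run/openvswitch/db.sock'  # assumed socket location

    idl = connection.OvsdbIdl.from_server(OVSDB_CONN, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        'iface-id': 'a1846659-6b91-4156-9939-085b30454143',  # Neutron port ID
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:da:90:40',
        'vm-uuid': '1148b5a9-4da9-491f-8952-80c4a965fe6b',
    }

    # may_exist=True makes the bridge/port commands no-ops when they already
    # exist, which is why the first transaction above logs "caused no change".
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapa1846659-6b', may_exist=True))
        txn.add(api.db_set('Interface', 'tapa1846659-6b',
                           ('external_ids', external_ids)))

It is this iface-id value that lets ovn-controller match the OVS interface to the logical switch port, which is what the binding|INFO "Claiming lport" lines that follow report.
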
Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.724 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.725 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] VM Resumed (Lifecycle Event)#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.733 280943 INFO nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Creating config drive at /var/lib/nova/instances/1148b5a9-4da9-491f-8952-80c4a965fe6b/disk.config#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.742 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1148b5a9-4da9-491f-8952-80c4a965fe6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn3r4d9r1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.760 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.765 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.781 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] During the sync_power process the instance has moved from host np0005532586.localdomain to host np0005532584.localdomain#033[00m Nov 23 04:58:29 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:29.792 2 INFO neutron.agent.securitygroups_rpc [None req-2d14bee2-d335-42d8-9b8c-ccacbe55654b 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Security group member updated ['a3350144-9b09-432b-a32e-ef84bb8bf494']#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.870 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1148b5a9-4da9-491f-8952-80c4a965fe6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn3r4d9r1" returned: 0 in 0.128s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.932 280943 DEBUG nova.storage.rbd_utils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] rbd image 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:29 localhost nova_compute[280939]: 2025-11-23 09:58:29.938 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1148b5a9-4da9-491f-8952-80c4a965fe6b/disk.config 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:30 localhost systemd[1]: session-72.scope: Deactivated successfully. Nov 23 04:58:30 localhost systemd-logind[760]: Session 72 logged out. Waiting for processes to exit. Nov 23 04:58:30 localhost systemd-logind[760]: Removed session 72. Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.297 280943 DEBUG oslo_concurrency.processutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1148b5a9-4da9-491f-8952-80c4a965fe6b/disk.config 1148b5a9-4da9-491f-8952-80c4a965fe6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.298 280943 INFO nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Deleting local config drive /var/lib/nova/instances/1148b5a9-4da9-491f-8952-80c4a965fe6b/disk.config because it was imported into RBD.#033[00m Nov 23 04:58:30 localhost kernel: device tapa1846659-6b entered promiscuous mode Nov 23 04:58:30 localhost NetworkManager[5966]: [1763891910.3513] manager: (tapa1846659-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/18) Nov 23 04:58:30 localhost systemd-udevd[309861]: Network interface NamePolicy= disabled on kernel command line. Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.355 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:30 localhost ovn_controller[153771]: 2025-11-23T09:58:30Z|00052|binding|INFO|Claiming lport a1846659-6b91-4156-9939-085b30454143 for this chassis. 
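
The config-drive path in the entries above is: build an ISO9660 image locally with mkisofs, rbd-import it into the Ceph vms pool as <uuid>_disk.config, then delete the local file. A rough equivalent using oslo.concurrency's processutils (the same helper nova is calling through); the temporary metadata directory name is taken from the log and would normally be a freshly created tempdir:

    # Sketch of the config-drive handoff shown above (mkisofs -> rbd import -> cleanup).
    import os
    from oslo_concurrency import processutils

    instance = '1148b5a9-4da9-491f-8952-80c4a965fe6b'
    iso = '/var/lib/nova/instances/%s/disk.config' % instance
    metadata_dir = '/tmp/tmpn3r4d9r1'  # from the log; normally a fresh tempdir

    # Build the ISO with the config-2 volume label that cloud-init looks for.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher',
        'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', metadata_dir)

    # Import into RBD so the config drive lives alongside the instance's disks.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        '%s_disk.config' % instance, '--image-format=2',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    # "Deleting local config drive ... because it was imported into RBD."
    os.unlink(iso)
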
Nov 23 04:58:30 localhost ovn_controller[153771]: 2025-11-23T09:58:30Z|00053|binding|INFO|a1846659-6b91-4156-9939-085b30454143: Claiming fa:16:3e:da:90:40 10.100.0.12 Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.368 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:30 localhost NetworkManager[5966]: [1763891910.3719] device (tapa1846659-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Nov 23 04:58:30 localhost NetworkManager[5966]: [1763891910.3738] device (tapa1846659-6b): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.384 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:90:40 10.100.0.12'], port_security=['fa:16:3e:da:90:40 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1148b5a9-4da9-491f-8952-80c4a965fe6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5d88dfa-0db8-489e-a45a-e843e31a3b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0497de4959b2494e8036eb39226430d6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2da1104f-77c5-475e-b21f-e52710edc8b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=54e00d1b-ba48-40e5-8228-7e38f918fa79, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=a1846659-6b91-4156-9939-085b30454143) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.386 159415 INFO neutron.agent.ovn.metadata.agent [-] Port a1846659-6b91-4156-9939-085b30454143 in datapath c5d88dfa-0db8-489e-a45a-e843e31a3b26 bound to our chassis#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.389 159415 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5d88dfa-0db8-489e-a45a-e843e31a3b26#033[00m Nov 23 04:58:30 localhost systemd-machined[202731]: New machine qemu-2-instance-00000008. Nov 23 04:58:30 localhost ovn_controller[153771]: 2025-11-23T09:58:30Z|00054|binding|INFO|Setting lport a1846659-6b91-4156-9939-085b30454143 ovn-installed in OVS Nov 23 04:58:30 localhost ovn_controller[153771]: 2025-11-23T09:58:30Z|00055|binding|INFO|Setting lport a1846659-6b91-4156-9939-085b30454143 up in Southbound Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.408 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:30 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000008. 
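
The binding lines above show ovn-controller claiming logical port a1846659-6b91-4156-9939-085b30454143 for this chassis, marking it ovn-installed in OVS and then up in the Southbound DB, after which the metadata agent provisions metadata for network c5d88dfa-0db8-489e-a45a-e843e31a3b26. That state can be read back from the Southbound Port_Binding table with ovsdbapp; a sketch under the assumption that the SB database is reachable on a local socket (in many deployments it is a remote tcp/ssl endpoint instead):

    # Sketch: read back the Port_Binding row the binding|INFO lines above refer to.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB_CONN = 'unix:/var/run/ovn/ovnsb_db.sock'  # assumed SB socket location

    idl = connection.OvsdbIdl.from_server(SB_CONN, 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    rows = sb.db_find(
        'Port_Binding',
        ('logical_port', '=', 'a1846659-6b91-4156-9939-085b30454143'),
    ).execute(check_error=True)

    for row in rows:
        # chassis is filled in when ovn-controller claims the lport; up flips to
        # True once the binding is set "up in Southbound" as logged above.
        print(row['logical_port'], row['mac'], row['chassis'], row['up'])
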
Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.727 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.728 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] VM Started (Lifecycle Event)#033[00m Nov 23 04:58:30 localhost ovn_controller[153771]: 2025-11-23T09:58:30Z|00056|binding|INFO|Claiming lport 27d340a7-60a4-4a73-9f16-bae5ab3411da for this chassis. Nov 23 04:58:30 localhost ovn_controller[153771]: 2025-11-23T09:58:30Z|00057|binding|INFO|27d340a7-60a4-4a73-9f16-bae5ab3411da: Claiming fa:16:3e:fe:c3:5c 10.100.0.7 Nov 23 04:58:30 localhost ovn_controller[153771]: 2025-11-23T09:58:30Z|00058|binding|INFO|Claiming lport b779be61-5809-44a6-8395-bfdf8254b4cc for this chassis. Nov 23 04:58:30 localhost ovn_controller[153771]: 2025-11-23T09:58:30Z|00059|binding|INFO|b779be61-5809-44a6-8395-bfdf8254b4cc: Claiming fa:16:3e:e3:5d:7d 19.80.0.7 Nov 23 04:58:30 localhost ovn_controller[153771]: 2025-11-23T09:58:30Z|00060|binding|INFO|Setting lport 27d340a7-60a4-4a73-9f16-bae5ab3411da up in Southbound Nov 23 04:58:30 localhost ovn_controller[153771]: 2025-11-23T09:58:30Z|00061|binding|INFO|Setting lport b779be61-5809-44a6-8395-bfdf8254b4cc up in Southbound Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.741 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:5d:7d 19.80.0.7'], port_security=['fa:16:3e:e3:5d:7d 19.80.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['27d340a7-60a4-4a73-9f16-bae5ab3411da'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-711090127', 'neutron:cidrs': '19.80.0.7/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-711090127', 'neutron:project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff44a28d-1e1f-4163-b206-fdf77022bf0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0e3b2035-d1e3-4dc9-824d-c8c5d8c83090, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=b779be61-5809-44a6-8395-bfdf8254b4cc) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.744 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c3:5c 10.100.0.7'], port_security=['fa:16:3e:fe:c3:5c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], 
requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-2092561411', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '76d6f171-13c9-4730-8ed3-ab467ef6831a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81348c6d-951a-4399-8703-476056b57fe9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-2092561411', 'neutron:project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'ff44a28d-1e1f-4163-b206-fdf77022bf0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1897b64f-0c37-45be-8353-f858f64309cd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=27d340a7-60a4-4a73-9f16-bae5ab3411da) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.757 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.762 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.763 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] VM Paused (Lifecycle Event)#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.781 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[d17e7400-d3ee-44a5-97c4-334fa3f11a50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.782 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc5d88dfa-01 in ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.783 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.784 308301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc5d88dfa-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.784 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[49ef2051-df0e-4e73-944d-89adde8b9aa1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.785 308301 DEBUG oslo.privsep.daemon [-] privsep: 
reply[b120fd83-6674-4477-9827-f07ba8f51b1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.792 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 23 04:58:30 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:30.796 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-d94990a3-adc9-4b79-97fb-3249301b5664 req-34a40a99-6f48-4d57-9a4c-273e1720f62f 73d8249924dd406db12ad13a4ddb31a1 758f3043280349e086a85b86f2668848 - - default default] This port is not SRIOV, skip binding for port 27d340a7-60a4-4a73-9f16-bae5ab3411da.#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.810 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.812 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[88aab2ea-380f-4b72-84cc-a8082adfd657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.822 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[689722b3-4dec-4bb2-afc4-d657ca1f15a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:30.824 159415 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpdxmp3ww1/privsep.sock']#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.899 280943 INFO nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Post operation of migration started#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.995 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquiring lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.996 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquired lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:58:30 localhost nova_compute[280939]: 2025-11-23 09:58:30.996 280943 DEBUG nova.network.neutron [None 
req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Nov 23 04:58:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 238 MiB data, 838 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 69 op/s Nov 23 04:58:31 localhost nova_compute[280939]: 2025-11-23 09:58:31.382 280943 DEBUG nova.network.neutron [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Updating instance_info_cache with network_info: [{"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:58:31 localhost nova_compute[280939]: 2025-11-23 09:58:31.406 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Releasing lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:58:31 localhost nova_compute[280939]: 2025-11-23 09:58:31.420 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:31 localhost nova_compute[280939]: 2025-11-23 09:58:31.420 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:31 localhost nova_compute[280939]: 2025-11-23 09:58:31.420 280943 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 
d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:31 localhost nova_compute[280939]: 2025-11-23 09:58:31.424 280943 INFO nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m Nov 23 04:58:31 localhost journal[229251]: Domain id=1 name='instance-00000006' uuid=76d6f171-13c9-4730-8ed3-ab467ef6831a is tainted: custom-monitor Nov 23 04:58:31 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:31.439 159415 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Nov 23 04:58:31 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:31.440 159415 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpdxmp3ww1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Nov 23 04:58:31 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:31.330 310132 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 23 04:58:31 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:31.336 310132 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 23 04:58:31 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:31.340 310132 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Nov 23 04:58:31 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:31.340 310132 INFO oslo.privsep.daemon [-] privsep daemon running as pid 310132#033[00m Nov 23 04:58:31 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:31.443 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[ab03ebea-1a76-438e-b102-768f5e201ad9]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:31 localhost nova_compute[280939]: 2025-11-23 09:58:31.461 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:31 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:31.852 310132 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:31 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:31.852 310132 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:31 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:31.852 310132 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.380 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[920d38e7-aaa5-4d93-b69c-eaf36e16bda2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 
04:58:32 localhost NetworkManager[5966]: [1763891912.4045] manager: (tapc5d88dfa-00): new Veth device (/org/freedesktop/NetworkManager/Devices/19) Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.403 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[16701e2a-d799-4ee0-a215-81a71f35800e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.427 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[57ea83e3-80f1-4ca5-9982-aca059d9482e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.429 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[82497fe6-1ae6-4a9e-af0a-63aabca67132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.431 280943 INFO nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m Nov 23 04:58:32 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapc5d88dfa-01: link becomes ready Nov 23 04:58:32 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapc5d88dfa-00: link becomes ready Nov 23 04:58:32 localhost NetworkManager[5966]: [1763891912.4481] device (tapc5d88dfa-00): carrier: link connected Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.451 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[dd30f251-6595-45df-b1a7-dc56c31c3149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.471 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[b115819c-b8d1-4c93-bdcb-831bfd0d7d22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5d88dfa-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:c0:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], 
['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1184885, 'reachable_time': 15498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310159, 'error': None, 'target': 'ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.490 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[b7136141-e1f4-4c75-8986-6cd089d79b60]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': 
[['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:c05d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1184885, 'tstamp': 1184885}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310160, 'error': None, 'target': 'ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.515 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[301a9055-b817-47c1-9d2a-b05b726658ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5d88dfa-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:c0:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1184885, 'reachable_time': 15498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 
'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310161, 'error': None, 'target': 'ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.547 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[97e9303e-c336-43d2-8c4f-066be814bd7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.602 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[10dddb2a-2755-4e3f-b073-0b559259a91c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.604 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5d88dfa-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.604 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.606 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5d88dfa-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.609 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:32 localhost kernel: device tapc5d88dfa-00 entered promiscuous mode Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.613 280943 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.615 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5d88dfa-00, col_values=(('external_ids', {'iface-id': 'a8a61203-fe2e-4005-bcf2-6150709eadea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.616 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:32 localhost ovn_controller[153771]: 2025-11-23T09:58:32Z|00062|binding|INFO|Releasing lport a8a61203-fe2e-4005-bcf2-6150709eadea from this chassis (sb_readonly=0) Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.626 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.629 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.630 159415 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c5d88dfa-0db8-489e-a45a-e843e31a3b26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c5d88dfa-0db8-489e-a45a-e843e31a3b26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.631 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[02767bb9-bb95-42ab-a7c8-1f2d8e56a79f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.633 159415 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: global Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: log /dev/log local0 debug Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: log-tag haproxy-metadata-proxy-c5d88dfa-0db8-489e-a45a-e843e31a3b26 Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: user root Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: group root Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: maxconn 1024 Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: pidfile /var/lib/neutron/external/pids/c5d88dfa-0db8-489e-a45a-e843e31a3b26.pid.haproxy Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: daemon Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: defaults Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: log global Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: mode http Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: option httplog Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: option dontlognull Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: option http-server-close Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: option forwardfor Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: retries 3 Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: timeout http-request 30s Nov 23 04:58:32 
localhost ovn_metadata_agent[159410]: timeout connect 30s Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: timeout client 32s Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: timeout server 32s Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: timeout http-keep-alive 30s Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: listen listener Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: bind 169.254.169.254:80 Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: server metadata /var/lib/neutron/metadata_proxy Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: http-request add-header X-OVN-Network-ID c5d88dfa-0db8-489e-a45a-e843e31a3b26 Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Nov 23 04:58:32 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:32.634 159415 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26', 'env', 'PROCESS_TAG=haproxy-c5d88dfa-0db8-489e-a45a-e843e31a3b26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c5d88dfa-0db8-489e-a45a-e843e31a3b26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.715 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Acquiring lock "8f62292f-5719-4b19-9188-3715b94493a7" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.715 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.734 280943 DEBUG nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Nov 23 04:58:32 localhost ovn_controller[153771]: 2025-11-23T09:58:32Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:c3:5c 10.100.0.7 Nov 23 04:58:32 localhost ovn_controller[153771]: 2025-11-23T09:58:32Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:c3:5c 10.100.0.7 Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.835 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.836 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.840 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.841 280943 INFO nova.compute.claims [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Claim successful on node np0005532584.localdomain#033[00m Nov 23 04:58:32 localhost nova_compute[280939]: 2025-11-23 09:58:32.979 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:33 localhost podman[310194]: Nov 23 04:58:33 localhost podman[310194]: 2025-11-23 09:58:33.066685707 +0000 UTC m=+0.087298021 container create 2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:58:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:58:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 04:58:33 localhost systemd[1]: Started libpod-conmon-2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc.scope. Nov 23 04:58:33 localhost podman[310194]: 2025-11-23 09:58:33.024139301 +0000 UTC m=+0.044751655 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 23 04:58:33 localhost systemd[1]: Started libcrun container. Nov 23 04:58:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bb9ab892f16eabbcae1078842070edcaf0dbe52499824bd79be78f3af4a965/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:58:33 localhost podman[310194]: 2025-11-23 09:58:33.148803289 +0000 UTC m=+0.169415583 container init 2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:33 localhost podman[310194]: 2025-11-23 09:58:33.160862969 +0000 UTC m=+0.181475253 container start 2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 04:58:33 localhost neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26[310237]: [NOTICE] (310255) : New worker (310270) forked Nov 23 04:58:33 localhost neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26[310237]: [NOTICE] (310255) : Loading success. 
Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.209 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b779be61-5809-44a6-8395-bfdf8254b4cc in datapath 8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 unbound from our chassis#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.212 159415 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8cd987c4-7e4e-467f-9ee2-d70cb75b87c3#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.221 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[1e71a026-e8db-449f-9e70-6d92017b2f7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.222 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8cd987c4-71 in ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.223 308301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8cd987c4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.224 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[60c7d4a1-fc27-4529-8e65-665711e6f170]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.225 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[a34d80a8-a25e-40f1-a48d-81a9eac0faea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost podman[310210]: 2025-11-23 09:58:33.23318085 +0000 UTC m=+0.129682013 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:58:33 localhost podman[310210]: 2025-11-23 09:58:33.243160716 +0000 UTC m=+0.139661929 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 
'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.244 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[b3eb9a7f-c043-4b06-ab69-38f918d79da6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.258 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[475cb85d-16d4-41ac-907f-8df83d42b49f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.287 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[9399bbc7-1d4b-44b7-b8e4-b9a6ddd2de04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost systemd-udevd[310150]: Network interface NamePolicy= disabled on kernel command line. Nov 23 04:58:33 localhost NetworkManager[5966]: [1763891913.2965] manager: (tap8cd987c4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/20) Nov 23 04:58:33 localhost podman[310208]: 2025-11-23 09:58:33.297109783 +0000 UTC m=+0.193275865 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 
2025-11-23 09:58:33.299 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[1f31a5f1-314f-4876-aede-504bedb9335b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v103: 177 pgs: 177 active+clean; 238 MiB data, 838 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 69 op/s Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.321 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b2148a-1ae1-415c-8b62-ad3cbfcced09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.324 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce395af-3fb1-4765-a33d-bc842886123b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost podman[310208]: 2025-11-23 09:58:33.338398 +0000 UTC m=+0.234564072 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm) Nov 23 04:58:33 localhost NetworkManager[5966]: [1763891913.3437] device (tap8cd987c4-70): carrier: link connected Nov 23 04:58:33 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap8cd987c4-70: link becomes ready Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.347 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[ca889229-9b1d-4dd3-9747-0c1902a8886c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.362 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[0368031b-40e8-408b-ba1b-45c585fbc98f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8cd987c4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:fe:e5:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1184975, 'reachable_time': 25596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310331, 'error': None, 'target': 'ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.376 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[1eaa5768-922e-481d-9a4d-23cefbfa5793]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:e5b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1184975, 'tstamp': 1184975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310332, 'error': None, 'target': 'ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.389 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f7c31b-eab6-4790-be1d-4cff9f52f150]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8cd987c4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:fe:e5:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 
'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1184975, 'reachable_time': 25596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310333, 'error': None, 'target': 'ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.414 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[c665b472-4c4c-4ddb-a6b7-7a4a9980db7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 
09:58:33.439 280943 INFO nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.449 280943 DEBUG nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0. Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.456340) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43 Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913456379, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2210, "num_deletes": 251, "total_data_size": 2473230, "memory_usage": 2519520, "flush_reason": "Manual Compaction"} Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.461 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.466 280943 DEBUG nova.compute.provider_tree [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913468466, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 2394639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23383, "largest_seqno": 25592, "table_properties": {"data_size": 2385727, "index_size": 5545, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19315, "raw_average_key_size": 21, "raw_value_size": 2367399, "raw_average_value_size": 
2576, "num_data_blocks": 239, "num_entries": 919, "num_filter_entries": 919, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891729, "oldest_key_time": 1763891729, "file_creation_time": 1763891913, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}} Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 12510 microseconds, and 6443 cpu microseconds. Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.468849) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 2394639 bytes OK Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.468971) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.470877) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.470898) EVENT_LOG_v1 {"time_micros": 1763891913470891, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.470918) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2464025, prev total WAL file size 2464025, number of live WAL files 2. Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.472326) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. 
'7061786F73003132303439' seq:0, type:0; will stop at (end) Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(2338KB)], [42(15MB)] Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913472374, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 18833501, "oldest_snapshot_seqno": -1} Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.474 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[95dede1b-af0e-48d8-b0cb-51b248472a4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.475 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cd987c4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.476 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.476 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8cd987c4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.515 280943 DEBUG nova.objects.instance [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.523 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:33 localhost kernel: device tap8cd987c4-70 entered promiscuous mode Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.528 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8cd987c4-70, col_values=(('external_ids', {'iface-id': '6df03061-a46e-4f2d-b42f-4f149f759e31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:33 localhost ovn_controller[153771]: 2025-11-23T09:58:33Z|00063|binding|INFO|Releasing lport 6df03061-a46e-4f2d-b42f-4f149f759e31 from this chassis (sb_readonly=0) Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.528 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.541 159415 DEBUG 
neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8cd987c4-7e4e-467f-9ee2-d70cb75b87c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8cd987c4-7e4e-467f-9ee2-d70cb75b87c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.542 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[28c4435a-f923-4987-b09a-e1463cf42b56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.543 159415 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: global Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: log /dev/log local0 debug Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: log-tag haproxy-metadata-proxy-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: user root Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: group root Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: maxconn 1024 Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: pidfile /var/lib/neutron/external/pids/8cd987c4-7e4e-467f-9ee2-d70cb75b87c3.pid.haproxy Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: daemon Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: defaults Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: log global Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: mode http Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: option httplog Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: option dontlognull Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: option http-server-close Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: option forwardfor Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: retries 3 Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: timeout http-request 30s Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: timeout connect 30s Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: timeout client 32s Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: timeout server 32s Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: timeout http-keep-alive 30s Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: listen listener Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: bind 169.254.169.254:80 Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: server metadata /var/lib/neutron/metadata_proxy Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: http-request add-header X-OVN-Network-ID 8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Nov 23 04:58:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:33.543 159415 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'env', 'PROCESS_TAG=haproxy-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8cd987c4-7e4e-467f-9ee2-d70cb75b87c3.conf'] create_process 
/usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.546 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.551 280943 ERROR nova.scheduler.client.report [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [req-763e31d6-8b73-4d72-af8d-0abb11798c17] Failed to update inventory to [{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}}] for resource provider with UUID c90c5769-42ab-40e9-92fc-3d82b4e96052. Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict ", "code": "placement.concurrent_update", "request_id": "req-763e31d6-8b73-4d72-af8d-0abb11798c17"}]}#033[00m Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 12165 keys, 16882819 bytes, temperature: kUnknown Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913565241, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 16882819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16814894, "index_size": 36400, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 328822, "raw_average_key_size": 27, "raw_value_size": 16608747, "raw_average_value_size": 1365, "num_data_blocks": 1365, "num_entries": 12165, "num_filter_entries": 12165, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763891913, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
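The 409 from placement above ("placement.concurrent_update") is the expected optimistic-concurrency path: the inventory PUT carried a stale resource-provider generation, so the report client refreshes the provider and retries, which is what the subsequent "Refreshing inventories for resource provider ..." lines show. Below is a minimal sketch of that refresh-and-retry loop against the Placement HTTP API; the endpoint URL and the pre-authenticated requests.Session are assumptions for illustration, and this is not nova's report client itself.

    import requests

    PLACEMENT = "http://placement.example:8778"           # assumed endpoint
    HEADERS = {"OpenStack-API-Version": "placement 1.26"}

    def put_inventory(session, provider_uuid, inventories, retries=3):
        """PUT inventories with the current provider generation; on a 409
        concurrent_update, re-read the generation and try again."""
        url = f"{PLACEMENT}/resource_providers/{provider_uuid}/inventories"
        for _ in range(retries):
            current = session.get(url, headers=HEADERS).json()
            payload = {
                "resource_provider_generation": current["resource_provider_generation"],
                "inventories": inventories,
            }
            resp = session.put(url, json=payload, headers=HEADERS)
            if resp.status_code != 409:
                resp.raise_for_status()
                return resp.json()
            # 409: another writer bumped the generation between GET and PUT;
            # loop to pick up the new generation, as the refresh in the log does.
        raise RuntimeError("inventory update kept conflicting; giving up")

The inventories argument has the same shape as the dict in the log, e.g. {'MEMORY_MB': {'total': 15738, 'reserved': 512, ...}, 'VCPU': {...}, 'DISK_GB': {...}}.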
Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.565397) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 16882819 bytes Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.566987) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.7 rd, 181.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 15.7 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(14.9) write-amplify(7.1) OK, records in: 12697, records dropped: 532 output_compression: NoCompression Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.567015) EVENT_LOG_v1 {"time_micros": 1763891913567006, "job": 24, "event": "compaction_finished", "compaction_time_micros": 92913, "compaction_time_cpu_micros": 24987, "output_level": 6, "num_output_files": 1, "total_output_size": 16882819, "num_input_records": 12697, "num_output_records": 12165, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913567268, "job": 24, "event": "table_file_deletion", "file_number": 44} Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913568411, "job": 24, "event": "table_file_deletion", "file_number": 42} Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.472283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.568432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.568435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.568436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.568438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:58:33 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-09:58:33.568439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.593 280943 DEBUG nova.scheduler.client.report [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.619 280943 DEBUG nova.scheduler.client.report [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.620 280943 DEBUG nova.compute.provider_tree [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.650 280943 DEBUG nova.scheduler.client.report [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.675 280943 DEBUG nova.scheduler.client.report [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: 
COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.771 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.924 280943 DEBUG nova.compute.manager [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Received event network-vif-plugged-a1846659-6b91-4156-9939-085b30454143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.925 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.925 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.925 280943 DEBUG 
oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.925 280943 DEBUG nova.compute.manager [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Processing event network-vif-plugged-a1846659-6b91-4156-9939-085b30454143 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.925 280943 DEBUG nova.compute.manager [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Received event network-vif-plugged-a1846659-6b91-4156-9939-085b30454143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.925 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.926 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.926 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.926 280943 DEBUG nova.compute.manager [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] No waiting events found dispatching network-vif-plugged-a1846659-6b91-4156-9939-085b30454143 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.926 280943 WARNING nova.compute.manager 
[req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Received unexpected event network-vif-plugged-a1846659-6b91-4156-9939-085b30454143 for instance with vm_state building and task_state spawning.#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.926 280943 DEBUG nova.compute.manager [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.926 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.927 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.927 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.927 280943 DEBUG nova.compute.manager [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] No waiting events found dispatching network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.927 280943 WARNING nova.compute.manager [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received unexpected event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da for instance with vm_state active and task_state None.#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.927 280943 DEBUG nova.compute.manager [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e 
b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.927 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.927 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.928 280943 DEBUG oslo_concurrency.lockutils [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.928 280943 DEBUG nova.compute.manager [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] No waiting events found dispatching network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.928 280943 WARNING nova.compute.manager [req-8a4dfa3a-a14c-473f-a71c-fc4c6198f2be req-f7e53a64-2545-43d0-9cc4-dbb634568a3e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received unexpected event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da for instance with vm_state active and task_state None.#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.928 280943 DEBUG nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.934 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Guest 
created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.939 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.939 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] VM Resumed (Lifecycle Event)#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.946 280943 INFO nova.virt.libvirt.driver [-] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Instance spawned successfully.#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.946 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Nov 23 04:58:33 localhost podman[310385]: Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.957 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.961 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 23 04:58:33 localhost podman[310385]: 2025-11-23 09:58:33.966552646 +0000 UTC m=+0.096245996 container create feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.973 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.973 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 
0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.974 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.974 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.975 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.976 280943 DEBUG nova.virt.libvirt.driver [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:33 localhost nova_compute[280939]: 2025-11-23 09:58:33.986 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Nov 23 04:58:34 localhost podman[310385]: 2025-11-23 09:58:33.90771323 +0000 UTC m=+0.037406620 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 23 04:58:34 localhost systemd[1]: Started libpod-conmon-feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0.scope. Nov 23 04:58:34 localhost systemd[1]: Started libcrun container. 
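The per-network metadata proxy is started here as a podman sidecar container, neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, built from the openstack-neutron-metadata-agent-ovn image; the container create event also records its image labels (org.label-schema.*, tcib_build_tag, ...). A small sketch for reading those labels back with the podman CLI from Python; the container name is copied from the log, the rest is illustrative.

    import json
    import subprocess

    def container_labels(name):
        """Return a podman container's labels as a dict via 'podman inspect'."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .Config.Labels}}", name],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    labels = container_labels(
        "neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3")
    print(labels.get("tcib_build_tag"), labels.get("org.label-schema.build-date"))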
Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.031 280943 INFO nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Took 8.33 seconds to spawn the instance on the hypervisor.#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.032 280943 DEBUG nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fa93415698daeffab176aa309d78c490885f5c7af6f48ad32a73a5fc5a350c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:58:34 localhost podman[310385]: 2025-11-23 09:58:34.045433248 +0000 UTC m=+0.175126568 container init feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Nov 23 04:58:34 localhost podman[310385]: 2025-11-23 09:58:34.055863318 +0000 UTC m=+0.185556638 container start feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 04:58:34 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[310431]: [NOTICE] (310435) : New worker (310437) forked Nov 23 04:58:34 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[310431]: [NOTICE] (310435) : Loading success. 
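haproxy has now daemonized inside that container ("New worker ... forked", "Loading success") and, per the pidfile directive in the configuration rendered earlier, records its PID under /var/lib/neutron/external/pids/<network-uuid>.pid.haproxy. The earlier "Unable to access ... .pid.haproxy; Error: [Errno 2]" line is the agent probing that file to decide whether a proxy already serves the network. A minimal sketch of such a probe, assuming the first whitespace-separated token in the pidfile is the PID:

    import os

    PID_DIR = "/var/lib/neutron/external/pids"

    def metadata_proxy_running(network_uuid):
        """True if the haproxy pidfile for this network exists and its PID is alive."""
        pidfile = os.path.join(PID_DIR, f"{network_uuid}.pid.haproxy")
        try:
            with open(pidfile) as f:
                pid = int(f.read().split()[0])
        except (OSError, ValueError, IndexError):
            return False        # matches the [Errno 2] probe in the log: nothing spawned yet
        try:
            os.kill(pid, 0)     # signal 0: existence check only, nothing is delivered
        except ProcessLookupError:
            return False        # stale pidfile
        except PermissionError:
            return True         # process exists but belongs to another user
        return True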
Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.101 280943 INFO nova.compute.manager [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Took 9.21 seconds to build instance.#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.112 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 27d340a7-60a4-4a73-9f16-bae5ab3411da in datapath 81348c6d-951a-4399-8703-476056b57fe9 unbound from our chassis#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.115 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port 39a8e27a-b55c-4851-a692-f698c4532f2d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.115 280943 DEBUG oslo_concurrency.lockutils [None req-dc62ce45-8668-47e6-9d5e-2f0b1764537e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 9.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.116 159415 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81348c6d-951a-4399-8703-476056b57fe9#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.130 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[2f44e042-cae3-4bbe-a650-384f51f7d3ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.130 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81348c6d-91 in ovnmeta-81348c6d-951a-4399-8703-476056b57fe9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.132 308301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81348c6d-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.133 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[e83f2566-2521-4f6e-b66f-b45fc204fac3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.134 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[b57137cf-0b4f-4ddc-8ea5-37dece702d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.152 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[899f15e2-69c8-474f-8c50-3b01f6ab3952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.162 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[01351e29-1691-4955-a122-8534640b137e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.180 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea563e4-e4c5-4fde-8a03-b95a72bc832f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.185 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c32793-720d-4477-93cb-096b62c1c7ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost NetworkManager[5966]: [1763891914.1869] manager: (tap81348c6d-90): new Veth device (/org/freedesktop/NetworkManager/Devices/21) Nov 23 04:58:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 04:58:34 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 04:58:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 04:58:34 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:58:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:58:34 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/200617908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:58:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.213 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[fb25417f-3137-444a-8ae9-c63653c6c0b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.216 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[01f3a389-31e1-4446-a784-f673d6bde74d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:58:34 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev c3dc02ca-42ff-480e-9ea1-3e09cad1d981 (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:58:34 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev c3dc02ca-42ff-480e-9ea1-3e09cad1d981 (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:58:34 localhost ceph-mgr[286671]: [progress INFO root] Completed event c3dc02ca-42ff-480e-9ea1-3e09cad1d981 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 04:58:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 04:58:34 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 
04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.226 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.235 280943 DEBUG nova.compute.provider_tree [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:58:34 localhost NetworkManager[5966]: [1763891914.2365] device (tap81348c6d-90): carrier: link connected Nov 23 04:58:34 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap81348c6d-91: link becomes ready Nov 23 04:58:34 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap81348c6d-90: link becomes ready Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.239 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[20abf61b-d0df-410f-a1bb-d6253d6a8f4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.253 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff0875d-9f63-4328-9349-e90ec5a20a66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81348c6d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:6f:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 
'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1185064, 'reachable_time': 44295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310470, 'error': None, 'target': 'ovnmeta-81348c6d-951a-4399-8703-476056b57fe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.264 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5e82fb-148e-46b6-bde9-a611a979ecb1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:4c93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1185064, 'tstamp': 1185064}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310477, 
'error': None, 'target': 'ovnmeta-81348c6d-951a-4399-8703-476056b57fe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.280 280943 DEBUG nova.scheduler.client.report [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Updated inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.280 280943 DEBUG nova.compute.provider_tree [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Updating resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.281 280943 DEBUG nova.compute.provider_tree [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.287 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb96fdc-5534-4770-b82f-ae346c0022ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81348c6d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:6f:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 
'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1185064, 'reachable_time': 44295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310479, 'error': None, 'target': 'ovnmeta-81348c6d-951a-4399-8703-476056b57fe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 
04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.304 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.304 280943 DEBUG nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.306 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2e4bb6-6552-423b-814e-2fbb1ddfe364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.346 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[c4da4c42-3a4b-4621-a6b5-f237d3d85cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.347 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81348c6d-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.347 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.348 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81348c6d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:34 localhost kernel: device tap81348c6d-90 entered promiscuous mode Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.351 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.353 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81348c6d-90, col_values=(('external_ids', {'iface-id': 'bb526e17-a505-43fd-a1af-511960f787ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:34 localhost ovn_controller[153771]: 2025-11-23T09:58:34Z|00064|binding|INFO|Releasing lport bb526e17-a505-43fd-a1af-511960f787ee from this chassis (sb_readonly=0) Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.355 159415 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81348c6d-951a-4399-8703-476056b57fe9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81348c6d-951a-4399-8703-476056b57fe9.pid.haproxy' 
get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.356 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[6a60d1cf-275f-46e3-8a5b-c08c42d26891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.356 159415 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: global Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: log /dev/log local0 debug Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: log-tag haproxy-metadata-proxy-81348c6d-951a-4399-8703-476056b57fe9 Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: user root Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: group root Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: maxconn 1024 Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: pidfile /var/lib/neutron/external/pids/81348c6d-951a-4399-8703-476056b57fe9.pid.haproxy Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: daemon Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: defaults Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: log global Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: mode http Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: option httplog Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: option dontlognull Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: option http-server-close Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: option forwardfor Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: retries 3 Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: timeout http-request 30s Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: timeout connect 30s Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: timeout client 32s Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: timeout server 32s Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: timeout http-keep-alive 30s Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: listen listener Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: bind 169.254.169.254:80 Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: server metadata /var/lib/neutron/metadata_proxy Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: http-request add-header X-OVN-Network-ID 81348c6d-951a-4399-8703-476056b57fe9 Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Nov 23 04:58:34 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:34.357 159415 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81348c6d-951a-4399-8703-476056b57fe9', 'env', 'PROCESS_TAG=haproxy-81348c6d-951a-4399-8703-476056b57fe9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81348c6d-951a-4399-8703-476056b57fe9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.361 280943 DEBUG nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] 
[instance: 8f62292f-5719-4b19-9188-3715b94493a7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.362 280943 DEBUG nova.network.neutron [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.366 280943 INFO nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.382 280943 DEBUG nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.412 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:58:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.455 280943 DEBUG nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Start spawning the instance on the hypervisor. 
_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.457 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.457 280943 INFO nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Creating image(s)#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.486 280943 DEBUG nova.storage.rbd_utils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] rbd image 8f62292f-5719-4b19-9188-3715b94493a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.518 280943 DEBUG nova.storage.rbd_utils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] rbd image 8f62292f-5719-4b19-9188-3715b94493a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.552 280943 DEBUG nova.storage.rbd_utils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] rbd image 8f62292f-5719-4b19-9188-3715b94493a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.557 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.651 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.652 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Acquiring lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.652 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.653 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.683 280943 DEBUG nova.storage.rbd_utils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] rbd image 8f62292f-5719-4b19-9188-3715b94493a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.698 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a 8f62292f-5719-4b19-9188-3715b94493a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:34 localhost nova_compute[280939]: 2025-11-23 09:58:34.715 280943 DEBUG nova.policy [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f7875c0084c46fdb2e7b37e4fc44faf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '253c88568a634476a6c1284eed6a9464', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Nov 23 04:58:34 localhost podman[310588]: Nov 23 04:58:34 localhost podman[310588]: 2025-11-23 09:58:34.850760553 +0000 UTC m=+0.098595018 container create bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 04:58:34 localhost podman[310588]: 2025-11-23 09:58:34.806204305 +0000 UTC m=+0.054038790 
image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 23 04:58:34 localhost systemd[1]: Started libpod-conmon-bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf.scope. Nov 23 04:58:34 localhost systemd[1]: Started libcrun container. Nov 23 04:58:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52ee36198be2f9bcbaef19ab297d26fc694e47ced3f6dcf4d89ec4c4844e288/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:58:34 localhost podman[310588]: 2025-11-23 09:58:34.949898967 +0000 UTC m=+0.197733432 container init bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 04:58:34 localhost podman[310588]: 2025-11-23 09:58:34.961405961 +0000 UTC m=+0.209240426 container start bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:34 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[310619]: [NOTICE] (310623) : New worker (310625) forked Nov 23 04:58:34 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[310619]: [NOTICE] (310623) : Loading success. 
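The records above show the metadata proxy being brought up for network 81348c6d-951a-4399-8703-476056b57fe9: the agent renders the haproxy config, then create_process runs haproxy inside the ovnmeta namespace via neutron-rootwrap, and podman starts the wrapping neutron-haproxy container. Below is a minimal, illustrative Python sketch that reuses the exact argv from the create_process debug line; it is not the agent's actual implementation (which goes through privsep and rootwrap filters), just the same command issued directly.

```python
#!/usr/bin/env python3
"""Sketch only: launch the metadata haproxy with the same command line
that appears in the create_process debug record above."""
import subprocess

NETWORK_ID = "81348c6d-951a-4399-8703-476056b57fe9"  # taken from the log
CONF = f"/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf"

# Same argv as the logged create_process call.
cmd = [
    "sudo", "neutron-rootwrap", "/etc/neutron/rootwrap.conf",
    "ip", "netns", "exec", f"ovnmeta-{NETWORK_ID}",
    "env", f"PROCESS_TAG=haproxy-{NETWORK_ID}",
    "haproxy", "-f", CONF,
]

# The generated config contains "daemon", so haproxy forks its worker
# (the "New worker ... forked" NOTICE above) and this call returns.
subprocess.run(cmd, check=True)
```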
Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.196 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a 8f62292f-5719-4b19-9188-3715b94493a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.279 280943 DEBUG nova.storage.rbd_utils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] resizing rbd image 8f62292f-5719-4b19-9188-3715b94493a7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Nov 23 04:58:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v104: 177 pgs: 177 active+clean; 317 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 4.6 MiB/s rd, 5.7 MiB/s wr, 169 op/s Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.427 280943 DEBUG nova.objects.instance [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lazy-loading 'migration_context' on Instance uuid 8f62292f-5719-4b19-9188-3715b94493a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.443 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.443 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Ensure instance console log exists: /var/lib/nova/instances/8f62292f-5719-4b19-9188-3715b94493a7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.444 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.445 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.445 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 
253c88568a634476a6c1284eed6a9464 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.740 280943 DEBUG oslo_concurrency.lockutils [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.741 280943 DEBUG oslo_concurrency.lockutils [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.742 280943 DEBUG oslo_concurrency.lockutils [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.742 280943 DEBUG oslo_concurrency.lockutils [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.743 280943 DEBUG oslo_concurrency.lockutils [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.745 280943 INFO nova.compute.manager [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Terminating instance#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.746 280943 DEBUG nova.compute.manager [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Nov 23 04:58:35 localhost kernel: device tap27d340a7-60 left promiscuous mode Nov 23 04:58:35 localhost NetworkManager[5966]: [1763891915.8172] device (tap27d340a7-60): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.828 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:35 localhost ovn_controller[153771]: 2025-11-23T09:58:35Z|00065|binding|INFO|Releasing lport 27d340a7-60a4-4a73-9f16-bae5ab3411da from this chassis (sb_readonly=0) Nov 23 04:58:35 localhost ovn_controller[153771]: 2025-11-23T09:58:35Z|00066|binding|INFO|Setting lport 27d340a7-60a4-4a73-9f16-bae5ab3411da down in Southbound Nov 23 04:58:35 localhost ovn_controller[153771]: 2025-11-23T09:58:35Z|00067|binding|INFO|Releasing lport b779be61-5809-44a6-8395-bfdf8254b4cc from this chassis (sb_readonly=0) Nov 23 04:58:35 localhost ovn_controller[153771]: 2025-11-23T09:58:35Z|00068|binding|INFO|Setting lport b779be61-5809-44a6-8395-bfdf8254b4cc down in Southbound Nov 23 04:58:35 localhost ovn_controller[153771]: 2025-11-23T09:58:35Z|00069|binding|INFO|Removing iface tap27d340a7-60 ovn-installed in OVS Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.831 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:35 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:35.842 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:5d:7d 19.80.0.7'], port_security=['fa:16:3e:e3:5d:7d 19.80.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['27d340a7-60a4-4a73-9f16-bae5ab3411da'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-711090127', 'neutron:cidrs': '19.80.0.7/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-711090127', 'neutron:project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ff44a28d-1e1f-4163-b206-fdf77022bf0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0e3b2035-d1e3-4dc9-824d-c8c5d8c83090, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=b779be61-5809-44a6-8395-bfdf8254b4cc) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:35 localhost ovn_controller[153771]: 2025-11-23T09:58:35Z|00070|binding|INFO|Releasing lport a8a61203-fe2e-4005-bcf2-6150709eadea from this chassis (sb_readonly=0) Nov 23 04:58:35 localhost ovn_controller[153771]: 2025-11-23T09:58:35Z|00071|binding|INFO|Releasing lport 6df03061-a46e-4f2d-b42f-4f149f759e31 from this chassis (sb_readonly=0) Nov 23 04:58:35 localhost ovn_controller[153771]: 
2025-11-23T09:58:35Z|00072|binding|INFO|Releasing lport bb526e17-a505-43fd-a1af-511960f787ee from this chassis (sb_readonly=0) Nov 23 04:58:35 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:35.845 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c3:5c 10.100.0.7'], port_security=['fa:16:3e:fe:c3:5c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-2092561411', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '76d6f171-13c9-4730-8ed3-ab467ef6831a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81348c6d-951a-4399-8703-476056b57fe9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-2092561411', 'neutron:project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'ff44a28d-1e1f-4163-b206-fdf77022bf0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1897b64f-0c37-45be-8353-f858f64309cd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=27d340a7-60a4-4a73-9f16-bae5ab3411da) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:35 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:35.846 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b779be61-5809-44a6-8395-bfdf8254b4cc in datapath 8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 unbound from our chassis#033[00m Nov 23 04:58:35 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:35.857 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:58:35 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:35.858 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[0d215de8-b552-4d9f-b0a8-655edaa3ac5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:35 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:35.859 159415 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 namespace which is not needed anymore#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.868 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:35 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully. Nov 23 04:58:35 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 1.871s CPU time. 
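The spawn path logged between 09:58:34 and 09:58:35 is: qemu-img info on the cached base image, `rbd import` of that file into the `vms` pool as `<instance_uuid>_disk`, then a resize to 1073741824 bytes (the flavor's 1 GiB root disk). The sketch below replays that flow with the CLI commands shown in the log; nova itself performs the resize through the librbd Python bindings, so treat this purely as an equivalent illustration.

```python
#!/usr/bin/env python3
"""Sketch of the base-image -> RBD flow recorded above (CLI stand-in)."""
import json
import subprocess

BASE = "/var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a"
DISK = "8f62292f-5719-4b19-9188-3715b94493a7_disk"  # instance UUID + "_disk"

# Same flags as the logged qemu-img invocation (prlimit wrapper omitted).
info = json.loads(subprocess.check_output(
    ["qemu-img", "info", BASE, "--force-share", "--output=json"]))
print("base image virtual size:", info["virtual-size"], "bytes")

# Same options as the logged "rbd import" command.
subprocess.run(
    ["rbd", "import", "--pool", "vms", BASE, DISK,
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)

# The log resizes to 1073741824 bytes; `rbd resize --size` takes megabytes,
# so 1024 MB gives the same 1 GiB image.
subprocess.run(["rbd", "resize", f"vms/{DISK}", "--size", "1024"], check=True)
```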
Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.887 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:35 localhost systemd-machined[202731]: Machine qemu-1-instance-00000006 terminated. Nov 23 04:58:35 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:35.948 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005532584.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:24Z, description=, device_id=8f62292f-5719-4b19-9188-3715b94493a7, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-livemigrationtest-server-1576780525, extra_dhcp_opts=[], fixed_ips=[], id=737e82a6-2634-47df-b8a7-ec21a927cc3f, ip_allocation=immediate, mac_address=fa:16:3e:da:21:74, name=tempest-parent-1925970765, network_id=d679e465-8656-4403-afa0-724657d33ec4, port_security_enabled=True, project_id=253c88568a634476a6c1284eed6a9464, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['a3350144-9b09-432b-a32e-ef84bb8bf494'], standard_attr_id=645, status=DOWN, tags=[], tenant_id=253c88568a634476a6c1284eed6a9464, trunk_details=sub_ports=[], trunk_id=c4a0969a-aee9-4b3f-bd50-6138befdbf0e, updated_at=2025-11-23T09:58:34Z on network d679e465-8656-4403-afa0-724657d33ec4#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.967 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.975 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.997 280943 INFO nova.virt.libvirt.driver [-] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Instance destroyed successfully.#033[00m Nov 23 04:58:35 localhost nova_compute[280939]: 2025-11-23 09:58:35.997 280943 DEBUG nova.objects.instance [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lazy-loading 'resources' on Instance uuid 76d6f171-13c9-4730-8ed3-ab467ef6831a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.012 280943 DEBUG nova.virt.libvirt.vif [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-23T09:58:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-151326874',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005532584.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-151326874',id=6,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-11-23T09:58:16Z,launched_on='np0005532586.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005532584.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='a2148c18d8f24a6db12dc22c787e8b2e',ramdisk_id='',reservation_id='r-6eghyq4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1734069518',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1734069518-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-11-23T09:58:33Z,user_data=None,user_id='9a28cb0574d148bf982a2a1a0b495020',uuid=76d6f171-13c9-4730-8ed3-ab467ef6831a,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.012 280943 DEBUG nova.network.os_vif_util [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default 
default] Converting VIF {"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.013 280943 DEBUG nova.network.os_vif_util [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.014 280943 DEBUG os_vif [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Nov 23 04:58:36 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[310431]: [NOTICE] (310435) : haproxy version is 2.8.14-c23fe91 Nov 23 04:58:36 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[310431]: [NOTICE] (310435) : path to executable is /usr/sbin/haproxy Nov 23 04:58:36 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[310431]: [ALERT] (310435) : Current worker (310437) exited with code 143 (Terminated) Nov 23 04:58:36 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[310431]: [WARNING] (310435) : All workers exited. Exiting... (0) Nov 23 04:58:36 localhost systemd[1]: libpod-feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0.scope: Deactivated successfully. 
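The OVS plumbing in these records is driven through ovsdbapp transactions: AddPortCommand plus a DbSetCommand that stamps `external_ids:iface-id` when tap81348c6d-90 is plugged into br-int, and DelPortCommand with `if_exists=True` when os-vif unplugs tap27d340a7-60. The sketch below shows the ovs-vsctl equivalents of those transactions, using values copied from the log; the agents use the OVSDB IDL directly rather than shelling out, so this is only a functional stand-in.

```python
#!/usr/bin/env python3
"""Sketch: ovs-vsctl equivalents of the ovsdbapp port transactions above."""
import subprocess

def plug(bridge: str, port: str, iface_id: str) -> None:
    # AddPortCommand(may_exist=True) + DbSetCommand(external_ids iface-id)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
         "--", "set", "Interface", port, f"external_ids:iface-id={iface_id}"],
        check=True)

def unplug(bridge: str, port: str) -> None:
    # DelPortCommand(if_exists=True)
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", bridge, port],
                   check=True)

# Values taken from the log records above.
plug("br-int", "tap81348c6d-90", "bb526e17-a505-43fd-a1af-511960f787ee")
unplug("br-int", "tap27d340a7-60")
```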
Nov 23 04:58:36 localhost podman[310734]: 2025-11-23 09:58:36.107375345 +0000 UTC m=+0.091942814 container died feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 04:58:36 localhost systemd[1]: tmp-crun.FRjFra.mount: Deactivated successfully. Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.143 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.144 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27d340a7-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.145 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.147 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.150 280943 INFO os_vif [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60')#033[00m Nov 23 04:58:36 localhost podman[310734]: 2025-11-23 09:58:36.16517337 +0000 UTC m=+0.149740849 container cleanup feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.189 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost podman[310756]: 2025-11-23 09:58:36.206398595 +0000 UTC m=+0.090075587 container cleanup feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 23 04:58:36 localhost systemd[1]: libpod-conmon-feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0.scope: Deactivated successfully. Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.241 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost ovn_controller[153771]: 2025-11-23T09:58:36Z|00073|binding|INFO|Releasing lport a8a61203-fe2e-4005-bcf2-6150709eadea from this chassis (sb_readonly=0) Nov 23 04:58:36 localhost ovn_controller[153771]: 2025-11-23T09:58:36Z|00074|binding|INFO|Releasing lport 6df03061-a46e-4f2d-b42f-4f149f759e31 from this chassis (sb_readonly=0) Nov 23 04:58:36 localhost ovn_controller[153771]: 2025-11-23T09:58:36Z|00075|binding|INFO|Releasing lport bb526e17-a505-43fd-a1af-511960f787ee from this chassis (sb_readonly=0) Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.271 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.304 280943 DEBUG nova.compute.manager [req-86c28a05-949c-410c-9b23-a8eb5ced491c req-ebd89f20-0bf0-413f-a8b1-3bdb55202913 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-unplugged-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.305 280943 DEBUG oslo_concurrency.lockutils [req-86c28a05-949c-410c-9b23-a8eb5ced491c req-ebd89f20-0bf0-413f-a8b1-3bdb55202913 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.305 280943 DEBUG oslo_concurrency.lockutils [req-86c28a05-949c-410c-9b23-a8eb5ced491c req-ebd89f20-0bf0-413f-a8b1-3bdb55202913 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.305 280943 DEBUG oslo_concurrency.lockutils [req-86c28a05-949c-410c-9b23-a8eb5ced491c req-ebd89f20-0bf0-413f-a8b1-3bdb55202913 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.306 280943 DEBUG nova.compute.manager 
[req-86c28a05-949c-410c-9b23-a8eb5ced491c req-ebd89f20-0bf0-413f-a8b1-3bdb55202913 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] No waiting events found dispatching network-vif-unplugged-27d340a7-60a4-4a73-9f16-bae5ab3411da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.306 280943 DEBUG nova.compute.manager [req-86c28a05-949c-410c-9b23-a8eb5ced491c req-ebd89f20-0bf0-413f-a8b1-3bdb55202913 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-unplugged-27d340a7-60a4-4a73-9f16-bae5ab3411da for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Nov 23 04:58:36 localhost podman[310794]: 2025-11-23 09:58:36.311021737 +0000 UTC m=+0.125814674 container remove feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.315 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2664f0-6c2e-4bc3-b2f7-2fa81fd290d0]: (4, ('Sun Nov 23 09:58:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 (feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0)\nfeb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0\nSun Nov 23 09:58:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 (feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0)\nfeb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.316 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[9920e68b-2022-46c1-96cb-6693320d673c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.317 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cd987c4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.319 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost kernel: device tap8cd987c4-70 left promiscuous mode Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.331 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.332 280943 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.334 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[50ef5e3b-6cc3-4f62-a9dd-b3b5cc3f0472]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.348 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[712a2f49-f411-42e2-be8c-6b78da2c799f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.349 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[fa966b39-ca59-4d15-b1c4-2175e7d2d3df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.365 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[4be4d07d-127b-44f8-9e0c-381e3aa970c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], 
['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1184969, 'reachable_time': 36154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310823, 'error': None, 'target': 'ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.374 159521 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.375 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[92acc62c-bd3b-4c9d-9b62-022c99fa6562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.376 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 27d340a7-60a4-4a73-9f16-bae5ab3411da in datapath 81348c6d-951a-4399-8703-476056b57fe9 unbound from our chassis#033[00m Nov 23 04:58:36 localhost dnsmasq[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/addn_hosts - 2 addresses Nov 23 04:58:36 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/host Nov 23 04:58:36 localhost podman[310811]: 2025-11-23 09:58:36.377064055 +0000 UTC m=+0.108361689 container kill 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:36 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/opts Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.381 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port 39a8e27a-b55c-4851-a692-f698c4532f2d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.381 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81348c6d-951a-4399-8703-476056b57fe9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.382 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5056a7-77e3-408f-89d7-80a0d6e00763]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.383 159415 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81348c6d-951a-4399-8703-476056b57fe9 namespace which is not needed anymore#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.463 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[310619]: [NOTICE] (310623) : haproxy version is 2.8.14-c23fe91 Nov 23 04:58:36 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[310619]: [NOTICE] (310623) : path to executable is /usr/sbin/haproxy Nov 23 04:58:36 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[310619]: [WARNING] (310623) : Exiting Master process... 
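The DelPortCommand record above (Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cd987c4-70, bridge=None, if_exists=True)) is the OVN metadata agent using ovsdbapp to drop the metadata tap port from br-int while it tears the ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 namespace down. A minimal sketch of issuing the same delete through ovsdbapp, assuming a local OVS database socket at unix:/run/openvswitch/db.sock (endpoint and timeout are illustrative, not taken from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect the ovsdbapp IDL to the local Open vSwitch database (illustrative endpoint).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    # if_exists=True keeps the delete idempotent, so a second teardown of the same
    # tap port is a no-op instead of an error.
    ovs.del_port('tap8cd987c4-70', if_exists=True).execute(check_error=True)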
Nov 23 04:58:36 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[310619]: [ALERT] (310623) : Current worker (310625) exited with code 143 (Terminated) Nov 23 04:58:36 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[310619]: [WARNING] (310623) : All workers exited. Exiting... (0) Nov 23 04:58:36 localhost systemd[1]: libpod-bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf.scope: Deactivated successfully. Nov 23 04:58:36 localhost podman[310849]: 2025-11-23 09:58:36.552817461 +0000 UTC m=+0.071101144 container died bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 04:58:36 localhost podman[310849]: 2025-11-23 09:58:36.576867959 +0000 UTC m=+0.095151572 container cleanup bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 04:58:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:36.581 262301 INFO neutron.agent.dhcp.agent [None req-245628f5-95c9-4870-ac1d-6278c3e56b5a - - - - - -] DHCP configuration for ports {'737e82a6-2634-47df-b8a7-ec21a927cc3f'} is completed#033[00m Nov 23 04:58:36 localhost podman[310867]: 2025-11-23 09:58:36.629185735 +0000 UTC m=+0.070688131 container cleanup bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 04:58:36 localhost systemd[1]: libpod-conmon-bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf.scope: Deactivated successfully. 
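The haproxy worker status in the records above, exited with code 143 (Terminated), is the conventional 128 + signal encoding: the worker was terminated by SIGTERM, consistent with the container being stopped rather than crashing. A quick check of that arithmetic:

    import signal

    # 143 = 128 + 15; subtracting 128 recovers the terminating signal.
    sig = signal.Signals(143 - 128)
    print(sig)                     # Signals.SIGTERM
    assert sig is signal.SIGTERM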
Nov 23 04:58:36 localhost podman[310881]: 2025-11-23 09:58:36.671182325 +0000 UTC m=+0.069554707 container remove bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118) Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.682 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[2a094e5f-f3e6-4027-8344-c1b7a084e271]: (4, ('Sun Nov 23 09:58:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9 (bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf)\nbb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf\nSun Nov 23 09:58:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9 (bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf)\nbb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.685 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[17c69ef1-f3bf-42c2-b3b5-f00aa4669f67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.688 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81348c6d-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.692 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost kernel: device tap81348c6d-90 left promiscuous mode Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.702 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.703 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.706 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[73202cee-cebd-49b0-9d4a-7e99661c14cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost openstack_network_exporter[241732]: ERROR 09:58:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:58:36 localhost openstack_network_exporter[241732]: ERROR 09:58:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:58:36 localhost openstack_network_exporter[241732]: ERROR 09:58:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:58:36 localhost 
openstack_network_exporter[241732]: ERROR 09:58:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:58:36 localhost openstack_network_exporter[241732]: Nov 23 04:58:36 localhost openstack_network_exporter[241732]: ERROR 09:58:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:58:36 localhost openstack_network_exporter[241732]: Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.723 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[70f3929c-d52c-4bc3-b7e1-31d3cb13dc09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.731 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc6a4f8-6cbd-485c-9443-9f26547f6d86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.759 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[14433f6b-b87b-46c3-a5d4-1141da5c7366]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 
'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1185058, 'reachable_time': 30477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310898, 'error': None, 'target': 'ovnmeta-81348c6d-951a-4399-8703-476056b57fe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.761 159521 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81348c6d-951a-4399-8703-476056b57fe9 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Nov 23 04:58:36 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:36.761 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[43d69ced-78e9-489a-8847-368d3c5fb816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.799 280943 INFO nova.virt.libvirt.driver [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Deleting instance files /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a_del#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.800 280943 INFO nova.virt.libvirt.driver [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Deletion of /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a_del complete#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.883 280943 DEBUG nova.virt.libvirt.host [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.884 280943 INFO nova.virt.libvirt.host [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] UEFI support detected#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.886 280943 INFO nova.compute.manager [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.886 280943 DEBUG oslo.service.loopingcall [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.887 280943 DEBUG nova.compute.manager [-] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.887 280943 DEBUG nova.network.neutron [-] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.927 280943 DEBUG nova.network.neutron [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Successfully updated port: 737e82a6-2634-47df-b8a7-ec21a927cc3f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.939 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Acquiring lock "refresh_cache-8f62292f-5719-4b19-9188-3715b94493a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.940 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Acquired lock "refresh_cache-8f62292f-5719-4b19-9188-3715b94493a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:58:36 localhost nova_compute[280939]: 2025-11-23 09:58:36.940 280943 DEBUG nova.network.neutron [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.022 280943 DEBUG nova.network.neutron [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Nov 23 04:58:37 localhost systemd[1]: var-lib-containers-storage-overlay-f52ee36198be2f9bcbaef19ab297d26fc694e47ced3f6dcf4d89ec4c4844e288-merged.mount: Deactivated successfully. Nov 23 04:58:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb4da3fddd9b62f10f20d3941876f5a27e36cfcdf20d05bb50c87a6ed6926caf-userdata-shm.mount: Deactivated successfully. Nov 23 04:58:37 localhost systemd[1]: run-netns-ovnmeta\x2d81348c6d\x2d951a\x2d4399\x2d8703\x2d476056b57fe9.mount: Deactivated successfully. Nov 23 04:58:37 localhost systemd[1]: var-lib-containers-storage-overlay-95fa93415698daeffab176aa309d78c490885f5c7af6f48ad32a73a5fc5a350c-merged.mount: Deactivated successfully. 
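The Acquiring lock "refresh_cache-8f62292f-5719-4b19-9188-3715b94493a7" / Acquired lock records above (and the matching Releasing lock record that follows) come from oslo.concurrency's lock helper, which nova holds while it rebuilds the instance's network info cache. A minimal sketch of that pattern; only the lock name is taken from the log, and the cache rebuild itself is reduced to a placeholder:

    from oslo_concurrency import lockutils

    instance_uuid = '8f62292f-5719-4b19-9188-3715b94493a7'  # from the records above

    # lockutils.lock() emits Acquiring/Acquired/Releasing DEBUG messages like those here.
    with lockutils.lock(f'refresh_cache-{instance_uuid}'):
        pass  # nova refreshes instance_info_cache while holding the lock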
Nov 23 04:58:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-feb7edb14b1de3bf2ab1ecda9ee2982024ab6bbceb77c80761799948d535aee0-userdata-shm.mount: Deactivated successfully. Nov 23 04:58:37 localhost systemd[1]: run-netns-ovnmeta\x2d8cd987c4\x2d7e4e\x2d467f\x2d9ee2\x2dd70cb75b87c3.mount: Deactivated successfully. Nov 23 04:58:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 317 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 142 op/s Nov 23 04:58:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.714 280943 DEBUG nova.network.neutron [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Updating instance_info_cache with network_info: [{"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.735 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Releasing lock "refresh_cache-8f62292f-5719-4b19-9188-3715b94493a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.736 280943 DEBUG nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Instance network_info: |[{"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": 
"253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.740 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Start _get_guest_xml network_info=[{"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T09:56:45Z,direct_url=,disk_format='qcow2',id=c5806483-57a8-4254-b41b-254b888c8606,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1915d3e5d4254231a0517e2dcf35848f',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-11-23T09:56:47Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'device_type': 'disk', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'size': 0, 'image_id': 'c5806483-57a8-4254-b41b-254b888c8606'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.745 280943 WARNING nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.747 280943 DEBUG nova.virt.libvirt.host [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Searching host: 'np0005532584.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.748 280943 DEBUG nova.virt.libvirt.host [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.750 280943 DEBUG nova.virt.libvirt.host [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Searching host: 'np0005532584.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.750 280943 DEBUG nova.virt.libvirt.host [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.751 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.751 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T09:56:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43b374b4-75d9-47f9-aa6b-ddb1a45f7c04',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T09:56:45Z,direct_url=,disk_format='qcow2',id=c5806483-57a8-4254-b41b-254b888c8606,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1915d3e5d4254231a0517e2dcf35848f',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-11-23T09:56:47Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.751 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.752 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.752 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.752 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.753 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.753 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.753 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.754 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.754 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.754 280943 DEBUG nova.virt.hardware [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m 
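The topology records above (Build topologies for 1 vcpu(s) 1:1:1, Got 1 possible topologies, Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]) show nova searching for (sockets, cores, threads) combinations for the flavor's single vCPU within the 65536/65536/65536 limits; only 1x1x1 comes back. An illustrative sketch of one way to enumerate exact factorisations like that (not nova's actual hardware.py implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Return every (sockets, cores, threads) split whose product equals vcpus."""
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    found.append((sockets, cores, threads))
        return found

    # m1.nano has one vCPU, so a single topology is possible, matching the log.
    print(possible_topologies(1))   # [(1, 1, 1)]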
Nov 23 04:58:37 localhost nova_compute[280939]: 2025-11-23 09:58:37.758 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 04:58:38 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3145656533' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.220 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.257 280943 DEBUG nova.storage.rbd_utils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] rbd image 8f62292f-5719-4b19-9188-3715b94493a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.261 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.365 280943 DEBUG nova.compute.manager [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Received event network-changed-a1846659-6b91-4156-9939-085b30454143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.365 280943 DEBUG nova.compute.manager [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Refreshing instance network info cache due to event network-changed-a1846659-6b91-4156-9939-085b30454143. 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.365 280943 DEBUG oslo_concurrency.lockutils [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "refresh_cache-1148b5a9-4da9-491f-8952-80c4a965fe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.365 280943 DEBUG oslo_concurrency.lockutils [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquired lock "refresh_cache-1148b5a9-4da9-491f-8952-80c4a965fe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.365 280943 DEBUG nova.network.neutron [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Refreshing network info cache for port a1846659-6b91-4156-9939-085b30454143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Nov 23 04:58:38 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:58:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 04:58:38 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:58:38 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:58:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 04:58:38 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2888214571' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.692 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.694 280943 DEBUG nova.virt.libvirt.vif [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T09:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1576780525',display_name='tempest-LiveMigrationTest-server-1576780525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005532584.localdomain',hostname='tempest-livemigrationtest-server-1576780525',id=10,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005532584.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532584.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='253c88568a634476a6c1284eed6a9464',ramdisk_id='',reservation_id='r-dvg5v145',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1889456510',owner_user_name='tempest-LiveMigrationTest-1889456510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T09:58:34Z,user_data=None,user_id='7f7875c0084c46fdb2e7b37e4fc44faf',uuid=8f62292f-5719-4b19-9188-3715b94493a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": 
"br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.694 280943 DEBUG nova.network.os_vif_util [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Converting VIF {"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.695 280943 DEBUG nova.network.os_vif_util [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:21:74,bridge_name='br-int',has_traffic_filtering=True,id=737e82a6-2634-47df-b8a7-ec21a927cc3f,network=Network(d679e465-8656-4403-afa0-724657d33ec4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap737e82a6-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 23 04:58:38 localhost nova_compute[280939]: 2025-11-23 09:58:38.697 280943 DEBUG nova.objects.instance [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f62292f-5719-4b19-9188-3715b94493a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.149 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] End _get_guest_xml xml= Nov 23 04:58:39 localhost nova_compute[280939]: 8f62292f-5719-4b19-9188-3715b94493a7 Nov 23 04:58:39 localhost nova_compute[280939]: instance-0000000a Nov 23 04:58:39 localhost nova_compute[280939]: 131072 Nov 23 04:58:39 localhost nova_compute[280939]: 1 Nov 23 04:58:39 localhost nova_compute[280939]: Nov 23 04:58:39 localhost 
nova_compute[280939]: [libvirt guest domain XML for instance 8f62292f-5719-4b19-9188-3715b94493a7 was dumped here across many log lines; the XML markup was stripped during log extraction, leaving only element text such as the display name tempest-LiveMigrationTest-server-1576780525, creation time 2025-11-23 09:58:37, flavor memory 128 MB and 1 vCPU, owner user tempest-LiveMigrationTest-1889456510-project-member and project tempest-LiveMigrationTest-1889456510, sysinfo RDO / OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 / Virtual Machine, OS type hvm, and RNG backend /dev/urandom] Nov 23 04:58:39 localhost nova_compute[280939]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.150 280943 DEBUG nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Preparing to wait for external event network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.150 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Acquiring lock 
"8f62292f-5719-4b19-9188-3715b94493a7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.150 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.151 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.152 280943 DEBUG nova.virt.libvirt.vif [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T09:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1576780525',display_name='tempest-LiveMigrationTest-server-1576780525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005532584.localdomain',hostname='tempest-livemigrationtest-server-1576780525',id=10,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005532584.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532584.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='253c88568a634476a6c1284eed6a9464',ramdisk_id='',reservation_id='r-dvg5v145',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1889456510',owner_user_name='tempest-LiveMigrationTest-1889456510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T09:58:34Z,user_data=None,user_id='7f7875c0084c46fdb2e7b37e4fc44faf',uuid=8f62292f-5719-4b19-9188-3715b94493a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", 
"network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.152 280943 DEBUG nova.network.os_vif_util [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Converting VIF {"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.153 280943 DEBUG nova.network.os_vif_util [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:21:74,bridge_name='br-int',has_traffic_filtering=True,id=737e82a6-2634-47df-b8a7-ec21a927cc3f,network=Network(d679e465-8656-4403-afa0-724657d33ec4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap737e82a6-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.153 280943 DEBUG os_vif [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Plugging vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:da:21:74,bridge_name='br-int',has_traffic_filtering=True,id=737e82a6-2634-47df-b8a7-ec21a927cc3f,network=Network(d679e465-8656-4403-afa0-724657d33ec4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap737e82a6-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.154 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.155 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.155 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.158 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.159 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap737e82a6-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.159 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap737e82a6-26, col_values=(('external_ids', {'iface-id': '737e82a6-2634-47df-b8a7-ec21a927cc3f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:21:74', 'vm-uuid': '8f62292f-5719-4b19-9188-3715b94493a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.204 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.207 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.210 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.211 280943 INFO os_vif [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:21:74,bridge_name='br-int',has_traffic_filtering=True,id=737e82a6-2634-47df-b8a7-ec21a927cc3f,network=Network(d679e465-8656-4403-afa0-724657d33ec4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap737e82a6-26')#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.259 280943 DEBUG nova.virt.libvirt.driver [None 
req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.259 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.260 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] No VIF found with MAC fa:16:3e:da:21:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.261 280943 INFO nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Using config drive#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.300 280943 DEBUG nova.storage.rbd_utils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] rbd image 8f62292f-5719-4b19-9188-3715b94493a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 177 active+clean; 285 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 7.6 MiB/s rd, 7.5 MiB/s wr, 334 op/s Nov 23 04:58:39 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:39.424 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:53Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=27d340a7-60a4-4a73-9f16-bae5ab3411da, ip_allocation=immediate, mac_address=fa:16:3e:fe:c3:5c, name=tempest-parent-2092561411, network_id=81348c6d-951a-4399-8703-476056b57fe9, port_security_enabled=True, project_id=a2148c18d8f24a6db12dc22c787e8b2e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=14, security_groups=['ff44a28d-1e1f-4163-b206-fdf77022bf0b'], standard_attr_id=404, status=DOWN, tags=[], tenant_id=a2148c18d8f24a6db12dc22c787e8b2e, trunk_details=sub_ports=[], trunk_id=c096332d-2835-45dd-944d-79d0f9cdb00a, updated_at=2025-11-23T09:58:37Z on network 81348c6d-951a-4399-8703-476056b57fe9#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.561 280943 INFO nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Creating config drive at /var/lib/nova/instances/8f62292f-5719-4b19-9188-3715b94493a7/disk.config#033[00m Nov 23 04:58:39 localhost 
nova_compute[280939]: 2025-11-23 09:58:39.566 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f62292f-5719-4b19-9188-3715b94493a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_acc7zxe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.691 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f62292f-5719-4b19-9188-3715b94493a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_acc7zxe" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:39 localhost systemd[1]: tmp-crun.3W5lzA.mount: Deactivated successfully. Nov 23 04:58:39 localhost podman[311002]: 2025-11-23 09:58:39.717503315 +0000 UTC m=+0.077495381 container kill 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 04:58:39 localhost dnsmasq[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/addn_hosts - 2 addresses Nov 23 04:58:39 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/host Nov 23 04:58:39 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/opts Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.761 280943 DEBUG nova.storage.rbd_utils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] rbd image 8f62292f-5719-4b19-9188-3715b94493a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.771 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f62292f-5719-4b19-9188-3715b94493a7/disk.config 8f62292f-5719-4b19-9188-3715b94493a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.816 280943 DEBUG nova.network.neutron [-] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.840 280943 INFO nova.compute.manager [-] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Took 2.95 seconds to deallocate network for instance.#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.923 280943 DEBUG oslo_concurrency.lockutils [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.924 280943 DEBUG oslo_concurrency.lockutils [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.928 280943 DEBUG oslo_concurrency.lockutils [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:39 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:39.934 262301 INFO neutron.agent.dhcp.agent [None req-8e74fbdd-0aad-47b4-8d31-b41df3ef41b6 - - - - - -] DHCP configuration for ports {'27d340a7-60a4-4a73-9f16-bae5ab3411da'} is completed#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.960 280943 DEBUG nova.network.neutron [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Updated VIF entry in instance network info cache for port a1846659-6b91-4156-9939-085b30454143. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.961 280943 DEBUG nova.network.neutron [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Updating instance_info_cache with network_info: [{"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.966 280943 INFO nova.scheduler.client.report [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Deleted allocations for instance 76d6f171-13c9-4730-8ed3-ab467ef6831a#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.977 280943 DEBUG oslo_concurrency.lockutils [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Releasing lock "refresh_cache-1148b5a9-4da9-491f-8952-80c4a965fe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.977 280943 DEBUG nova.compute.manager [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.978 280943 DEBUG oslo_concurrency.lockutils [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.984 
280943 DEBUG oslo_concurrency.lockutils [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.984 280943 DEBUG oslo_concurrency.lockutils [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.985 280943 DEBUG nova.compute.manager [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] No waiting events found dispatching network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.985 280943 WARNING nova.compute.manager [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received unexpected event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da for instance with vm_state active and task_state deleting.#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.987 280943 DEBUG nova.compute.manager [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received event network-changed-737e82a6-2634-47df-b8a7-ec21a927cc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.988 280943 DEBUG nova.compute.manager [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Refreshing instance network info cache due to event network-changed-737e82a6-2634-47df-b8a7-ec21a927cc3f. 
external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.988 280943 DEBUG oslo_concurrency.lockutils [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "refresh_cache-8f62292f-5719-4b19-9188-3715b94493a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.989 280943 DEBUG oslo_concurrency.lockutils [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquired lock "refresh_cache-8f62292f-5719-4b19-9188-3715b94493a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.989 280943 DEBUG nova.network.neutron [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Refreshing network info cache for port 737e82a6-2634-47df-b8a7-ec21a927cc3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.997 280943 DEBUG oslo_concurrency.processutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f62292f-5719-4b19-9188-3715b94493a7/disk.config 8f62292f-5719-4b19-9188-3715b94493a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:39 localhost nova_compute[280939]: 2025-11-23 09:58:39.998 280943 INFO nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Deleting local config drive /var/lib/nova/instances/8f62292f-5719-4b19-9188-3715b94493a7/disk.config because it was imported into RBD.#033[00m Nov 23 04:58:40 localhost systemd[1]: Stopping User Manager for UID 42436... Nov 23 04:58:40 localhost systemd[309810]: Activating special unit Exit the Session... Nov 23 04:58:40 localhost systemd[309810]: Stopped target Main User Target. Nov 23 04:58:40 localhost systemd[309810]: Stopped target Basic System. Nov 23 04:58:40 localhost systemd[309810]: Stopped target Paths. Nov 23 04:58:40 localhost systemd[309810]: Stopped target Sockets. Nov 23 04:58:40 localhost systemd[309810]: Stopped target Timers. Nov 23 04:58:40 localhost systemd[309810]: Stopped Mark boot as successful after the user session has run 2 minutes. Nov 23 04:58:40 localhost systemd[309810]: Stopped Daily Cleanup of User's Temporary Directories. Nov 23 04:58:40 localhost systemd[309810]: Closed D-Bus User Message Bus Socket. Nov 23 04:58:40 localhost systemd[309810]: Stopped Create User's Volatile Files and Directories. Nov 23 04:58:40 localhost systemd[309810]: Removed slice User Application Slice. Nov 23 04:58:40 localhost systemd[309810]: Reached target Shutdown. 
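[Editor's sketch] The config-drive sequence above (mkisofs building disk.config with volume label config-2, rbd import into the vms pool, then deleting the local file) can be reproduced outside nova with the same two commands. A minimal stand-alone sketch in Python, using subprocess where nova uses oslo_concurrency.processutils; the helper name and the metadata_dir argument are illustrative, the flags and paths are taken from the log:

# Sketch: rebuild the config-drive flow seen above with plain subprocess calls.
# Nova itself goes through oslo_concurrency.processutils.execute(); this is an
# illustrative stand-alone equivalent, not nova's implementation.
import os
import subprocess

def build_and_import_config_drive(instance_uuid, metadata_dir,
                                  pool="vms", ceph_user="openstack",
                                  ceph_conf="/etc/ceph/ceph.conf"):
    iso_path = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
    # 1. Build the ISO9660 config drive (volume label "config-2" is what
    #    cloud-init looks for).
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso_path,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
        check=True)
    # 2. Import the ISO into the Ceph "vms" pool as <uuid>_disk.config.
    subprocess.run(
        ["rbd", "import", "--pool", pool, iso_path,
         f"{instance_uuid}_disk.config", "--image-format=2",
         "--id", ceph_user, "--conf", ceph_conf],
        check=True)
    # 3. The local copy is no longer needed once it lives in RBD, matching the
    #    "Deleting local config drive ... because it was imported into RBD" entry.
    os.unlink(iso_path)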
Nov 23 04:58:40 localhost systemd[309810]: Finished Exit the Session. Nov 23 04:58:40 localhost systemd[309810]: Reached target Exit the Session. Nov 23 04:58:40 localhost systemd[1]: user@42436.service: Deactivated successfully. Nov 23 04:58:40 localhost systemd[1]: Stopped User Manager for UID 42436. Nov 23 04:58:40 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436... Nov 23 04:58:40 localhost kernel: device tap737e82a6-26 entered promiscuous mode Nov 23 04:58:40 localhost NetworkManager[5966]: [1763891920.0830] manager: (tap737e82a6-26): new Tun device (/org/freedesktop/NetworkManager/Devices/22) Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.084 280943 DEBUG oslo_concurrency.lockutils [None req-cb3b5c3e-72bf-44e2-a968-b24e1d742f04 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 4.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:40 localhost ovn_controller[153771]: 2025-11-23T09:58:40Z|00076|binding|INFO|Claiming lport 737e82a6-2634-47df-b8a7-ec21a927cc3f for this chassis. Nov 23 04:58:40 localhost ovn_controller[153771]: 2025-11-23T09:58:40Z|00077|binding|INFO|737e82a6-2634-47df-b8a7-ec21a927cc3f: Claiming fa:16:3e:da:21:74 10.100.0.10 Nov 23 04:58:40 localhost ovn_controller[153771]: 2025-11-23T09:58:40Z|00078|binding|INFO|Claiming lport fd30dda9-c731-47dd-b319-ebcca717b708 for this chassis. Nov 23 04:58:40 localhost ovn_controller[153771]: 2025-11-23T09:58:40Z|00079|binding|INFO|fd30dda9-c731-47dd-b319-ebcca717b708: Claiming fa:16:3e:4f:95:ad 19.80.0.95 Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.087 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:40 localhost systemd-udevd[311069]: Network interface NamePolicy= disabled on kernel command line. Nov 23 04:58:40 localhost systemd[1]: run-user-42436.mount: Deactivated successfully. 
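[Editor's sketch] The tap737e82a6-26 device appearing here, and ovn-controller claiming the lport right after, are the visible result of the os-vif plug logged earlier (AddPortCommand plus the DbSetCommand on external_ids, then "Successfully plugged vif"). A rough sketch of driving the same plug through the public os-vif API; field values are copied from the log, and the exact set of required fields varies between os-vif versions:

# Sketch: the VIF plug that produced "Successfully plugged vif ..." earlier,
# driven through the public os-vif API instead of from inside nova.
import os_vif
from os_vif import objects

os_vif.initialize()  # loads the 'ovs' plugin (among others) via stevedore

instance = objects.instance_info.InstanceInfo(
    uuid="8f62292f-5719-4b19-9188-3715b94493a7",
    name="instance-0000000a")  # the libvirt domain name seen further down

vif = objects.vif.VIFOpenVSwitch(
    id="737e82a6-2634-47df-b8a7-ec21a927cc3f",
    address="fa:16:3e:da:21:74",
    vif_name="tap737e82a6-26",
    bridge_name="br-int",
    has_traffic_filtering=True,
    preserve_on_delete=True,
    port_profile=objects.vif.VIFPortProfileOpenVSwitch(
        interface_id="737e82a6-2634-47df-b8a7-ec21a927cc3f",
        datapath_type="system"))

# The ovs plugin adds the port to br-int and writes the external_ids
# (iface-id, attached-mac, vm-uuid) that ovn-controller keys on when it
# claims the lport in the entries above.
os_vif.plug(vif, instance)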
Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.098 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:95:ad 19.80.0.95'], port_security=['fa:16:3e:4f:95:ad 19.80.0.95'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['737e82a6-2634-47df-b8a7-ec21a927cc3f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1587702031', 'neutron:cidrs': '19.80.0.95/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-903951dd-448c-4453-aa24-f24a53269074', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1587702031', 'neutron:project_id': '253c88568a634476a6c1284eed6a9464', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3350144-9b09-432b-a32e-ef84bb8bf494', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=154cf14d-e57b-4715-bce9-5bdd1a0ded15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=fd30dda9-c731-47dd-b319-ebcca717b708) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.100 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:21:74 10.100.0.10'], port_security=['fa:16:3e:da:21:74 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1925970765', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8f62292f-5719-4b19-9188-3715b94493a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d679e465-8656-4403-afa0-724657d33ec4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1925970765', 'neutron:project_id': '253c88568a634476a6c1284eed6a9464', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3350144-9b09-432b-a32e-ef84bb8bf494', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a90e812-f218-49cd-a3ab-6bc1317ad730, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=737e82a6-2634-47df-b8a7-ec21a927cc3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.102 159415 INFO neutron.agent.ovn.metadata.agent [-] Port fd30dda9-c731-47dd-b319-ebcca717b708 in datapath 903951dd-448c-4453-aa24-f24a53269074 bound to our chassis#033[00m Nov 23 04:58:40 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully. Nov 23 04:58:40 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436. 
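[Editor's sketch] The two "Matched UPDATE: PortBindingUpdatedEvent" entries above come from the metadata agent watching the OVN Southbound Port_Binding table through ovsdbapp row events. A simplified sketch of such an event class, assuming ovsdbapp's RowEvent base; the real class in neutron.agent.ovn.metadata.agent adds chassis and type filtering plus the priority attribute seen in the log:

# Sketch: a Port_Binding update watcher in the spirit of the matches logged
# above, assuming ovsdbapp's RowEvent base class.
from ovsdbapp.backend.ovs_idl import event as row_event


class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Fire on updates to the OVN Southbound Port_Binding table.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def matches(self, event, row, old=None):
        if not super().matches(event, row, old):
            return False
        # Only react when the port just gained a chassis, mirroring the
        # old=Port_Binding(chassis=[]) transitions in the log.
        return bool(getattr(row, 'chassis', None)) and not getattr(old, 'chassis', None)

    def run(self, event, row, old):
        # Here the agent decides whether the port is bound to this chassis and,
        # if so, provisions metadata for its datapath (next entries).
        print('port %s is now bound' % row.logical_port)

# Registration goes through the Southbound IDL's notify handler, e.g.
# idl.notify_handler.watch_event(PortBindingUpdatedEvent()); the exact wiring
# depends on the ovsdbapp/neutron version.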
Nov 23 04:58:40 localhost systemd[1]: Removed slice User Slice of UID 42436. Nov 23 04:58:40 localhost NetworkManager[5966]: [1763891920.1068] device (tap737e82a6-26): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Nov 23 04:58:40 localhost NetworkManager[5966]: [1763891920.1078] device (tap737e82a6-26): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.109 159415 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 903951dd-448c-4453-aa24-f24a53269074#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.120 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[6c72ff6c-8f4f-4546-acbe-2344830f3fd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.120 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap903951dd-41 in ovnmeta-903951dd-448c-4453-aa24-f24a53269074 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.122 308301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap903951dd-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.123 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed905d0-1a6b-4268-88e0-d8f85a0593a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.124 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[df2d4dd8-0946-4e81-8b2a-2ef0d4506a85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.124 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:40 localhost ovn_controller[153771]: 2025-11-23T09:58:40Z|00080|binding|INFO|Setting lport 737e82a6-2634-47df-b8a7-ec21a927cc3f ovn-installed in OVS Nov 23 04:58:40 localhost ovn_controller[153771]: 2025-11-23T09:58:40Z|00081|binding|INFO|Setting lport 737e82a6-2634-47df-b8a7-ec21a927cc3f up in Southbound Nov 23 04:58:40 localhost ovn_controller[153771]: 2025-11-23T09:58:40Z|00082|binding|INFO|Setting lport fd30dda9-c731-47dd-b319-ebcca717b708 up in Southbound Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.128 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.135 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[eeada1ce-7801-47df-922f-8d334d21383b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost systemd-machined[202731]: New machine qemu-3-instance-0000000a. Nov 23 04:58:40 localhost systemd[1]: Started Virtual Machine qemu-3-instance-0000000a. 
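[Editor's sketch] "Provisioning metadata for network ..." and "Creating VETH tap903951dd-41 in ovnmeta-..." above describe a per-network namespace with a veth pair: the inner end carries the metadata address, the outer end is added to br-int a few entries further down. The agent does this with pyroute2 calls behind oslo.privsep; the sketch below shows the equivalent iproute2 commands, with device and namespace names taken from the log and the address handling simplified:

# Sketch: the namespace and veth plumbing behind the "Creating VETH" entry.
import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

ns = "ovnmeta-903951dd-448c-4453-aa24-f24a53269074"
outer, inner = "tap903951dd-40", "tap903951dd-41"  # root-namespace end / in-namespace end

sh("ip", "netns", "add", ns)
sh("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
sh("ip", "link", "set", inner, "netns", ns)
sh("ip", "link", "set", outer, "up")
sh("ip", "netns", "exec", ns, "ip", "link", "set", inner, "up")
# Give the inner end the metadata address that haproxy binds to (see the
# haproxy_cfg dump further down); the real agent also assigns the network's
# metadata port IP and MAC.
sh("ip", "netns", "exec", ns, "ip", "addr", "add", "169.254.169.254/32", "dev", inner)
# The outer end is then attached to br-int with external_ids:iface-id set to the
# metadata port UUID, which corresponds to the AddPortCommand/DbSetCommand on
# tap903951dd-40 logged below.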
Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.147 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[e33c57e4-af08-4883-8e5e-61f4cdfadd03]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.173 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[427494a6-24f2-4ddb-b590-4de0b51e7be1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost NetworkManager[5966]: [1763891920.1817] manager: (tap903951dd-40): new Veth device (/org/freedesktop/NetworkManager/Devices/23) Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.180 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[38433199-65cd-49d8-b3d5-311867925b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.215 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[ed48c939-73cf-4b7d-b976-5bbce61c4d9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.220 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[df0ce8e4-b5f4-42b0-a31c-ccfd54cfee9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap903951dd-41: link becomes ready Nov 23 04:58:40 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap903951dd-40: link becomes ready Nov 23 04:58:40 localhost NetworkManager[5966]: [1763891920.2402] device (tap903951dd-40): carrier: link connected Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.244 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[de73f23e-fc8a-4f17-b74a-362a8281725f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.264 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[c547f44c-bbc4-403a-b031-8a41b844599d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap903951dd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:47:c5:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 
'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1185665, 'reachable_time': 23853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311121, 'error': None, 'target': 'ovnmeta-903951dd-448c-4453-aa24-f24a53269074', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.280 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[01950396-ac06-4066-9827-28dc1ee95f01]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 
'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:c591'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1185665, 'tstamp': 1185665}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311125, 'error': None, 'target': 'ovnmeta-903951dd-448c-4453-aa24-f24a53269074', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.310 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8c200169-b833-427a-8afa-8074a4dc3646]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap903951dd-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:47:c5:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1185665, 'reachable_time': 23853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 
'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311133, 'error': None, 'target': 'ovnmeta-903951dd-448c-4453-aa24-f24a53269074', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.336 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfa1dd0-5716-4692-8b49-f6836c552cc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.396 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[18786c49-0f30-470f-aa89-ef0dbfa66099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.397 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap903951dd-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.398 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.399 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap903951dd-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:40 localhost kernel: device tap903951dd-40 entered promiscuous mode Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.440 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.444 280943 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.445 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap903951dd-40, col_values=(('external_ids', {'iface-id': 'b83bb60d-d579-4f8d-9e2c-3885d238bb26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.448 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:40 localhost ovn_controller[153771]: 2025-11-23T09:58:40Z|00083|binding|INFO|Releasing lport b83bb60d-d579-4f8d-9e2c-3885d238bb26 from this chassis (sb_readonly=0) Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.452 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.453 159415 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/903951dd-448c-4453-aa24-f24a53269074.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/903951dd-448c-4453-aa24-f24a53269074.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.453 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.454 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] VM Started (Lifecycle Event)#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.454 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[92181773-5468-4dc4-8b10-838dd0f3e85f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.455 159415 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: global Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: log /dev/log local0 debug Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: log-tag haproxy-metadata-proxy-903951dd-448c-4453-aa24-f24a53269074 Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: user root Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: group root Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: maxconn 1024 Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: pidfile /var/lib/neutron/external/pids/903951dd-448c-4453-aa24-f24a53269074.pid.haproxy Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: daemon Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: defaults Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: log global Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: mode http Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: option httplog Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: option dontlognull Nov 23 
04:58:40 localhost ovn_metadata_agent[159410]: option http-server-close Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: option forwardfor Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: retries 3 Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: timeout http-request 30s Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: timeout connect 30s Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: timeout client 32s Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: timeout server 32s Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: timeout http-keep-alive 30s Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: listen listener Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: bind 169.254.169.254:80 Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: server metadata /var/lib/neutron/metadata_proxy Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: http-request add-header X-OVN-Network-ID 903951dd-448c-4453-aa24-f24a53269074 Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Nov 23 04:58:40 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:40.458 159415 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-903951dd-448c-4453-aa24-f24a53269074', 'env', 'PROCESS_TAG=haproxy-903951dd-448c-4453-aa24-f24a53269074', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/903951dd-448c-4453-aa24-f24a53269074.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.462 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.475 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.482 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.482 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] VM Paused (Lifecycle Event)#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.504 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.507 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.519 280943 DEBUG nova.network.neutron [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Updated VIF entry in instance network info cache for port 737e82a6-2634-47df-b8a7-ec21a927cc3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.519 280943 DEBUG nova.network.neutron [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Updating instance_info_cache with network_info: [{"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.540 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.543 280943 DEBUG oslo_concurrency.lockutils [req-3ffab985-cd4c-41ee-99b1-dd4248da0ced req-d337e17d-98b2-44e3-b26e-9a911b229e1e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Releasing lock "refresh_cache-8f62292f-5719-4b19-9188-3715b94493a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:58:40 localhost podman[311183]: Nov 23 04:58:40 localhost podman[311183]: 2025-11-23 09:58:40.875386384 +0000 UTC m=+0.092842881 container create 18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.895 280943 DEBUG nova.compute.manager [req-592a7716-215e-45a1-bdf6-15106e54567f req-0ae76a4b-fcd4-493e-9f20-b371c1b6920c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received event network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.896 280943 DEBUG oslo_concurrency.lockutils [req-592a7716-215e-45a1-bdf6-15106e54567f req-0ae76a4b-fcd4-493e-9f20-b371c1b6920c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "8f62292f-5719-4b19-9188-3715b94493a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.896 280943 DEBUG oslo_concurrency.lockutils [req-592a7716-215e-45a1-bdf6-15106e54567f req-0ae76a4b-fcd4-493e-9f20-b371c1b6920c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.897 280943 DEBUG oslo_concurrency.lockutils [req-592a7716-215e-45a1-bdf6-15106e54567f req-0ae76a4b-fcd4-493e-9f20-b371c1b6920c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.897 280943 DEBUG nova.compute.manager [req-592a7716-215e-45a1-bdf6-15106e54567f req-0ae76a4b-fcd4-493e-9f20-b371c1b6920c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Processing event 
network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.899 280943 DEBUG nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.902 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.902 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] VM Resumed (Lifecycle Event)#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.905 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.909 280943 INFO nova.virt.libvirt.driver [-] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Instance spawned successfully.#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.910 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.923 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:40 localhost systemd[1]: Started libpod-conmon-18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1.scope. 
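The two ovn_metadata_agent blocks above dump the full haproxy configuration the agent renders for network 903951dd-448c-4453-aa24-f24a53269074 and the rootwrap command it then runs to start haproxy inside the ovnmeta- namespace. Below is a minimal standalone sketch of that pattern; render_metadata_proxy_config and launch_metadata_proxy are hypothetical helper names for illustration (the real code paths are neutron.agent.ovn.metadata.driver.create_config_file and neutron.agent.linux.utils.create_process, and the real agent goes through neutron-rootwrap rather than calling sudo/haproxy directly).

    import subprocess

    # Template reproduced from the haproxy_cfg dump logged above.
    HAPROXY_CFG_TEMPLATE = """\
    global
        log /dev/log local0 debug
        log-tag haproxy-metadata-proxy-{network_id}
        user root
        group root
        maxconn 1024
        pidfile {pidfile}
        daemon

    defaults
        log global
        mode http
        option httplog
        option dontlognull
        option http-server-close
        option forwardfor
        retries 3
        timeout http-request 30s
        timeout connect 30s
        timeout client 32s
        timeout server 32s
        timeout http-keep-alive 30s

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {network_id}
    """

    def render_metadata_proxy_config(network_id):
        # Per-network pidfile, as in the log: .../pids/<network_id>.pid.haproxy
        pidfile = f"/var/lib/neutron/external/pids/{network_id}.pid.haproxy"
        return HAPROXY_CFG_TEMPLATE.format(network_id=network_id, pidfile=pidfile)

    def launch_metadata_proxy(network_id, cfg_path):
        # Mirrors the logged command: run haproxy inside ovnmeta-<network_id>
        # against the rendered config file.
        cmd = ["sudo", "ip", "netns", "exec", f"ovnmeta-{network_id}",
               "haproxy", "-f", cfg_path]
        subprocess.run(cmd, check=True)

The "Unable to access /var/lib/neutron/external/pids/....pid.haproxy" DEBUG line just before the config dump is the agent checking that pidfile first; ENOENT means no proxy is running yet for this network, so a fresh one is spawned.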
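Also visible above is the handshake between the spawning thread and the Neutron notification: nova-compute pops the externally delivered network-vif-plugged event under the per-instance "<uuid>-events" lock, and the spawn path then reports "Instance event wait completed in 0 seconds". A rough sketch of that handshake using oslo.concurrency's lockutils and a threading.Event; the function names are hypothetical simplifications of nova.compute.manager.InstanceEvents / wait_for_instance_event.

    import threading
    from oslo_concurrency import lockutils

    _events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, event_name):
        # Spawning thread registers interest before plugging the VIF.
        with lockutils.lock(f'{instance_uuid}-events'):
            ev = _events.setdefault((instance_uuid, event_name), threading.Event())
        return ev

    def pop_instance_event(instance_uuid, event_name):
        # The thread handling the Neutron "network-vif-plugged" notification
        # pops the waiter under the same lock and wakes it up.
        with lockutils.lock(f'{instance_uuid}-events'):
            ev = _events.pop((instance_uuid, event_name), None)
        if ev is not None:
            ev.set()
        return ev

    # Usage: the spawn path calls prepare_for_event(...), starts the guest,
    # then blocks on ev.wait(timeout=...) for 'network-vif-plugged-<port_id>'.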
Nov 23 04:58:40 localhost podman[311183]: 2025-11-23 09:58:40.829846806 +0000 UTC m=+0.047303303 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.936 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.938 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.939 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.940 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.941 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.942 280943 DEBUG nova.virt.libvirt.driver [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 23 04:58:40 localhost systemd[1]: tmp-crun.t0NW0h.mount: Deactivated successfully. Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.948 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 23 04:58:40 localhost systemd[1]: Started libcrun container. 
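The six "Found default for ... of ..." lines above are the libvirt driver back-filling image properties the image did not define (hw_cdrom_bus=sata, hw_disk_bus=virtio, hw_input_bus=usb, hw_pointer_model=usbtablet, hw_video_model=virtio, hw_vif_model=virtio) so later rebuilds and migrations keep the same guest hardware. A toy sketch of that back-fill step; register_undefined_image_properties is a hypothetical name, the real logic is _register_undefined_instance_details in nova/virt/libvirt/driver.py and derives the defaults from the guest configuration rather than a static dict.

    # Defaults observed on this host per the log above.
    DETECTED_DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_image_properties(image_properties):
        """Return a copy of image_properties with missing keys back-filled."""
        filled = dict(image_properties)
        for prop, default in DETECTED_DEFAULTS.items():
            if prop not in filled:
                filled[prop] = default  # "Found default for <prop> of <value>"
        return filled

    # Example: an image that only pins hw_disk_bus keeps that value,
    # everything else is filled from the detected defaults.
    print(register_undefined_image_properties({'hw_disk_bus': 'scsi'}))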
Nov 23 04:58:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebac15deed2ac53f5fe366af83f53bf1f761ee18532bf8a4f5fdbcd87b331b60/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:58:40 localhost podman[311183]: 2025-11-23 09:58:40.980343566 +0000 UTC m=+0.197800073 container init 18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0) Nov 23 04:58:40 localhost nova_compute[280939]: 2025-11-23 09:58:40.984 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Nov 23 04:58:40 localhost podman[311183]: 2025-11-23 09:58:40.990528599 +0000 UTC m=+0.207985106 container start 18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118) Nov 23 04:58:41 localhost nova_compute[280939]: 2025-11-23 09:58:41.014 280943 INFO nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Took 6.56 seconds to spawn the instance on the hypervisor.#033[00m Nov 23 04:58:41 localhost nova_compute[280939]: 2025-11-23 09:58:41.014 280943 DEBUG nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:41 localhost neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074[311198]: [NOTICE] (311202) : New worker (311204) forked Nov 23 04:58:41 localhost neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074[311198]: [NOTICE] (311202) : Loading success. 
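Although the agent's command line above just says "haproxy -f <conf>", what it actually reaches is a wrapper (the ovn_metadata_agent container later in this log mounts /var/lib/neutron/ovn_metadata_haproxy_wrapper at /usr/local/bin/haproxy), and the visible result is the neutron-haproxy-ovnmeta-<network> podman container whose create/init/start events and "New worker forked" notices appear above. A hypothetical wrapper sketch in Python; the real wrapper's exact podman flags are not shown in this log, so the ones below are illustrative assumptions.

    import subprocess
    import sys

    IMAGE = ('quay.io/podified-antelope-centos9/'
             'openstack-neutron-metadata-agent-ovn:current-podified')

    def run_haproxy_container(network_id, haproxy_args):
        # Host network/PID so haproxy can bind 169.254.169.254:80 inside the
        # already-entered ovnmeta- namespace and write its pidfile under
        # /var/lib/neutron (shared with the agent).
        name = f'neutron-haproxy-ovnmeta-{network_id}'
        cmd = [
            'podman', 'run', '--detach', '--name', name,
            '--network', 'host', '--pid', 'host', '--privileged',
            '-v', '/var/lib/neutron:/var/lib/neutron:shared,z',
            IMAGE, 'haproxy', *haproxy_args,
        ]
        subprocess.run(cmd, check=True)

    if __name__ == '__main__':
        # e.g. network 903951dd-448c-4453-aa24-f24a53269074 with
        # ['-f', '/var/lib/neutron/ovn-metadata-proxy/<network_id>.conf']
        run_haproxy_container(sys.argv[1], sys.argv[2:])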
Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.055 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 737e82a6-2634-47df-b8a7-ec21a927cc3f in datapath d679e465-8656-4403-afa0-724657d33ec4 unbound from our chassis#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.060 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port a4ae488d-6e50-4466-beba-eaab4efb551d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.060 159415 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d679e465-8656-4403-afa0-724657d33ec4#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.069 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e6b286-2ae0-45ad-ba4a-f724722e5bad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.070 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd679e465-81 in ovnmeta-d679e465-8656-4403-afa0-724657d33ec4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.073 308301 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd679e465-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.073 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[573c717c-2f11-4046-a73e-c3fec4974eeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.074 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2a8807-6ed5-485b-9c6b-33027f29a7b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost nova_compute[280939]: 2025-11-23 09:58:41.080 280943 INFO nova.compute.manager [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Took 8.29 seconds to build instance.#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.083 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[b66e3fbd-451b-4cff-ab49-4b1a0b3be63d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost nova_compute[280939]: 2025-11-23 09:58:41.095 280943 DEBUG oslo_concurrency.lockutils [None req-2f7420e7-5120-4530-9aaa-c195b70f8932 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 8.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.098 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[d6287be4-e504-43b7-82c9-aa30bf180da2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.128 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2a279e-b8dd-435f-857e-a4441d092d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost NetworkManager[5966]: [1763891921.1396] manager: (tapd679e465-80): new Veth device (/org/freedesktop/NetworkManager/Devices/24) Nov 23 04:58:41 localhost systemd-udevd[311102]: Network interface NamePolicy= disabled on kernel command line. Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.140 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[966f189a-9160-424d-a4b5-71e352b0a232]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.175 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[316e1caa-5a9f-4780-9c18-dee2df2da0e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.179 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[ac82cd62-c265-4f49-8408-97f25f7872b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapd679e465-81: link becomes ready Nov 23 04:58:41 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapd679e465-80: link becomes ready Nov 23 04:58:41 localhost NetworkManager[5966]: [1763891921.2001] device (tapd679e465-80): carrier: link connected Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.204 310132 DEBUG oslo.privsep.daemon [-] privsep: reply[42a24119-cf41-4eee-aee2-3dec646be9f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.221 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[f52bd247-0771-43e6-8529-797e4208bc29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd679e465-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:a8:02:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 
'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1185761, 'reachable_time': 28409, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311223, 'error': None, 'target': 'ovnmeta-d679e465-8656-4403-afa0-724657d33ec4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.234 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[e42bbc24-8284-48c2-addb-7194137950e3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:218'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 
4294967295, 'cstamp': 1185761, 'tstamp': 1185761}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311224, 'error': None, 'target': 'ovnmeta-d679e465-8656-4403-afa0-724657d33ec4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.250 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[d82c7b27-944a-4cb6-8c31-02dc718530aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd679e465-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:a8:02:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1185761, 'reachable_time': 28409, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 
'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311225, 'error': None, 'target': 'ovnmeta-d679e465-8656-4403-afa0-724657d33ec4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.278 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[48a14038-a9ad-442f-82b2-9c037fa47240]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:41.286 2 INFO neutron.agent.securitygroups_rpc [None req-b6d2f56d-2805-44c2-9e36-7ffa8fc09e14 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Security group member updated ['ff44a28d-1e1f-4163-b206-fdf77022bf0b']#033[00m Nov 23 04:58:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 177 active+clean; 285 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.7 MiB/s wr, 292 op/s Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.337 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[29474952-45b3-43bc-96ec-87c7f06d2c81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.340 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd679e465-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.341 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.342 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd679e465-80, may_exist=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:41 localhost nova_compute[280939]: 2025-11-23 09:58:41.345 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:41 localhost kernel: device tapd679e465-80 entered promiscuous mode Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.353 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd679e465-80, col_values=(('external_ids', {'iface-id': '9b50ca15-3b72-42c0-998b-33441ea57460'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:41 localhost nova_compute[280939]: 2025-11-23 09:58:41.361 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.362 159415 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d679e465-8656-4403-afa0-724657d33ec4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d679e465-8656-4403-afa0-724657d33ec4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Nov 23 04:58:41 localhost ovn_controller[153771]: 2025-11-23T09:58:41Z|00084|binding|INFO|Releasing lport 9b50ca15-3b72-42c0-998b-33441ea57460 from this chassis (sb_readonly=0) Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.363 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e6f9b8-e969-4725-8127-a239998619ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.365 159415 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: global Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: log /dev/log local0 debug Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: log-tag haproxy-metadata-proxy-d679e465-8656-4403-afa0-724657d33ec4 Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: user root Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: group root Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: maxconn 1024 Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: pidfile /var/lib/neutron/external/pids/d679e465-8656-4403-afa0-724657d33ec4.pid.haproxy Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: daemon Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: defaults Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: log global Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: mode http Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: option httplog Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: option dontlognull Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: option http-server-close Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: option forwardfor Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: retries 3 Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: timeout http-request 30s Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: timeout connect 30s Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: timeout client 32s Nov 23 04:58:41 localhost 
ovn_metadata_agent[159410]: timeout server 32s Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: timeout http-keep-alive 30s Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: listen listener Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: bind 169.254.169.254:80 Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: server metadata /var/lib/neutron/metadata_proxy Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: http-request add-header X-OVN-Network-ID d679e465-8656-4403-afa0-724657d33ec4 Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Nov 23 04:58:41 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:41.365 159415 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d679e465-8656-4403-afa0-724657d33ec4', 'env', 'PROCESS_TAG=haproxy-d679e465-8656-4403-afa0-724657d33ec4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d679e465-8656-4403-afa0-724657d33ec4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Nov 23 04:58:41 localhost nova_compute[280939]: 2025-11-23 09:58:41.376 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:41 localhost nova_compute[280939]: 2025-11-23 09:58:41.466 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:41 localhost podman[311257]: Nov 23 04:58:41 localhost podman[311257]: 2025-11-23 09:58:41.901714675 +0000 UTC m=+0.111494844 container create 385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:41 localhost systemd[1]: Started libpod-conmon-385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705.scope. Nov 23 04:58:41 localhost podman[311257]: 2025-11-23 09:58:41.84846268 +0000 UTC m=+0.058242849 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 23 04:58:41 localhost systemd[1]: tmp-crun.L7P1CO.mount: Deactivated successfully. Nov 23 04:58:41 localhost systemd[1]: Started libcrun container. 
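The ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) are how the metadata agent plugs its tapd679e465-80 VETH end into the integration bridge and tags it with external_ids:iface-id, which ovn-controller then matches against the Port_Binding's logical_port to claim the port on this chassis. A minimal sketch of the same three commands against the local ovsdb-server; the socket path is an assumption and may differ per deployment.

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed local ovsdb socket
    PORT = 'tapd679e465-80'
    IFACE_ID = '9b50ca15-3b72-42c0-998b-33441ea57460'

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        # Make sure the port is not left on the external bridge from a prior run.
        txn.add(ovs.del_port(PORT, bridge='br-ex', if_exists=True))
        # Plug it into the integration bridge.
        txn.add(ovs.add_port('br-int', PORT, may_exist=True))
        # iface-id is what ovn-controller uses to bind the logical port.
        txn.add(ovs.db_set('Interface', PORT,
                           ('external_ids', {'iface-id': IFACE_ID})))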
Nov 23 04:58:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e16593531749eb54343c61ebd96167d42535be65e346d9199e278c2a062e853/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:58:41 localhost podman[311257]: 2025-11-23 09:58:41.991649906 +0000 UTC m=+0.201430095 container init 385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 04:58:42 localhost podman[311257]: 2025-11-23 09:58:42.002162739 +0000 UTC m=+0.211942908 container start 385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:42 localhost neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4[311271]: [NOTICE] (311275) : New worker (311277) forked Nov 23 04:58:42 localhost neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4[311271]: [NOTICE] (311275) : Loading success. 
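The large privsep replies earlier in this section (the RTM_NEWLINK and RTM_NEWADDR dumps for tapd679e465-81 with target 'ovnmeta-d679e465-8656-4403-afa0-724657d33ec4') are netlink answers the agent uses to confirm the VETH end inside the namespace is up and carries its link-local address. Neutron gets them through oslo.privsep-wrapped ip_lib helpers; the sketch below performs the equivalent query directly with pyroute2, assuming the pyroute2 package is available and the ovnmeta- namespace already exists.

    from pyroute2 import NetNS

    NAMESPACE = 'ovnmeta-d679e465-8656-4403-afa0-724657d33ec4'
    IFNAME = 'tapd679e465-81'

    with NetNS(NAMESPACE) as ns:
        # Resolve the interface index inside the namespace.
        idx = ns.link_lookup(ifname=IFNAME)[0]
        # One RTM_NEWLINK message per link; pull the attributes seen in the log.
        link = ns.get_links(idx)[0]
        print('state:', link['state'])
        print('mac:  ', link.get_attr('IFLA_ADDRESS'))
        print('mtu:  ', link.get_attr('IFLA_MTU'))
        # RTM_NEWADDR messages for the same index (the fe80:: address above).
        for addr in ns.get_addr(index=idx):
            print('addr: ', addr.get_attr('IFA_ADDRESS'))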
Nov 23 04:58:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:43 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:42.971 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:43 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:42.972 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 04:58:43 localhost nova_compute[280939]: 2025-11-23 09:58:43.006 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:43 localhost nova_compute[280939]: 2025-11-23 09:58:43.187 280943 DEBUG nova.compute.manager [req-67654682-82f2-4095-849f-23370bc750ae req-0c7af053-9b13-4a09-9f47-d93f2b7d0a77 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received event network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:43 localhost nova_compute[280939]: 2025-11-23 09:58:43.188 280943 DEBUG oslo_concurrency.lockutils [req-67654682-82f2-4095-849f-23370bc750ae req-0c7af053-9b13-4a09-9f47-d93f2b7d0a77 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "8f62292f-5719-4b19-9188-3715b94493a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:43 localhost nova_compute[280939]: 2025-11-23 09:58:43.188 280943 DEBUG oslo_concurrency.lockutils [req-67654682-82f2-4095-849f-23370bc750ae req-0c7af053-9b13-4a09-9f47-d93f2b7d0a77 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:43 localhost nova_compute[280939]: 2025-11-23 09:58:43.189 280943 DEBUG oslo_concurrency.lockutils [req-67654682-82f2-4095-849f-23370bc750ae req-0c7af053-9b13-4a09-9f47-d93f2b7d0a77 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:43 localhost nova_compute[280939]: 2025-11-23 09:58:43.189 280943 DEBUG nova.compute.manager [req-67654682-82f2-4095-849f-23370bc750ae 
req-0c7af053-9b13-4a09-9f47-d93f2b7d0a77 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] No waiting events found dispatching network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:58:43 localhost nova_compute[280939]: 2025-11-23 09:58:43.190 280943 WARNING nova.compute.manager [req-67654682-82f2-4095-849f-23370bc750ae req-0c7af053-9b13-4a09-9f47-d93f2b7d0a77 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received unexpected event network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f for instance with vm_state active and task_state migrating.#033[00m Nov 23 04:58:43 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:43.301 2 INFO neutron.agent.securitygroups_rpc [None req-50406108-6fe1-4d79-842a-8b928e46e646 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Security group member updated ['ff44a28d-1e1f-4163-b206-fdf77022bf0b']#033[00m Nov 23 04:58:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 177 active+clean; 285 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.7 MiB/s wr, 292 op/s Nov 23 04:58:43 localhost dnsmasq[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/addn_hosts - 1 addresses Nov 23 04:58:43 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/host Nov 23 04:58:43 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/opts Nov 23 04:58:43 localhost podman[311303]: 2025-11-23 09:58:43.617311669 +0000 UTC m=+0.075081167 container kill 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:58:43 localhost nova_compute[280939]: 2025-11-23 09:58:43.892 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Check if temp file /var/lib/nova/instances/tmp2mwvq3bv exists to indicate shared storage is being used for migration. Exists? 
False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m Nov 23 04:58:43 localhost nova_compute[280939]: 2025-11-23 09:58:43.893 280943 DEBUG nova.compute.manager [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] source check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp2mwvq3bv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='8f62292f-5719-4b19-9188-3715b94493a7',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m Nov 23 04:58:44 localhost nova_compute[280939]: 2025-11-23 09:58:44.229 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:44 localhost systemd[1]: tmp-crun.8ofYHK.mount: Deactivated successfully. Nov 23 04:58:44 localhost podman[311342]: 2025-11-23 09:58:44.409879502 +0000 UTC m=+0.053836634 container kill 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:58:44 localhost dnsmasq[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/addn_hosts - 0 addresses Nov 23 04:58:44 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/host Nov 23 04:58:44 localhost dnsmasq-dhcp[308770]: read /var/lib/neutron/dhcp/81348c6d-951a-4399-8703-476056b57fe9/opts Nov 23 04:58:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
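The "Check if temp file /var/lib/nova/instances/tmp2mwvq3bv exists ... Exists? False" entry above is the libvirt driver's shared-storage probe for the live migration: the destination host drops a temp file in its instances directory and the source checks whether that same path shows up locally. Here it does not, so is_shared_instance_path=False, and the migration relies on the shared RBD image backend instead (image_type='rbd', is_shared_block_storage=True in the source check data). A toy version of that probe with hypothetical function names; nova's real implementation lives in the _create/_check/_cleanup_shared_storage_test_file methods of the libvirt driver.

    import os
    import tempfile

    INSTANCES_PATH = '/var/lib/nova/instances'

    def create_shared_storage_test_file(instances_path=INSTANCES_PATH):
        # Destination side: create a uniquely named temp file, return its name.
        fd, path = tempfile.mkstemp(dir=instances_path)
        os.close(fd)
        return os.path.basename(path)

    def check_shared_storage_test_file(filename, instances_path=INSTANCES_PATH):
        # Source side: if the destination's file is visible here, both hosts
        # share the same instances directory.
        return os.path.exists(os.path.join(instances_path, filename))

    def cleanup_shared_storage_test_file(filename, instances_path=INSTANCES_PATH):
        try:
            os.unlink(os.path.join(instances_path, filename))
        except FileNotFoundError:
            pass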
Nov 23 04:58:44 localhost podman[311356]: 2025-11-23 09:58:44.500055281 +0000 UTC m=+0.073789857 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:58:44 localhost podman[311356]: 2025-11-23 09:58:44.510261824 +0000 UTC m=+0.083996340 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:58:44 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:58:44 localhost kernel: device tapa46670eb-74 left promiscuous mode Nov 23 04:58:44 localhost nova_compute[280939]: 2025-11-23 09:58:44.535 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:44 localhost ovn_controller[153771]: 2025-11-23T09:58:44Z|00085|binding|INFO|Releasing lport a46670eb-74db-4098-8d00-3a08a57da283 from this chassis (sb_readonly=0) Nov 23 04:58:44 localhost ovn_controller[153771]: 2025-11-23T09:58:44Z|00086|binding|INFO|Setting lport a46670eb-74db-4098-8d00-3a08a57da283 down in Southbound Nov 23 04:58:44 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:44.547 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-81348c6d-951a-4399-8703-476056b57fe9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81348c6d-951a-4399-8703-476056b57fe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1897b64f-0c37-45be-8353-f858f64309cd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a46670eb-74db-4098-8d00-3a08a57da283) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:44 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:44.548 159415 INFO neutron.agent.ovn.metadata.agent [-] Port a46670eb-74db-4098-8d00-3a08a57da283 in datapath 81348c6d-951a-4399-8703-476056b57fe9 unbound from our chassis#033[00m Nov 23 04:58:44 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:44.551 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81348c6d-951a-4399-8703-476056b57fe9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:58:44 localhost nova_compute[280939]: 2025-11-23 09:58:44.550 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:44 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:44.552 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[bdbd7a0b-5d69-488f-a6d9-8fa62be9be4c]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:44 localhost nova_compute[280939]: 2025-11-23 09:58:44.552 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v109: 177 pgs: 177 active+clean; 285 MiB data, 921 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 5.7 MiB/s wr, 365 op/s Nov 23 04:58:46 localhost nova_compute[280939]: 2025-11-23 09:58:46.131 280943 DEBUG nova.compute.manager [req-3c441442-0ae3-4882-b279-ed986a28d397 req-d2c54e18-7813-4df9-b774-b92100d5593a b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received event network-vif-unplugged-737e82a6-2634-47df-b8a7-ec21a927cc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:46 localhost nova_compute[280939]: 2025-11-23 09:58:46.132 280943 DEBUG oslo_concurrency.lockutils [req-3c441442-0ae3-4882-b279-ed986a28d397 req-d2c54e18-7813-4df9-b774-b92100d5593a b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "8f62292f-5719-4b19-9188-3715b94493a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:46 localhost nova_compute[280939]: 2025-11-23 09:58:46.132 280943 DEBUG oslo_concurrency.lockutils [req-3c441442-0ae3-4882-b279-ed986a28d397 req-d2c54e18-7813-4df9-b774-b92100d5593a b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:46 localhost nova_compute[280939]: 2025-11-23 09:58:46.133 280943 DEBUG oslo_concurrency.lockutils [req-3c441442-0ae3-4882-b279-ed986a28d397 req-d2c54e18-7813-4df9-b774-b92100d5593a b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:46 localhost nova_compute[280939]: 2025-11-23 09:58:46.133 280943 DEBUG nova.compute.manager [req-3c441442-0ae3-4882-b279-ed986a28d397 req-d2c54e18-7813-4df9-b774-b92100d5593a b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] No waiting events found dispatching network-vif-unplugged-737e82a6-2634-47df-b8a7-ec21a927cc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:58:46 localhost nova_compute[280939]: 2025-11-23 09:58:46.133 280943 DEBUG nova.compute.manager [req-3c441442-0ae3-4882-b279-ed986a28d397 req-d2c54e18-7813-4df9-b774-b92100d5593a b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received event network-vif-unplugged-737e82a6-2634-47df-b8a7-ec21a927cc3f for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Nov 23 04:58:46 localhost nova_compute[280939]: 2025-11-23 09:58:46.469 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:47 localhost podman[239764]: time="2025-11-23T09:58:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:58:47 localhost podman[239764]: @ - - [23/Nov/2025:09:58:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 163531 "" "Go-http-client/1.1" Nov 23 04:58:47 localhost podman[239764]: @ - - [23/Nov/2025:09:58:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 21581 "" "Go-http-client/1.1" Nov 23 04:58:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 285 MiB data, 921 MiB used, 41 GiB / 42 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 265 op/s Nov 23 04:58:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:47 localhost ovn_controller[153771]: 2025-11-23T09:58:47Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:90:40 10.100.0.12 Nov 23 04:58:47 localhost ovn_controller[153771]: 2025-11-23T09:58:47Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:90:40 10.100.0.12 Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.693 280943 INFO nova.compute.manager [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Took 3.24 seconds for pre_live_migration on destination host np0005532585.localdomain.#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.696 280943 DEBUG nova.compute.manager [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.716 280943 DEBUG nova.compute.manager [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp2mwvq3bv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='8f62292f-5719-4b19-9188-3715b94493a7',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(fb898c10-d92a-4af4-b1bd-dff9da842a30),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.722 280943 DEBUG 
nova.objects.instance [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Lazy-loading 'migration_context' on Instance uuid 8f62292f-5719-4b19-9188-3715b94493a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.724 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.727 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.728 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.742 280943 DEBUG nova.virt.libvirt.vif [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T09:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1576780525',display_name='tempest-LiveMigrationTest-server-1576780525',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005532584.localdomain',hostname='tempest-livemigrationtest-server-1576780525',id=10,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-11-23T09:58:41Z,launched_on='np0005532584.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532584.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='253c88568a634476a6c1284eed6a9464',ramdisk_id='',reservation_id='r-dvg5v145',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1889456510',owner_user_name='tempest-LiveMigrationTest-1889456510-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-11-23T09:58:41Z,user_data=None,user_id='7f7875c0084c46fdb2e7b37e4fc44faf',uuid=8f62292f-5719-4b19-9188-3715b94493a7,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.743 280943 DEBUG nova.network.os_vif_util [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Converting VIF {"id": 
"737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.744 280943 DEBUG nova.network.os_vif_util [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:21:74,bridge_name='br-int',has_traffic_filtering=True,id=737e82a6-2634-47df-b8a7-ec21a927cc3f,network=Network(d679e465-8656-4403-afa0-724657d33ec4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap737e82a6-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.745 280943 DEBUG nova.virt.libvirt.migration [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Updating guest XML with vif config: Nov 23 04:58:47 localhost nova_compute[280939]: Nov 23 04:58:47 localhost nova_compute[280939]: Nov 23 04:58:47 localhost nova_compute[280939]: Nov 23 04:58:47 localhost nova_compute[280939]: Nov 23 04:58:47 localhost nova_compute[280939]: Nov 23 04:58:47 localhost nova_compute[280939]: Nov 23 04:58:47 localhost nova_compute[280939]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m Nov 23 04:58:47 localhost nova_compute[280939]: 2025-11-23 09:58:47.747 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m Nov 23 04:58:47 localhost ovn_controller[153771]: 2025-11-23T09:58:47Z|00087|binding|INFO|Releasing lport 9b50ca15-3b72-42c0-998b-33441ea57460 from this chassis (sb_readonly=0) Nov 23 04:58:47 localhost ovn_controller[153771]: 2025-11-23T09:58:47Z|00088|binding|INFO|Releasing lport a8a61203-fe2e-4005-bcf2-6150709eadea from this chassis (sb_readonly=0) Nov 23 04:58:47 localhost ovn_controller[153771]: 2025-11-23T09:58:47Z|00089|binding|INFO|Releasing lport b83bb60d-d579-4f8d-9e2c-3885d238bb26 from this chassis (sb_readonly=0) Nov 23 04:58:47 localhost 
nova_compute[280939]: 2025-11-23 09:58:47.867 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.187 280943 DEBUG nova.compute.manager [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received event network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.188 280943 DEBUG oslo_concurrency.lockutils [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "8f62292f-5719-4b19-9188-3715b94493a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.188 280943 DEBUG oslo_concurrency.lockutils [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.188 280943 DEBUG oslo_concurrency.lockutils [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.189 280943 DEBUG nova.compute.manager [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] No waiting events found dispatching network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.189 280943 WARNING nova.compute.manager [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received unexpected event network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f for instance with vm_state active and task_state migrating.#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.189 280943 DEBUG nova.compute.manager [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received event 
network-changed-737e82a6-2634-47df-b8a7-ec21a927cc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.190 280943 DEBUG nova.compute.manager [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Refreshing instance network info cache due to event network-changed-737e82a6-2634-47df-b8a7-ec21a927cc3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.190 280943 DEBUG oslo_concurrency.lockutils [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "refresh_cache-8f62292f-5719-4b19-9188-3715b94493a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.190 280943 DEBUG oslo_concurrency.lockutils [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquired lock "refresh_cache-8f62292f-5719-4b19-9188-3715b94493a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.190 280943 DEBUG nova.network.neutron [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Refreshing network info cache for port 737e82a6-2634-47df-b8a7-ec21a927cc3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.230 280943 DEBUG nova.virt.libvirt.migration [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.231 280943 INFO nova.virt.libvirt.migration [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.295 280943 INFO nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m Nov 23 04:58:48 localhost systemd[1]: tmp-crun.l2ZnAJ.mount: Deactivated successfully. 
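
The update_downtime entries just above step through a fixed schedule of (elapsed seconds, allowed downtime in milliseconds) pairs, from (0, 50) up to (1500, 500), and the monitor raises the downtime whenever the elapsed time crosses the next threshold, which is why it jumps straight to 50 ms at 0 seconds elapsed. The arithmetic behind a schedule of that shape can be reproduced with a small sketch; the function and parameter names below are invented for illustration, and the values 500 ms, 10 steps and 150 s spacing are simply inferred from the series printed in the log (they appear to correspond to nova's [libvirt] live_migration_downtime, live_migration_downtime_steps and live_migration_downtime_delay options).

    # Illustrative sketch only, not nova's implementation: rebuild the
    # (elapsed_s, downtime_ms) schedule shown by the migration monitor above.
    # The constants 500 / 10 / 150 are inferred from the logged series.
    def downtime_steps(max_downtime_ms=500, steps=10, step_delay_s=150):
        base = max_downtime_ms // steps              # first allowed downtime: 50 ms
        growth = (max_downtime_ms - base) / steps    # increase per step: 45 ms
        for i in range(steps + 1):
            yield (step_delay_s * i, int(base + growth * i))

    schedule = list(downtime_steps())
    assert schedule[:3] == [(0, 50), (150, 95), (300, 140)]
    assert schedule[-1] == (1500, 500)
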
Nov 23 04:58:48 localhost dnsmasq[308770]: exiting on receipt of SIGTERM Nov 23 04:58:48 localhost podman[311398]: 2025-11-23 09:58:48.413966047 +0000 UTC m=+0.070691391 container kill 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 04:58:48 localhost systemd[1]: libpod-8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447.scope: Deactivated successfully. Nov 23 04:58:48 localhost podman[311412]: 2025-11-23 09:58:48.488039021 +0000 UTC m=+0.064107439 container died 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true) Nov 23 04:58:48 localhost podman[311412]: 2025-11-23 09:58:48.52250314 +0000 UTC m=+0.098571518 container cleanup 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:58:48 localhost systemd[1]: libpod-conmon-8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447.scope: Deactivated successfully. 
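
Each podman record above (container kill, died, cleanup, and the remove that follows on the next line) has the same shape: a timestamp, an event name, the 64-character container ID and a parenthesised label list beginning with image= and name=. A small, hypothetical helper for pulling that lifecycle out of a capture like this one is sketched below; the regex and function are written for this log and are not part of podman or of any component shown here.

    import re

    # Hypothetical parser for the podman container-event records in this capture
    # (kill, died, cleanup, remove, health_status, exec_died).
    PODMAN_EVENT_RE = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+ \S+) UTC \S+ container "
        r"(?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )

    def podman_events(journal_text):
        """Yield (timestamp, event, container name, short id) tuples."""
        for m in PODMAN_EVENT_RE.finditer(journal_text):
            yield m.group("ts"), m.group("event"), m.group("name"), m.group("cid")[:12]

    # The kill record above parses to:
    # ('2025-11-23 09:58:48.413966047 +0000', 'kill',
    #  'neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9', '8b81db0b6c5b')
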
Nov 23 04:58:48 localhost podman[311419]: 2025-11-23 09:58:48.575731464 +0000 UTC m=+0.138207225 container remove 8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 04:58:48 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:48.618 262301 INFO neutron.agent.dhcp.agent [None req-55aaae8c-c33c-45cf-bba0-ba6abe5d8195 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.807 280943 DEBUG nova.virt.libvirt.migration [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.808 280943 DEBUG nova.virt.libvirt.migration [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.945 280943 DEBUG nova.network.neutron [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Updated VIF entry in instance network info cache for port 737e82a6-2634-47df-b8a7-ec21a927cc3f. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.946 280943 DEBUG nova.network.neutron [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Updating instance_info_cache with network_info: [{"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005532585.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:58:48 localhost nova_compute[280939]: 2025-11-23 09:58:48.980 280943 DEBUG oslo_concurrency.lockutils [req-cecef6d7-a815-4196-a354-4a5da8d0d9ba req-87b26f2b-2920-4332-a243-5b0fbb9651f8 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Releasing lock "refresh_cache-8f62292f-5719-4b19-9188-3715b94493a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:58:48 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:48.989 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.281 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.312 280943 DEBUG nova.virt.libvirt.migration [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.313 280943 DEBUG nova.virt.libvirt.migration [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Nov 23 04:58:49 localhost ceph-mgr[286671]: log_channel(cluster) 
log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 324 MiB data, 999 MiB used, 41 GiB / 42 GiB avail; 6.1 MiB/s rd, 4.9 MiB/s wr, 351 op/s Nov 23 04:58:49 localhost systemd[1]: var-lib-containers-storage-overlay-2e148735a585f418c075140eb26ac8b56c0b9707e535652ba24a9b590724b722-merged.mount: Deactivated successfully. Nov 23 04:58:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b81db0b6c5b2240601c16a68fb833a216d37274ef735b19dd92902fcdeb7447-userdata-shm.mount: Deactivated successfully. Nov 23 04:58:49 localhost systemd[1]: run-netns-qdhcp\x2d81348c6d\x2d951a\x2d4399\x2d8703\x2d476056b57fe9.mount: Deactivated successfully. Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.426 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.427 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] VM Paused (Lifecycle Event)#033[00m Nov 23 04:58:49 localhost kernel: device tap737e82a6-26 left promiscuous mode Nov 23 04:58:49 localhost NetworkManager[5966]: [1763891929.5787] device (tap737e82a6-26): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.594 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:49 localhost ovn_controller[153771]: 2025-11-23T09:58:49Z|00090|binding|INFO|Releasing lport 737e82a6-2634-47df-b8a7-ec21a927cc3f from this chassis (sb_readonly=0) Nov 23 04:58:49 localhost ovn_controller[153771]: 2025-11-23T09:58:49Z|00091|binding|INFO|Setting lport 737e82a6-2634-47df-b8a7-ec21a927cc3f down in Southbound Nov 23 04:58:49 localhost ovn_controller[153771]: 2025-11-23T09:58:49Z|00092|binding|INFO|Releasing lport fd30dda9-c731-47dd-b319-ebcca717b708 from this chassis (sb_readonly=0) Nov 23 04:58:49 localhost ovn_controller[153771]: 2025-11-23T09:58:49Z|00093|binding|INFO|Setting lport fd30dda9-c731-47dd-b319-ebcca717b708 down in Southbound Nov 23 04:58:49 localhost ovn_controller[153771]: 2025-11-23T09:58:49Z|00094|binding|INFO|Removing iface tap737e82a6-26 ovn-installed in OVS Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.614 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:49 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000a.scope: Deactivated successfully. Nov 23 04:58:49 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000a.scope: Consumed 9.035s CPU time. Nov 23 04:58:49 localhost systemd-machined[202731]: Machine qemu-3-instance-0000000a terminated. 
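
The ceph-mgr pgmap lines that recur every second or two (v109, v110 and v111 above) summarise placement-group state, stored data, used and available capacity, and client throughput; between v110 and v111 the stored data grows from 285 MiB to 324 MiB while the RBD-backed migration is in flight. A hypothetical reader for those summaries, written only for this capture, with field names chosen for the example:

    import re

    # Hypothetical helper for the recurring "pgmap vN: ..." summaries above;
    # not ceph code, just a reader for this capture.
    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
        r"(?:; (?P<rates>.*? op/s))?"
    )

    def parse_pgmap(line):
        m = PGMAP_RE.search(line)
        return m.groupdict() if m else None

    # parse_pgmap("... pgmap v111: 177 pgs: 177 active+clean; 324 MiB data, "
    #             "999 MiB used, 41 GiB / 42 GiB avail; 6.1 MiB/s rd, "
    #             "4.9 MiB/s wr, 351 op/s")
    # -> {'version': '111', 'pgs': '177', 'states': '177 active+clean', ...}
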
Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.652 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:49 localhost ovn_controller[153771]: 2025-11-23T09:58:49Z|00095|binding|INFO|Releasing lport 9b50ca15-3b72-42c0-998b-33441ea57460 from this chassis (sb_readonly=0) Nov 23 04:58:49 localhost ovn_controller[153771]: 2025-11-23T09:58:49Z|00096|binding|INFO|Releasing lport a8a61203-fe2e-4005-bcf2-6150709eadea from this chassis (sb_readonly=0) Nov 23 04:58:49 localhost ovn_controller[153771]: 2025-11-23T09:58:49Z|00097|binding|INFO|Releasing lport b83bb60d-d579-4f8d-9e2c-3885d238bb26 from this chassis (sb_readonly=0) Nov 23 04:58:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:49.658 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:95:ad 19.80.0.95'], port_security=['fa:16:3e:4f:95:ad 19.80.0.95'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['737e82a6-2634-47df-b8a7-ec21a927cc3f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1587702031', 'neutron:cidrs': '19.80.0.95/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-903951dd-448c-4453-aa24-f24a53269074', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1587702031', 'neutron:project_id': '253c88568a634476a6c1284eed6a9464', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'a3350144-9b09-432b-a32e-ef84bb8bf494', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=154cf14d-e57b-4715-bce9-5bdd1a0ded15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=fd30dda9-c731-47dd-b319-ebcca717b708) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:49.661 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:21:74 10.100.0.10'], port_security=['fa:16:3e:da:21:74 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain,np0005532585.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': '26f986a7-6ac7-4ec2-887b-8da6da04a661'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1925970765', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8f62292f-5719-4b19-9188-3715b94493a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d679e465-8656-4403-afa0-724657d33ec4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1925970765', 'neutron:project_id': '253c88568a634476a6c1284eed6a9464', 'neutron:revision_number': '8', 'neutron:security_group_ids': 
'a3350144-9b09-432b-a32e-ef84bb8bf494', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a90e812-f218-49cd-a3ab-6bc1317ad730, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=737e82a6-2634-47df-b8a7-ec21a927cc3f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:49.662 159415 INFO neutron.agent.ovn.metadata.agent [-] Port fd30dda9-c731-47dd-b319-ebcca717b708 in datapath 903951dd-448c-4453-aa24-f24a53269074 unbound from our chassis#033[00m Nov 23 04:58:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:49.665 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 903951dd-448c-4453-aa24-f24a53269074, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:58:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:49.668 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[98223d54-d9a5-48c0-991f-c01dcb7ee5ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:49 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:49.668 159415 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-903951dd-448c-4453-aa24-f24a53269074 namespace which is not needed anymore#033[00m Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.701 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:49 localhost journal[229251]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/8f62292f-5719-4b19-9188-3715b94493a7_disk: No such file or directory Nov 23 04:58:49 localhost journal[229251]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/8f62292f-5719-4b19-9188-3715b94493a7_disk: No such file or directory Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.763 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.764 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.764 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 
09:58:49.816 280943 DEBUG nova.virt.libvirt.guest [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '8f62292f-5719-4b19-9188-3715b94493a7' (instance-0000000a) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.816 280943 INFO nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Migration operation has completed#033[00m Nov 23 04:58:49 localhost nova_compute[280939]: 2025-11-23 09:58:49.817 280943 INFO nova.compute.manager [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] _post_live_migration() is started..#033[00m Nov 23 04:58:49 localhost systemd[1]: tmp-crun.tq9lxw.mount: Deactivated successfully. Nov 23 04:58:49 localhost neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074[311198]: [NOTICE] (311202) : haproxy version is 2.8.14-c23fe91 Nov 23 04:58:49 localhost neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074[311198]: [NOTICE] (311202) : path to executable is /usr/sbin/haproxy Nov 23 04:58:49 localhost neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074[311198]: [WARNING] (311202) : Exiting Master process... Nov 23 04:58:49 localhost neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074[311198]: [ALERT] (311202) : Current worker (311204) exited with code 143 (Terminated) Nov 23 04:58:49 localhost neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074[311198]: [WARNING] (311202) : All workers exited. Exiting... (0) Nov 23 04:58:49 localhost systemd[1]: libpod-18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1.scope: Deactivated successfully. 
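
The Port_Binding updates above reach the metadata agent as ovsdbapp row events, and the matched repr in the log spells out how the handler is built: events=('update',), table='Port_Binding', conditions=None. Below is a rough sketch of a handler with that shape; it approximates the interface visible in the repr and is not the agent's actual class (that lives in neutron.agent.ovn.metadata.agent).

    from ovsdbapp.backend.ovs_idl import event as row_event

    # Rough sketch shaped after the PortBindingUpdatedEvent repr logged above;
    # an approximation, not neutron's implementation.
    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # The "unbound from our chassis" case above: the old row was up and
            # the update left the port bound to no chassis.
            was_up = getattr(old, 'up', None) == [True]
            if was_up and not row.chassis:
                print('Port %s in datapath %s unbound from our chassis'
                      % (row.logical_port, row.datapath.uuid))
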
Nov 23 04:58:49 localhost podman[311478]: 2025-11-23 09:58:49.860068946 +0000 UTC m=+0.072815967 container died 18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:58:49 localhost podman[311478]: 2025-11-23 09:58:49.90123245 +0000 UTC m=+0.113979471 container cleanup 18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 04:58:49 localhost podman[311493]: 2025-11-23 09:58:49.927377272 +0000 UTC m=+0.060137547 container cleanup 18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 04:58:49 localhost systemd[1]: libpod-conmon-18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1.scope: Deactivated successfully. 
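
The recurring privsep: reply[...] lines are oslo.privsep's daemon handing results of privileged operations (namespace checks, netlink dumps, the container stop and delete wrapper output) back to the unprivileged agent process. Declaring such an entrypoint follows a standard oslo.privsep pattern; the context name, capability list and example function below are illustrative and are not taken from the agents logged here.

    from oslo_privsep import capabilities, priv_context

    # Illustrative context only; the agents above define their own contexts,
    # with their own capability sets, inside neutron.
    example_ctx = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.example_ctx',
        capabilities=[capabilities.CAP_NET_ADMIN, capabilities.CAP_SYS_ADMIN],
    )

    @example_ctx.entrypoint
    def interface_exists(devname):
        # Runs inside the privsep daemon; the return value travels back over the
        # daemon channel, which is what the reply[...] lines above record.
        import os
        return os.path.exists('/sys/class/net/%s' % devname)
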
Nov 23 04:58:49 localhost podman[311508]: 2025-11-23 09:58:49.995199254 +0000 UTC m=+0.074124926 container remove 18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:49.999 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[90e7ca86-600c-4717-9b05-893ce44b584d]: (4, ('Sun Nov 23 09:58:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074 (18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1)\n18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1\nSun Nov 23 09:58:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-903951dd-448c-4453-aa24-f24a53269074 (18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1)\n18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.001 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[c704ea14-b217-466b-8d4f-f09a899ccdcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.002 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap903951dd-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.006 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:50 localhost kernel: device tap903951dd-40 left promiscuous mode Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.015 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.018 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[2a87834b-17da-46bf-873d-1880ec2895a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.034 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[5d64081c-a97c-4aa6-8e40-caf27d2c985a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.036 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf8dec3-747c-4157-bfac-ca674f834537]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.052 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[51997fa5-56b9-424a-832a-3090c879c10a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 
'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1185658, 'reachable_time': 40472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 
'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311531, 'error': None, 'target': 'ovnmeta-903951dd-448c-4453-aa24-f24a53269074', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.054 159521 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-903951dd-448c-4453-aa24-f24a53269074 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.054 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee9a545-6c86-4a3f-bd24-d4166d59ee07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.055 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 737e82a6-2634-47df-b8a7-ec21a927cc3f in datapath d679e465-8656-4403-afa0-724657d33ec4 unbound from our chassis#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.058 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port a4ae488d-6e50-4466-beba-eaab4efb551d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.059 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d679e465-8656-4403-afa0-724657d33ec4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.059 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec56f31-1936-40a6-a590-c5b479426aba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.060 159415 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d679e465-8656-4403-afa0-724657d33ec4 namespace which is not needed anymore#033[00m Nov 23 04:58:50 localhost neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4[311271]: [NOTICE] (311275) : haproxy version is 2.8.14-c23fe91 Nov 23 04:58:50 localhost neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4[311271]: [NOTICE] (311275) : path to executable is /usr/sbin/haproxy Nov 23 04:58:50 localhost neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4[311271]: [WARNING] (311275) : Exiting Master process... 
Nov 23 04:58:50 localhost neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4[311271]: [ALERT] (311275) : Current worker (311277) exited with code 143 (Terminated) Nov 23 04:58:50 localhost neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4[311271]: [WARNING] (311275) : All workers exited. Exiting... (0) Nov 23 04:58:50 localhost systemd[1]: libpod-385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705.scope: Deactivated successfully. Nov 23 04:58:50 localhost podman[311550]: 2025-11-23 09:58:50.253297869 +0000 UTC m=+0.072609770 container died 385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:58:50 localhost podman[311550]: 2025-11-23 09:58:50.295608409 +0000 UTC m=+0.114920280 container cleanup 385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:58:50 localhost podman[311563]: 2025-11-23 09:58:50.3256549 +0000 UTC m=+0.066485101 container cleanup 385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 04:58:50 localhost systemd[1]: libpod-conmon-385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705.scope: Deactivated successfully. 
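
The long netlink dump in the privsep reply above is a link listing taken inside the ovnmeta-903951dd-... namespace: only 'lo' is left, so the agent deletes the namespace (the remove_netns call from neutron's privileged ip_lib). The same check can be sketched with pyroute2; the namespace name is copied from the log, the rest is illustrative and would need the same elevated privileges the agent obtains through privsep.

    from pyroute2 import NetNS, netns

    NS = 'ovnmeta-903951dd-448c-4453-aa24-f24a53269074'  # name from the log above

    # List what is still plugged into the namespace; in the dump above only the
    # loopback device remains.
    handle = NetNS(NS)
    try:
        leftover = [link.get_attr('IFLA_IFNAME') for link in handle.get_links()]
    finally:
        handle.close()

    if leftover in ([], ['lo']):
        netns.remove(NS)  # same effect as the remove_netns call logged above
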
Nov 23 04:58:50 localhost podman[311579]: 2025-11-23 09:58:50.388691036 +0000 UTC m=+0.073744306 container remove 385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.393 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[0de24afe-321d-4675-921f-9c2a899c584a]: (4, ('Sun Nov 23 09:58:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4 (385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705)\n385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705\nSun Nov 23 09:58:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d679e465-8656-4403-afa0-724657d33ec4 (385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705)\n385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.395 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[6505ead2-5fd8-4ba4-a5b4-27a17ba259ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost systemd[1]: var-lib-containers-storage-overlay-7e16593531749eb54343c61ebd96167d42535be65e346d9199e278c2a062e853-merged.mount: Deactivated successfully. Nov 23 04:58:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-385acfed6dc6cdd57a0dec76b6238ca7ad154602d354dca72299726a6bab4705-userdata-shm.mount: Deactivated successfully. Nov 23 04:58:50 localhost systemd[1]: var-lib-containers-storage-overlay-ebac15deed2ac53f5fe366af83f53bf1f761ee18532bf8a4f5fdbcd87b331b60-merged.mount: Deactivated successfully. Nov 23 04:58:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18b950ca0cbbaba13212bfe9af534230aa655b9f58b2f5046d926ef5cba353e1-userdata-shm.mount: Deactivated successfully. Nov 23 04:58:50 localhost systemd[1]: run-netns-ovnmeta\x2d903951dd\x2d448c\x2d4453\x2daa24\x2df24a53269074.mount: Deactivated successfully. 
Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.401 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd679e465-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.443 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:50 localhost kernel: device tapd679e465-80 left promiscuous mode Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.456 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.458 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7b3415-17bb-44c3-898a-b593428a4d3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.479 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[2569f4c3-c308-4cf4-b3f1-48f0890f2888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.480 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[22925304-80ae-4782-8acf-4dab3dffa992]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.496 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[d2585099-65a2-42a7-a1a2-52adfc68f0ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 
'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1185753, 'reachable_time': 20339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311600, 'error': None, 'target': 'ovnmeta-d679e465-8656-4403-afa0-724657d33ec4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.501 159521 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d679e465-8656-4403-afa0-724657d33ec4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Nov 23 04:58:50 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:50.501 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[dde04776-252f-4b5c-9a89-a9e38829244b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:50 localhost systemd[1]: run-netns-ovnmeta\x2dd679e465\x2d8656\x2d4403\x2dafa0\x2d724657d33ec4.mount: Deactivated successfully. 
Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.959 280943 DEBUG nova.compute.manager [req-34da2320-c4ab-4fe8-99fb-30690f73fec1 req-42671afa-781f-4439-941d-d655ef68e7b6 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received event network-vif-unplugged-737e82a6-2634-47df-b8a7-ec21a927cc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.960 280943 DEBUG oslo_concurrency.lockutils [req-34da2320-c4ab-4fe8-99fb-30690f73fec1 req-42671afa-781f-4439-941d-d655ef68e7b6 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "8f62292f-5719-4b19-9188-3715b94493a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.960 280943 DEBUG oslo_concurrency.lockutils [req-34da2320-c4ab-4fe8-99fb-30690f73fec1 req-42671afa-781f-4439-941d-d655ef68e7b6 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.961 280943 DEBUG oslo_concurrency.lockutils [req-34da2320-c4ab-4fe8-99fb-30690f73fec1 req-42671afa-781f-4439-941d-d655ef68e7b6 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.961 280943 DEBUG nova.compute.manager [req-34da2320-c4ab-4fe8-99fb-30690f73fec1 req-42671afa-781f-4439-941d-d655ef68e7b6 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] No waiting events found dispatching network-vif-unplugged-737e82a6-2634-47df-b8a7-ec21a927cc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.962 280943 DEBUG nova.compute.manager [req-34da2320-c4ab-4fe8-99fb-30690f73fec1 req-42671afa-781f-4439-941d-d655ef68e7b6 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received event network-vif-unplugged-737e82a6-2634-47df-b8a7-ec21a927cc3f for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.994 280943 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:58:50 localhost nova_compute[280939]: 2025-11-23 09:58:50.995 280943 INFO nova.compute.manager [-] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] VM Stopped (Lifecycle Event)#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.027 280943 DEBUG nova.compute.manager [None req-336d16ca-4196-4040-a16b-0fcf28136580 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:58:51 localhost podman[311617]: 2025-11-23 09:58:51.210305872 +0000 UTC m=+0.063245032 container kill a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Nov 23 04:58:51 localhost dnsmasq[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/addn_hosts - 0 addresses Nov 23 04:58:51 localhost dnsmasq-dhcp[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/host Nov 23 04:58:51 localhost dnsmasq-dhcp[308969]: read /var/lib/neutron/dhcp/549f38a9-abf8-434a-9d69-4d818ecbd4f9/opts Nov 23 04:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 04:58:51 localhost podman[311633]: 2025-11-23 09:58:51.316665387 +0000 UTC m=+0.075737326 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:58:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v112: 177 pgs: 177 active+clean; 324 MiB data, 999 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 159 op/s Nov 23 04:58:51 localhost podman[311634]: 2025-11-23 09:58:51.33335074 +0000 UTC m=+0.087438856 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 23 04:58:51 localhost podman[311633]: 2025-11-23 09:58:51.336308161 +0000 UTC m=+0.095380020 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus 
Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 04:58:51 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 04:58:51 localhost podman[311634]: 2025-11-23 09:58:51.37110801 +0000 UTC m=+0.125196136 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2) Nov 23 04:58:51 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 04:58:51 localhost podman[311632]: 2025-11-23 09:58:51.424807498 +0000 UTC m=+0.183935878 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible) Nov 23 04:58:51 localhost podman[311632]: 2025-11-23 09:58:51.436880258 +0000 UTC m=+0.196008638 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Nov 23 04:58:51 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.502 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:51 localhost ovn_controller[153771]: 2025-11-23T09:58:51Z|00098|binding|INFO|Releasing lport acca5347-96e1-4029-9ae5-32051ef01ae8 from this chassis (sb_readonly=0) Nov 23 04:58:51 localhost kernel: device tapacca5347-96 left promiscuous mode Nov 23 04:58:51 localhost ovn_controller[153771]: 2025-11-23T09:58:51Z|00099|binding|INFO|Setting lport acca5347-96e1-4029-9ae5-32051ef01ae8 down in Southbound Nov 23 04:58:51 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:51.517 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-549f38a9-abf8-434a-9d69-4d818ecbd4f9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-549f38a9-abf8-434a-9d69-4d818ecbd4f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e0eb6249a0548c0ad772871741f0b5d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f5efb98-6ecb-4dff-9988-46a21664bf5f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=acca5347-96e1-4029-9ae5-32051ef01ae8) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:58:51 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:51.519 159415 INFO neutron.agent.ovn.metadata.agent [-] Port acca5347-96e1-4029-9ae5-32051ef01ae8 in datapath 549f38a9-abf8-434a-9d69-4d818ecbd4f9 unbound from our chassis#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.520 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:51 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:51.522 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 549f38a9-abf8-434a-9d69-4d818ecbd4f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:58:51 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:51.523 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[24de9ed6-65e4-4603-b472-2b950f985b6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.652 280943 DEBUG 
nova.network.neutron [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Activated binding for port 737e82a6-2634-47df-b8a7-ec21a927cc3f and host np0005532585.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.653 280943 DEBUG nova.compute.manager [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.654 280943 DEBUG nova.virt.libvirt.vif [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T09:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1576780525',display_name='tempest-LiveMigrationTest-server-1576780525',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005532584.localdomain',hostname='tempest-livemigrationtest-server-1576780525',id=10,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-11-23T09:58:41Z,launched_on='np0005532584.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532584.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='253c88568a634476a6c1284eed6a9464',ramdisk_id='',reservation_id='r-dvg5v145',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1889456510',owner_user_name='tempest-LiveMigrationTest-1889456510-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-11-23T09:58:43Z,user_data=None,user_id='7f7875c0084c46fdb2e7b37e4fc44faf',uuid=8f62292f-5719-4b19-9188-3715b94493a7,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.654 280943 DEBUG nova.network.os_vif_util [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Converting VIF {"id": 
"737e82a6-2634-47df-b8a7-ec21a927cc3f", "address": "fa:16:3e:da:21:74", "network": {"id": "d679e465-8656-4403-afa0-724657d33ec4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-49202206-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "253c88568a634476a6c1284eed6a9464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap737e82a6-26", "ovs_interfaceid": "737e82a6-2634-47df-b8a7-ec21a927cc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.655 280943 DEBUG nova.network.os_vif_util [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:21:74,bridge_name='br-int',has_traffic_filtering=True,id=737e82a6-2634-47df-b8a7-ec21a927cc3f,network=Network(d679e465-8656-4403-afa0-724657d33ec4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap737e82a6-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.656 280943 DEBUG os_vif [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:21:74,bridge_name='br-int',has_traffic_filtering=True,id=737e82a6-2634-47df-b8a7-ec21a927cc3f,network=Network(d679e465-8656-4403-afa0-724657d33ec4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap737e82a6-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.658 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.659 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap737e82a6-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.660 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.662 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.664 280943 INFO os_vif [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 
57d9e088e75b4a3482d0e3a02bcce5be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:21:74,bridge_name='br-int',has_traffic_filtering=True,id=737e82a6-2634-47df-b8a7-ec21a927cc3f,network=Network(d679e465-8656-4403-afa0-724657d33ec4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap737e82a6-26')#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.665 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.665 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.666 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.666 280943 DEBUG nova.compute.manager [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.666 280943 INFO nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Deleting instance files /var/lib/nova/instances/8f62292f-5719-4b19-9188-3715b94493a7_del#033[00m Nov 23 04:58:51 localhost nova_compute[280939]: 2025-11-23 09:58:51.667 280943 INFO nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Deletion of /var/lib/nova/instances/8f62292f-5719-4b19-9188-3715b94493a7_del complete#033[00m Nov 23 04:58:51 localhost ovn_metadata_agent[159410]: 2025-11-23 09:58:51.974 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:58:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:53 localhost nova_compute[280939]: 2025-11-23 09:58:53.000 280943 DEBUG nova.compute.manager [req-3f6d3399-ba43-45df-b923-f93af2769150 req-1fe94416-6013-4a93-9ac5-13ddb0512d5b b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received event network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:58:53 localhost nova_compute[280939]: 2025-11-23 09:58:53.000 280943 DEBUG oslo_concurrency.lockutils [req-3f6d3399-ba43-45df-b923-f93af2769150 req-1fe94416-6013-4a93-9ac5-13ddb0512d5b b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "8f62292f-5719-4b19-9188-3715b94493a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:53 localhost nova_compute[280939]: 2025-11-23 09:58:53.001 280943 DEBUG oslo_concurrency.lockutils [req-3f6d3399-ba43-45df-b923-f93af2769150 req-1fe94416-6013-4a93-9ac5-13ddb0512d5b b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:53 localhost nova_compute[280939]: 2025-11-23 09:58:53.001 280943 DEBUG oslo_concurrency.lockutils [req-3f6d3399-ba43-45df-b923-f93af2769150 req-1fe94416-6013-4a93-9ac5-13ddb0512d5b b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:53 localhost nova_compute[280939]: 2025-11-23 09:58:53.001 280943 DEBUG nova.compute.manager [req-3f6d3399-ba43-45df-b923-f93af2769150 req-1fe94416-6013-4a93-9ac5-13ddb0512d5b b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] No waiting events found dispatching network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:58:53 localhost nova_compute[280939]: 2025-11-23 09:58:53.002 280943 WARNING nova.compute.manager [req-3f6d3399-ba43-45df-b923-f93af2769150 req-1fe94416-6013-4a93-9ac5-13ddb0512d5b b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Received unexpected event network-vif-plugged-737e82a6-2634-47df-b8a7-ec21a927cc3f for instance with vm_state active and task_state migrating.#033[00m Nov 23 04:58:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 324 MiB data, 999 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 159 op/s Nov 23 04:58:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 23 04:58:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:58:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:58:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:58:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:58:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:58:54 localhost ovn_controller[153771]: 2025-11-23T09:58:54Z|00100|binding|INFO|Releasing lport a8a61203-fe2e-4005-bcf2-6150709eadea from this chassis (sb_readonly=0) Nov 23 04:58:54 localhost nova_compute[280939]: 2025-11-23 09:58:54.289 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:54 localhost snmpd[67609]: empty variable list in _query Nov 23 04:58:54 localhost snmpd[67609]: empty variable list in _query Nov 23 04:58:54 localhost snmpd[67609]: empty variable list in _query Nov 23 04:58:54 localhost snmpd[67609]: empty variable list in _query Nov 23 04:58:54 localhost snmpd[67609]: empty variable list in _query Nov 23 04:58:54 localhost snmpd[67609]: empty variable list in _query Nov 23 04:58:54 localhost snmpd[67609]: empty variable list in _query Nov 23 04:58:54 localhost snmpd[67609]: empty variable list in _query Nov 23 04:58:54 localhost dnsmasq[308969]: exiting on receipt of SIGTERM Nov 23 04:58:54 localhost podman[311725]: 2025-11-23 09:58:54.631533851 +0000 UTC m=+0.045469897 container kill a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 04:58:54 localhost systemd[1]: libpod-a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9.scope: Deactivated successfully. Nov 23 04:58:54 localhost podman[311738]: 2025-11-23 09:58:54.69987739 +0000 UTC m=+0.054798574 container died a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 23 04:58:54 localhost systemd[1]: tmp-crun.Uairf5.mount: Deactivated successfully. 
Nov 23 04:58:54 localhost podman[311738]: 2025-11-23 09:58:54.740773485 +0000 UTC m=+0.095694639 container cleanup a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 04:58:54 localhost systemd[1]: libpod-conmon-a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9.scope: Deactivated successfully. Nov 23 04:58:54 localhost podman[311740]: 2025-11-23 09:58:54.778271387 +0000 UTC m=+0.126048651 container remove a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-549f38a9-abf8-434a-9d69-4d818ecbd4f9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 04:58:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:54.799 262301 INFO neutron.agent.dhcp.agent [None req-f56a3a35-93a7-4f18-85e1-f01acf67c9dd - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 04:58:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:54.868 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 04:58:55 localhost nova_compute[280939]: 2025-11-23 09:58:55.322 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Acquiring lock "8f62292f-5719-4b19-9188-3715b94493a7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:55 localhost nova_compute[280939]: 2025-11-23 09:58:55.323 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:55 localhost nova_compute[280939]: 2025-11-23 09:58:55.324 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Lock "8f62292f-5719-4b19-9188-3715b94493a7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v114: 177 pgs: 177 active+clean; 359 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 5.1 MiB/s wr, 222 
op/s Nov 23 04:58:55 localhost nova_compute[280939]: 2025-11-23 09:58:55.347 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:55 localhost nova_compute[280939]: 2025-11-23 09:58:55.347 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:55 localhost nova_compute[280939]: 2025-11-23 09:58:55.348 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:55 localhost nova_compute[280939]: 2025-11-23 09:58:55.348 280943 DEBUG nova.compute.resource_tracker [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:58:55 localhost nova_compute[280939]: 2025-11-23 09:58:55.349 280943 DEBUG oslo_concurrency.processutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:55 localhost systemd[1]: var-lib-containers-storage-overlay-33e89bb96670cdb05f6dfc2b3bccd242d493787fe55d0247dbc7ec6db9780e52-merged.mount: Deactivated successfully. Nov 23 04:58:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8f03ea8621bfb0a3711a12b210c8fb8d9fd297825fef1443227ba2836d938b9-userdata-shm.mount: Deactivated successfully. Nov 23 04:58:55 localhost systemd[1]: run-netns-qdhcp\x2d549f38a9\x2dabf8\x2d434a\x2d9d69\x2d4d818ecbd4f9.mount: Deactivated successfully. Nov 23 04:58:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:58:55 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3976893038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:58:55 localhost nova_compute[280939]: 2025-11-23 09:58:55.796 280943 DEBUG oslo_concurrency.processutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.053 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.054 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.279 280943 WARNING nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.281 280943 DEBUG nova.compute.resource_tracker [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11404MB free_disk=41.46288299560547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": 
"0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.282 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.282 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.343 280943 DEBUG nova.compute.resource_tracker [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Migration for instance 8f62292f-5719-4b19-9188-3715b94493a7 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.367 280943 DEBUG nova.compute.resource_tracker [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.398 280943 DEBUG nova.compute.resource_tracker [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Instance 1148b5a9-4da9-491f-8952-80c4a965fe6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.399 280943 DEBUG nova.compute.resource_tracker [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Migration fb898c10-d92a-4af4-b1bd-dff9da842a30 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.399 280943 DEBUG nova.compute.resource_tracker [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.400 280943 DEBUG nova.compute.resource_tracker [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.466 280943 DEBUG oslo_concurrency.processutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.548 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.661 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:58:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:58:56 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1794950639' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.939 280943 DEBUG oslo_concurrency.processutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.944 280943 DEBUG nova.compute.provider_tree [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.961 280943 DEBUG nova.scheduler.client.report [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.980 280943 DEBUG nova.compute.resource_tracker [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.980 280943 DEBUG oslo_concurrency.lockutils [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:58:56 localhost nova_compute[280939]: 2025-11-23 09:58:56.984 280943 INFO nova.compute.manager [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Migrating instance to np0005532585.localdomain finished successfully.#033[00m Nov 23 04:58:57 localhost nova_compute[280939]: 2025-11-23 09:58:57.150 280943 INFO nova.scheduler.client.report [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 57d9e088e75b4a3482d0e3a02bcce5be - - default default] Deleted allocation for migration fb898c10-d92a-4af4-b1bd-dff9da842a30#033[00m Nov 23 04:58:57 localhost nova_compute[280939]: 2025-11-23 09:58:57.151 280943 DEBUG nova.virt.libvirt.driver [None req-e0ec21a6-2619-4364-8f01-ea196be44fa4 ad1995ebc8334aa1bc1f8753f5df7d6f 
57d9e088e75b4a3482d0e3a02bcce5be - - default default] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m Nov 23 04:58:57 localhost sshd[311812]: main: sshd: ssh-rsa algorithm is disabled Nov 23 04:58:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 359 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 707 KiB/s rd, 5.1 MiB/s wr, 148 op/s Nov 23 04:58:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:58:58 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:58.008 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:24Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=737e82a6-2634-47df-b8a7-ec21a927cc3f, ip_allocation=immediate, mac_address=fa:16:3e:da:21:74, name=tempest-parent-1925970765, network_id=d679e465-8656-4403-afa0-724657d33ec4, port_security_enabled=True, project_id=253c88568a634476a6c1284eed6a9464, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=14, security_groups=['a3350144-9b09-432b-a32e-ef84bb8bf494'], standard_attr_id=645, status=DOWN, tags=[], tenant_id=253c88568a634476a6c1284eed6a9464, trunk_details=sub_ports=[], trunk_id=c4a0969a-aee9-4b3f-bd50-6138befdbf0e, updated_at=2025-11-23T09:58:57Z on network d679e465-8656-4403-afa0-724657d33ec4#033[00m Nov 23 04:58:58 localhost dnsmasq[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/addn_hosts - 2 addresses Nov 23 04:58:58 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/host Nov 23 04:58:58 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/opts Nov 23 04:58:58 localhost podman[311831]: 2025-11-23 09:58:58.220159362 +0000 UTC m=+0.064887413 container kill 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 04:58:58 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:58:58.475 262301 INFO neutron.agent.dhcp.agent [None req-12eb5ea1-7c67-4f5c-aa70-aa27a8f11404 - - - - - -] DHCP configuration for ports {'737e82a6-2634-47df-b8a7-ec21a927cc3f'} is completed#033[00m Nov 23 04:58:59 localhost neutron_sriov_agent[255165]: 2025-11-23 09:58:59.120 2 INFO neutron.agent.securitygroups_rpc [None req-f36f5a6d-ca31-44d9-bac1-0308580f3e95 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Security group member updated ['a3350144-9b09-432b-a32e-ef84bb8bf494']#033[00m Nov 23 04:58:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 177 
active+clean; 304 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 1.0 MiB/s rd, 6.4 MiB/s wr, 224 op/s Nov 23 04:58:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 04:58:59 localhost podman[311854]: 2025-11-23 09:58:59.896400928 +0000 UTC m=+0.082689271 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-type=git) Nov 23 04:58:59 localhost podman[311854]: 2025-11-23 09:58:59.913471152 +0000 UTC m=+0.099759535 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public) Nov 23 04:58:59 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 04:59:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 04:59:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2510104653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 04:59:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 04:59:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2510104653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 04:59:00 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:00.639 2 INFO neutron.agent.securitygroups_rpc [None req-2e659d4a-74ef-46b7-bd3b-2baf8d6d13fe 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Security group member updated ['a3350144-9b09-432b-a32e-ef84bb8bf494']#033[00m Nov 23 04:59:00 localhost dnsmasq[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/addn_hosts - 1 addresses Nov 23 04:59:00 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/host Nov 23 04:59:00 localhost podman[311892]: 2025-11-23 09:59:00.850735207 +0000 UTC m=+0.052071879 container kill 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Nov 23 04:59:00 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/opts Nov 23 04:59:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 177 active+clean; 304 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 655 KiB/s rd, 3.4 MiB/s wr, 139 op/s Nov 23 04:59:01 localhost nova_compute[280939]: 2025-11-23 09:59:01.549 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:01 localhost nova_compute[280939]: 2025-11-23 09:59:01.663 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:01 localhost dnsmasq[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/addn_hosts - 0 addresses Nov 23 04:59:01 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/host Nov 23 04:59:01 localhost podman[311931]: 2025-11-23 09:59:01.869512067 +0000 UTC m=+0.061917892 container kill 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 04:59:01 localhost dnsmasq-dhcp[309314]: read /var/lib/neutron/dhcp/d679e465-8656-4403-afa0-724657d33ec4/opts Nov 23 04:59:02 localhost ovn_controller[153771]: 2025-11-23T09:59:02Z|00101|binding|INFO|Releasing lport 3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac from this chassis (sb_readonly=0) Nov 23 04:59:02 localhost ovn_controller[153771]: 2025-11-23T09:59:02Z|00102|binding|INFO|Setting lport 3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac down in Southbound Nov 23 04:59:02 localhost nova_compute[280939]: 2025-11-23 09:59:02.075 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:02 localhost kernel: device tap3f6ffc5e-50 left promiscuous mode Nov 23 04:59:02 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:02.094 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-d679e465-8656-4403-afa0-724657d33ec4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d679e465-8656-4403-afa0-724657d33ec4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '253c88568a634476a6c1284eed6a9464', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a90e812-f218-49cd-a3ab-6bc1317ad730, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:59:02 localhost nova_compute[280939]: 2025-11-23 09:59:02.095 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:02 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:02.097 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 3f6ffc5e-50ad-4f62-8a0b-f8f2aa579eac in datapath d679e465-8656-4403-afa0-724657d33ec4 unbound from our chassis#033[00m Nov 23 04:59:02 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:02.099 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d679e465-8656-4403-afa0-724657d33ec4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:59:02 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:02.100 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[526252f9-c404-4b4c-b0db-25f2cbeaf2ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 
322961408 Nov 23 04:59:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e92 do_prune osdmap full prune enabled Nov 23 04:59:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e93 e93: 6 total, 6 up, 6 in Nov 23 04:59:02 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e93: 6 total, 6 up, 6 in Nov 23 04:59:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v119: 177 pgs: 177 active+clean; 304 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 786 KiB/s rd, 4.0 MiB/s wr, 166 op/s Nov 23 04:59:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:59:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:59:03 localhost podman[311956]: 2025-11-23 09:59:03.899663338 +0000 UTC m=+0.079907605 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:59:03 localhost podman[311956]: 2025-11-23 09:59:03.911321936 +0000 UTC m=+0.091566193 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 04:59:03 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 04:59:03 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:03.942 2 INFO neutron.agent.securitygroups_rpc [None req-9d56fe05-4ef8-4d55-837b-9cee7fc5dad7 4b677b000abe4b0687ff1afcd1016893 2a693c1f03094401b2a83bfa038e2d85 - - default default] Security group member updated ['e11e3507-78f9-4b55-80fe-2aa7bb5d486d']#033[00m Nov 23 04:59:03 localhost podman[311955]: 2025-11-23 09:59:03.958746932 +0000 UTC m=+0.141364752 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:59:03 localhost podman[311955]: 2025-11-23 09:59:03.974381381 +0000 UTC m=+0.156999191 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute) Nov 23 04:59:03 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:59:04 localhost ovn_controller[153771]: 2025-11-23T09:59:04Z|00103|binding|INFO|Releasing lport a8a61203-fe2e-4005-bcf2-6150709eadea from this chassis (sb_readonly=0) Nov 23 04:59:04 localhost nova_compute[280939]: 2025-11-23 09:59:04.331 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e93 do_prune osdmap full prune enabled Nov 23 04:59:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e94 e94: 6 total, 6 up, 6 in Nov 23 04:59:04 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e94: 6 total, 6 up, 6 in Nov 23 04:59:04 localhost nova_compute[280939]: 2025-11-23 09:59:04.752 280943 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:59:04 localhost nova_compute[280939]: 2025-11-23 09:59:04.752 280943 INFO nova.compute.manager [-] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] VM Stopped (Lifecycle Event)#033[00m Nov 23 04:59:04 localhost nova_compute[280939]: 2025-11-23 09:59:04.773 280943 DEBUG nova.compute.manager [None req-1a17e350-87af-4872-a0b7-f87e84dbf0ef - - - - - -] [instance: 8f62292f-5719-4b19-9188-3715b94493a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:59:04 localhost dnsmasq[309314]: exiting on receipt of SIGTERM Nov 23 04:59:04 localhost podman[312012]: 2025-11-23 09:59:04.907447069 +0000 UTC m=+0.059962422 container kill 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:59:04 localhost systemd[1]: libpod-5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473.scope: Deactivated successfully. 
Nov 23 04:59:04 localhost podman[312028]: 2025-11-23 09:59:04.981675748 +0000 UTC m=+0.053338559 container died 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:59:04 localhost systemd[1]: tmp-crun.EiE6dN.mount: Deactivated successfully. Nov 23 04:59:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473-userdata-shm.mount: Deactivated successfully. Nov 23 04:59:05 localhost podman[312028]: 2025-11-23 09:59:05.031021353 +0000 UTC m=+0.102684124 container remove 5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d679e465-8656-4403-afa0-724657d33ec4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:59:05 localhost systemd[1]: libpod-conmon-5cd2df160e7eeb69b2a505636ea15eb982df7b07def0d2e04ac9b17e76cb8473.scope: Deactivated successfully. Nov 23 04:59:05 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:05.060 262301 INFO neutron.agent.dhcp.agent [None req-aa92ed8b-e84b-40b1-9d2b-5bc0c5ef56f5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 04:59:05 localhost nova_compute[280939]: 2025-11-23 09:59:05.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:59:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v121: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 6.3 MiB/s rd, 7.8 MiB/s wr, 217 op/s Nov 23 04:59:05 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:05.519 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 04:59:05 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e94 do_prune osdmap full prune enabled Nov 23 04:59:05 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e95 e95: 6 total, 6 up, 6 in Nov 23 04:59:05 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e95: 6 total, 6 up, 6 in Nov 23 04:59:05 localhost systemd[1]: var-lib-containers-storage-overlay-fbc87de287b85c9211b7becc819793dcba1621a8b3227ff3622d2ce5fb11b42c-merged.mount: Deactivated successfully. Nov 23 04:59:05 localhost systemd[1]: run-netns-qdhcp\x2dd679e465\x2d8656\x2d4403\x2dafa0\x2d724657d33ec4.mount: Deactivated successfully. 
Nov 23 04:59:06 localhost nova_compute[280939]: 2025-11-23 09:59:06.700 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:06 localhost openstack_network_exporter[241732]: ERROR 09:59:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:59:06 localhost openstack_network_exporter[241732]: ERROR 09:59:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:59:06 localhost openstack_network_exporter[241732]: ERROR 09:59:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:59:06 localhost openstack_network_exporter[241732]: ERROR 09:59:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:59:06 localhost openstack_network_exporter[241732]: Nov 23 04:59:06 localhost openstack_network_exporter[241732]: ERROR 09:59:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:59:06 localhost openstack_network_exporter[241732]: Nov 23 04:59:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 138 op/s Nov 23 04:59:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:08 localhost nova_compute[280939]: 2025-11-23 09:59:08.134 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:59:08 localhost nova_compute[280939]: 2025-11-23 09:59:08.134 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:59:08 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:08.348 2 INFO neutron.agent.securitygroups_rpc [None req-0bae4724-8fa8-4216-8cb0-34bcdfbbc61a 4b677b000abe4b0687ff1afcd1016893 2a693c1f03094401b2a83bfa038e2d85 - - default default] Security group member updated ['e11e3507-78f9-4b55-80fe-2aa7bb5d486d']#033[00m Nov 23 04:59:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e95 do_prune osdmap full prune enabled Nov 23 04:59:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e96 e96: 6 total, 6 up, 6 in Nov 23 04:59:08 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e96: 6 total, 6 up, 6 in Nov 23 04:59:08 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:08.964 159516 DEBUG eventlet.wsgi.server [-] (159516) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Nov 23 04:59:08 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:08.967 159516 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015 Nov 23 04:59:08 localhost ovn_metadata_agent[159410]: Accept: */*#015 Nov 23 04:59:08 localhost ovn_metadata_agent[159410]: Connection: close#015 Nov 23 04:59:08 localhost ovn_metadata_agent[159410]: Content-Type: text/plain#015 Nov 23 04:59:08 localhost 
ovn_metadata_agent[159410]: Host: 169.254.169.254#015 Nov 23 04:59:08 localhost ovn_metadata_agent[159410]: User-Agent: curl/7.84.0#015 Nov 23 04:59:08 localhost ovn_metadata_agent[159410]: X-Forwarded-For: 10.100.0.12#015 Nov 23 04:59:08 localhost ovn_metadata_agent[159410]: X-Ovn-Network-Id: c5d88dfa-0db8-489e-a45a-e843e31a3b26 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Nov 23 04:59:09 localhost ovn_controller[153771]: 2025-11-23T09:59:09Z|00104|binding|INFO|Releasing lport a8a61203-fe2e-4005-bcf2-6150709eadea from this chassis (sb_readonly=0) Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.306 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 304 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 266 op/s Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.403 159516 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.403 159516 INFO eventlet.wsgi.server [-] 10.100.0.12,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200 len: 1671 time: 0.4372890#033[00m Nov 23 04:59:09 localhost haproxy-metadata-proxy-c5d88dfa-0db8-489e-a45a-e843e31a3b26[310270]: 10.100.0.12:60114 [23/Nov/2025:09:59:08.963] listener listener/metadata 0/0/0/440/440 200 1655 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1" Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.535 280943 DEBUG oslo_concurrency.lockutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Acquiring lock "1148b5a9-4da9-491f-8952-80c4a965fe6b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.535 280943 DEBUG oslo_concurrency.lockutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.536 280943 DEBUG oslo_concurrency.lockutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Acquiring lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.536 280943 DEBUG oslo_concurrency.lockutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.537 280943 DEBUG oslo_concurrency.lockutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.538 280943 INFO nova.compute.manager [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Terminating instance#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.540 280943 DEBUG nova.compute.manager [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Nov 23 04:59:09 localhost kernel: device tapa1846659-6b left promiscuous mode Nov 23 04:59:09 localhost NetworkManager[5966]: <info>  [1763891949.6113] device (tapa1846659-6b): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Nov 23 04:59:09 localhost ovn_controller[153771]: 2025-11-23T09:59:09Z|00105|binding|INFO|Releasing lport a1846659-6b91-4156-9939-085b30454143 from this chassis (sb_readonly=0) Nov 23 04:59:09 localhost ovn_controller[153771]: 2025-11-23T09:59:09Z|00106|binding|INFO|Setting lport a1846659-6b91-4156-9939-085b30454143 down in Southbound Nov 23 04:59:09 localhost ovn_controller[153771]: 2025-11-23T09:59:09Z|00107|binding|INFO|Removing iface tapa1846659-6b ovn-installed in OVS Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.627 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost ovn_controller[153771]: 2025-11-23T09:59:09Z|00108|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 04:59:09 localhost ovn_controller[153771]: 2025-11-23T09:59:09Z|00109|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 04:59:09 localhost ovn_controller[153771]: 2025-11-23T09:59:09Z|00110|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.636 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:90:40 10.100.0.12'], port_security=['fa:16:3e:da:90:40 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1148b5a9-4da9-491f-8952-80c4a965fe6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5d88dfa-0db8-489e-a45a-e843e31a3b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0497de4959b2494e8036eb39226430d6',
'neutron:revision_number': '4', 'neutron:security_group_ids': '2da1104f-77c5-475e-b21f-e52710edc8b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=54e00d1b-ba48-40e5-8228-7e38f918fa79, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=a1846659-6b91-4156-9939-085b30454143) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.637 159415 INFO neutron.agent.ovn.metadata.agent [-] Port a1846659-6b91-4156-9939-085b30454143 in datapath c5d88dfa-0db8-489e-a45a-e843e31a3b26 unbound from our chassis#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.639 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c5d88dfa-0db8-489e-a45a-e843e31a3b26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.640 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[236984ff-a391-448c-a912-5080f86c7363]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.641 159415 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26 namespace which is not needed anymore#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.647 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost ovn_controller[153771]: 2025-11-23T09:59:09Z|00111|binding|INFO|Releasing lport a8a61203-fe2e-4005-bcf2-6150709eadea from this chassis (sb_readonly=0) Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.654 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Deactivated successfully. Nov 23 04:59:09 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Consumed 13.957s CPU time. Nov 23 04:59:09 localhost systemd-machined[202731]: Machine qemu-2-instance-00000008 terminated. 
Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.715 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost ovn_controller[153771]: 2025-11-23T09:59:09Z|00112|binding|INFO|Releasing lport a8a61203-fe2e-4005-bcf2-6150709eadea from this chassis (sb_readonly=0) Nov 23 04:59:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e96 do_prune osdmap full prune enabled Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.731 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.741 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.741 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.741 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e97 e97: 6 total, 6 up, 6 in Nov 23 04:59:09 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e97: 6 total, 6 up, 6 in Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.775 280943 INFO nova.virt.libvirt.driver [-] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Instance destroyed successfully.#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.776 280943 DEBUG nova.objects.instance [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lazy-loading 'resources' on Instance uuid 1148b5a9-4da9-491f-8952-80c4a965fe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.787 280943 DEBUG nova.virt.libvirt.vif [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T09:58:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005532584.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=8,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7mkCBPEi7Dn/CBb8dKZmrfYWwMHpR6NvmRrgxeBvUuyX/aX8ONpvOK4sr/zvPyTz4T6NWXcMIu46JjJEnGSD+WDnEZHOWGkiVTo1TEgHUJg/fGAuwlF+wJ6Nu4MyBm5w==',key_name='tempest-keypair-974278285',keypairs=,launch_index=0,launched_at=2025-11-23T09:58:34Z,launched_on='np0005532584.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005532584.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='0497de4959b2494e8036eb39226430d6',ramdisk_id='',reservation_id='r-cm4mi548',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-1110187156',owner_user_name='tempest-ServersV294TestFqdnHostnames-1110187156-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-11-23T09:58:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='492e2909a77a4032ab6c29a26d12fb14',uuid=1148b5a9-4da9-491f-8952-80c4a965fe6b,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.788 280943 DEBUG nova.network.os_vif_util [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Converting VIF {"id": "a1846659-6b91-4156-9939-085b30454143", "address": "fa:16:3e:da:90:40", "network": {"id": "c5d88dfa-0db8-489e-a45a-e843e31a3b26", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1246892017-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "0497de4959b2494e8036eb39226430d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1846659-6b", "ovs_interfaceid": "a1846659-6b91-4156-9939-085b30454143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.789 280943 DEBUG nova.network.os_vif_util [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:90:40,bridge_name='br-int',has_traffic_filtering=True,id=a1846659-6b91-4156-9939-085b30454143,network=Network(c5d88dfa-0db8-489e-a45a-e843e31a3b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1846659-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.790 280943 DEBUG os_vif [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:90:40,bridge_name='br-int',has_traffic_filtering=True,id=a1846659-6b91-4156-9939-085b30454143,network=Network(c5d88dfa-0db8-489e-a45a-e843e31a3b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1846659-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.792 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.793 280943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1846659-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.795 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.797 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.800 280943 INFO os_vif [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Successfully unplugged vif 
VIFOpenVSwitch(active=True,address=fa:16:3e:da:90:40,bridge_name='br-int',has_traffic_filtering=True,id=a1846659-6b91-4156-9939-085b30454143,network=Network(c5d88dfa-0db8-489e-a45a-e843e31a3b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1846659-6b')#033[00m Nov 23 04:59:09 localhost neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26[310237]: [NOTICE] (310255) : haproxy version is 2.8.14-c23fe91 Nov 23 04:59:09 localhost neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26[310237]: [NOTICE] (310255) : path to executable is /usr/sbin/haproxy Nov 23 04:59:09 localhost neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26[310237]: [ALERT] (310255) : Current worker (310270) exited with code 143 (Terminated) Nov 23 04:59:09 localhost neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26[310237]: [WARNING] (310255) : All workers exited. Exiting... (0) Nov 23 04:59:09 localhost systemd[1]: libpod-2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc.scope: Deactivated successfully. Nov 23 04:59:09 localhost podman[312080]: 2025-11-23 09:59:09.840283469 +0000 UTC m=+0.076022925 container died 2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 04:59:09 localhost podman[312080]: 2025-11-23 09:59:09.885664413 +0000 UTC m=+0.121403819 container cleanup 2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 23 04:59:09 localhost podman[312118]: 2025-11-23 09:59:09.918015736 +0000 UTC m=+0.062734357 container cleanup 2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Nov 23 04:59:09 localhost systemd[1]: libpod-conmon-2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc.scope: Deactivated successfully. 
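The DelPortCommand logged above is the ovsdbapp transaction os-vif runs to detach the tap device from br-int. Below is a minimal standalone sketch of the same call, assuming ovsdbapp is installed and the OVSDB socket sits at the usual /run/openvswitch/db.sock; the CLI equivalent would be "ovs-vsctl --if-exists del-port br-int tapa1846659-6b".

# Sketch: delete an OVS port the way os-vif does through ovsdbapp.
# Assumes ovsdbapp is installed and OVSDB listens on the default local socket.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local socket path

idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
conn = connection.Connection(idl=idl, timeout=10)
ovs = impl_idl.OvsdbIdl(conn)

# Same semantics as DelPortCommand(port=..., bridge=br-int, if_exists=True):
# removes the port if present, succeeds quietly if it is already gone.
ovs.del_port('tapa1846659-6b', bridge='br-int', if_exists=True).execute(
    check_error=True)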
Nov 23 04:59:09 localhost podman[312136]: 2025-11-23 09:59:09.973545781 +0000 UTC m=+0.072393514 container remove 2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.978 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[fb45889b-3f83-471d-9a47-f77bc883f912]: (4, ('Sun Nov 23 09:59:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26 (2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc)\n2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc\nSun Nov 23 09:59:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26 (2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc)\n2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.980 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[62e6f46d-438b-4715-b26d-58245a97c1e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.981 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5d88dfa-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.983 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost kernel: device tapc5d88dfa-00 left promiscuous mode Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.985 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:09 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:09.989 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e61bc0-95c9-49a3-a811-c75a59e25dcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:09 localhost nova_compute[280939]: 2025-11-23 09:59:09.991 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:10.008 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[35079a10-6587-4e3d-b807-053d445d5d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:10.010 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a54b1b-f744-47cc-a527-88da0b6d71eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:10 localhost 
ovn_metadata_agent[159410]: 2025-11-23 09:59:10.027 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[c81f4a91-0693-42e3-829c-434d29f5a7da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1184878, 'reachable_time': 30446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', 
{'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312154, 'error': None, 'target': 'ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:10.029 159521 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c5d88dfa-0db8-489e-a45a-e843e31a3b26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Nov 23 04:59:10 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:10.029 159521 DEBUG oslo.privsep.daemon [-] privsep: reply[9a553e33-0926-4d62-8bf2-0ca36581748f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.158 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.159 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.159 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.456 280943 INFO nova.virt.libvirt.driver [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Deleting instance files /var/lib/nova/instances/1148b5a9-4da9-491f-8952-80c4a965fe6b_del#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.457 280943 INFO nova.virt.libvirt.driver [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Deletion of /var/lib/nova/instances/1148b5a9-4da9-491f-8952-80c4a965fe6b_del complete#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.525 280943 INFO nova.compute.manager [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.525 280943 DEBUG oslo.service.loopingcall [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.526 280943 DEBUG nova.compute.manager [-] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.526 280943 DEBUG nova.network.neutron [-] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.625 280943 DEBUG nova.compute.manager [req-8f8ff001-bd1c-462e-9f2d-17302f998f88 req-dc82ad25-d7c1-4e03-a70d-563c95f2c3eb b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Received event network-vif-unplugged-a1846659-6b91-4156-9939-085b30454143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.625 280943 DEBUG oslo_concurrency.lockutils [req-8f8ff001-bd1c-462e-9f2d-17302f998f88 req-dc82ad25-d7c1-4e03-a70d-563c95f2c3eb b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.626 280943 DEBUG oslo_concurrency.lockutils [req-8f8ff001-bd1c-462e-9f2d-17302f998f88 req-dc82ad25-d7c1-4e03-a70d-563c95f2c3eb b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.626 280943 DEBUG oslo_concurrency.lockutils [req-8f8ff001-bd1c-462e-9f2d-17302f998f88 req-dc82ad25-d7c1-4e03-a70d-563c95f2c3eb b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.626 280943 DEBUG nova.compute.manager [req-8f8ff001-bd1c-462e-9f2d-17302f998f88 req-dc82ad25-d7c1-4e03-a70d-563c95f2c3eb b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] No waiting events found dispatching network-vif-unplugged-a1846659-6b91-4156-9939-085b30454143 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:59:10 localhost nova_compute[280939]: 2025-11-23 09:59:10.626 280943 DEBUG nova.compute.manager [req-8f8ff001-bd1c-462e-9f2d-17302f998f88 req-dc82ad25-d7c1-4e03-a70d-563c95f2c3eb b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Received event 
network-vif-unplugged-a1846659-6b91-4156-9939-085b30454143 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Nov 23 04:59:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e97 do_prune osdmap full prune enabled Nov 23 04:59:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e98 e98: 6 total, 6 up, 6 in Nov 23 04:59:10 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e98: 6 total, 6 up, 6 in Nov 23 04:59:10 localhost systemd[1]: var-lib-containers-storage-overlay-88bb9ab892f16eabbcae1078842070edcaf0dbe52499824bd79be78f3af4a965-merged.mount: Deactivated successfully. Nov 23 04:59:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c7bdb8727a12b8b13822841b8ee719d77f4d8184b3486c0369f85a323aff9fc-userdata-shm.mount: Deactivated successfully. Nov 23 04:59:10 localhost systemd[1]: run-netns-ovnmeta\x2dc5d88dfa\x2d0db8\x2d489e\x2da45a\x2de843e31a3b26.mount: Deactivated successfully. Nov 23 04:59:11 localhost nova_compute[280939]: 2025-11-23 09:59:11.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:59:11 localhost nova_compute[280939]: 2025-11-23 09:59:11.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 04:59:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 304 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 8.3 KiB/s wr, 133 op/s Nov 23 04:59:11 localhost nova_compute[280939]: 2025-11-23 09:59:11.701 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:11 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:11.979 262301 INFO neutron.agent.linux.ip_lib [None req-b684a9fa-97f2-4303-863a-da8a2988e4c9 - - - - - -] Device tapb8c03e45-40 cannot be used as it has no MAC address#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.002 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:12 localhost kernel: device tapb8c03e45-40 entered promiscuous mode Nov 23 04:59:12 localhost systemd-udevd[312056]: Network interface NamePolicy= disabled on kernel command line. Nov 23 04:59:12 localhost NetworkManager[5966]: [1763891952.0094] manager: (tapb8c03e45-40): new Generic device (/org/freedesktop/NetworkManager/Devices/25) Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.008 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:12 localhost ovn_controller[153771]: 2025-11-23T09:59:12Z|00113|binding|INFO|Claiming lport b8c03e45-4095-4592-9111-02ed44b19cad for this chassis. 
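ovn-controller has just claimed lport b8c03e45-4095-4592-9111-02ed44b19cad for this chassis. A small sketch of how the resulting Port_Binding row could be cross-checked from the node, assuming ovn-sbctl is installed and can reach the southbound database:

# Sketch: confirm which chassis holds a logical port, mirroring the
# "Claiming lport ..." message from ovn-controller above.
import json
import subprocess

LPORT = 'b8c03e45-4095-4592-9111-02ed44b19cad'  # logical port from the log above

out = subprocess.run(
    ['ovn-sbctl', '--format=json', 'find', 'Port_Binding',
     'logical_port=' + LPORT],
    check=True, capture_output=True, text=True).stdout

table = json.loads(out)  # OVSDB table output: {'headings': [...], 'data': [[...]]}
if not table['data']:
    raise SystemExit('no Port_Binding row for ' + LPORT)
row = dict(zip(table['headings'], table['data'][0]))
print(row['logical_port'], row['chassis'], row['up'])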
Nov 23 04:59:12 localhost ovn_controller[153771]: 2025-11-23T09:59:12Z|00114|binding|INFO|b8c03e45-4095-4592-9111-02ed44b19cad: Claiming unknown Nov 23 04:59:12 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:12.022 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-79425885-b636-4c43-b7ab-b1f8779b709d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79425885-b636-4c43-b7ab-b1f8779b709d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1b218a298814f81811d06e4ddeeca2f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e61ca71-2ede-46ab-977e-dbd69bc63a1f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b8c03e45-4095-4592-9111-02ed44b19cad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:59:12 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:12.024 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b8c03e45-4095-4592-9111-02ed44b19cad in datapath 79425885-b636-4c43-b7ab-b1f8779b709d bound to our chassis#033[00m Nov 23 04:59:12 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:12.025 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 79425885-b636-4c43-b7ab-b1f8779b709d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 04:59:12 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:12.030 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5a88c6-f839-4070-af2d-e3549a76e791]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:12 localhost ovn_controller[153771]: 2025-11-23T09:59:12Z|00115|binding|INFO|Setting lport b8c03e45-4095-4592-9111-02ed44b19cad ovn-installed in OVS Nov 23 04:59:12 localhost ovn_controller[153771]: 2025-11-23T09:59:12Z|00116|binding|INFO|Setting lport b8c03e45-4095-4592-9111-02ed44b19cad up in Southbound Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.061 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.102 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.126 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.129 280943 DEBUG oslo_service.periodic_task [None 
req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:59:12 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:12.394 2 INFO neutron.agent.securitygroups_rpc [req-a16da276-a12d-4c8d-9117-64c33f913ca9 req-9efe9caa-9e38-4b50-8b4e-539fa928addc 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Security group member updated ['2da1104f-77c5-475e-b21f-e52710edc8b5']#033[00m Nov 23 04:59:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e98 do_prune osdmap full prune enabled Nov 23 04:59:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e99 e99: 6 total, 6 up, 6 in Nov 23 04:59:12 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e99: 6 total, 6 up, 6 in Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources 
found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 09:59:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.636 280943 DEBUG nova.network.neutron [-] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.668 280943 INFO nova.compute.manager [-] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Took 2.14 seconds to deallocate network for instance.#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.687 280943 DEBUG nova.compute.manager [req-f1d85bef-858c-4e47-a753-7f940bbd42ca req-5ee5d1c8-ee34-4164-ae78-c0645200e527 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Received event network-vif-plugged-a1846659-6b91-4156-9939-085b30454143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.688 280943 DEBUG oslo_concurrency.lockutils [req-f1d85bef-858c-4e47-a753-7f940bbd42ca req-5ee5d1c8-ee34-4164-ae78-c0645200e527 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.689 280943 DEBUG oslo_concurrency.lockutils [req-f1d85bef-858c-4e47-a753-7f940bbd42ca req-5ee5d1c8-ee34-4164-ae78-c0645200e527 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.690 280943 DEBUG oslo_concurrency.lockutils [req-f1d85bef-858c-4e47-a753-7f940bbd42ca req-5ee5d1c8-ee34-4164-ae78-c0645200e527 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.690 280943 DEBUG 
nova.compute.manager [req-f1d85bef-858c-4e47-a753-7f940bbd42ca req-5ee5d1c8-ee34-4164-ae78-c0645200e527 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] No waiting events found dispatching network-vif-plugged-a1846659-6b91-4156-9939-085b30454143 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.691 280943 WARNING nova.compute.manager [req-f1d85bef-858c-4e47-a753-7f940bbd42ca req-5ee5d1c8-ee34-4164-ae78-c0645200e527 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Received unexpected event network-vif-plugged-a1846659-6b91-4156-9939-085b30454143 for instance with vm_state active and task_state deleting.#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.705 280943 DEBUG nova.compute.manager [req-85d68435-fe39-4cce-9fc4-16201a8cc4f1 req-78eb9bdd-915c-47f7-950a-392f5d0e577b b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Received event network-vif-deleted-a1846659-6b91-4156-9939-085b30454143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.710 280943 DEBUG oslo_concurrency.lockutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.711 280943 DEBUG oslo_concurrency.lockutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:12 localhost nova_compute[280939]: 2025-11-23 09:59:12.778 280943 DEBUG oslo_concurrency.processutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:59:12 localhost podman[312220]: Nov 23 04:59:12 localhost podman[312220]: 2025-11-23 09:59:12.997895377 +0000 UTC m=+0.096507385 container create 88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79425885-b636-4c43-b7ab-b1f8779b709d, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 04:59:13 localhost systemd[1]: Started libpod-conmon-88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22.scope. 
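The resource tracker above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" while updating usage. A minimal sketch of the same query with the cluster totals pulled out of the JSON; the stats.total_bytes / total_used_bytes / total_avail_bytes key names are an assumption based on recent Ceph releases:

# Sketch: run the same "ceph df" query logged above and print cluster totals.
import json
import subprocess

cmd = ['ceph', 'df', '--format=json',
       '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
df = json.loads(
    subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)

stats = df['stats']  # cluster-wide totals (assumed key names)
gib = 1024 ** 3
print('total %.1f GiB' % (stats['total_bytes'] / gib))
print('used  %.1f GiB' % (stats['total_used_bytes'] / gib))
print('avail %.1f GiB' % (stats['total_avail_bytes'] / gib))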
Nov 23 04:59:13 localhost podman[312220]: 2025-11-23 09:59:12.95470673 +0000 UTC m=+0.053318768 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 04:59:13 localhost systemd[1]: Started libcrun container. Nov 23 04:59:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18ce08d112e50bd1c0db5816e3bbbd0ab22081b68a61f655319b07599cd17d46/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:59:13 localhost podman[312220]: 2025-11-23 09:59:13.074154268 +0000 UTC m=+0.172766286 container init 88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79425885-b636-4c43-b7ab-b1f8779b709d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 04:59:13 localhost podman[312220]: 2025-11-23 09:59:13.08369849 +0000 UTC m=+0.182310498 container start 88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79425885-b636-4c43-b7ab-b1f8779b709d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 04:59:13 localhost dnsmasq[312257]: started, version 2.85 cachesize 150 Nov 23 04:59:13 localhost dnsmasq[312257]: DNS service limited to local subnets Nov 23 04:59:13 localhost dnsmasq[312257]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 04:59:13 localhost dnsmasq[312257]: warning: no upstream servers configured Nov 23 04:59:13 localhost dnsmasq-dhcp[312257]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 04:59:13 localhost dnsmasq[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/addn_hosts - 0 addresses Nov 23 04:59:13 localhost dnsmasq-dhcp[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/host Nov 23 04:59:13 localhost dnsmasq-dhcp[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/opts Nov 23 04:59:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:59:13 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3568239067' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:59:13 localhost nova_compute[280939]: 2025-11-23 09:59:13.279 280943 DEBUG oslo_concurrency.processutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:59:13 localhost nova_compute[280939]: 2025-11-23 09:59:13.285 280943 DEBUG nova.compute.provider_tree [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:59:13 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:13.292 262301 INFO neutron.agent.dhcp.agent [None req-ab6fe5aa-ffef-4eea-a6df-48dd790e3c9c - - - - - -] DHCP configuration for ports {'d90fc4e9-19fb-429c-be00-693789efcdfa'} is completed#033[00m Nov 23 04:59:13 localhost nova_compute[280939]: 2025-11-23 09:59:13.309 280943 DEBUG nova.scheduler.client.report [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:59:13 localhost nova_compute[280939]: 2025-11-23 09:59:13.328 280943 DEBUG oslo_concurrency.lockutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 304 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 115 KiB/s rd, 9.9 KiB/s wr, 158 op/s Nov 23 04:59:13 localhost nova_compute[280939]: 2025-11-23 09:59:13.367 280943 INFO nova.scheduler.client.report [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Deleted allocations for instance 1148b5a9-4da9-491f-8952-80c4a965fe6b#033[00m Nov 23 04:59:13 localhost nova_compute[280939]: 2025-11-23 09:59:13.421 280943 DEBUG oslo_concurrency.lockutils [None req-a16da276-a12d-4c8d-9117-64c33f913ca9 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Lock "1148b5a9-4da9-491f-8952-80c4a965fe6b" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 3.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:14 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd 
e99 do_prune osdmap full prune enabled Nov 23 04:59:14 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e100 e100: 6 total, 6 up, 6 in Nov 23 04:59:14 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e100: 6 total, 6 up, 6 in Nov 23 04:59:14 localhost nova_compute[280939]: 2025-11-23 09:59:14.795 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:59:14 localhost podman[312260]: 2025-11-23 09:59:14.8969202 +0000 UTC m=+0.083097142 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Nov 23 04:59:14 localhost podman[312260]: 2025-11-23 09:59:14.90767637 +0000 UTC m=+0.093853322 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 04:59:14 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:59:15 localhost nova_compute[280939]: 2025-11-23 09:59:15.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:59:15 localhost nova_compute[280939]: 2025-11-23 09:59:15.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:59:15 localhost nova_compute[280939]: 2025-11-23 09:59:15.328 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:15 localhost nova_compute[280939]: 2025-11-23 09:59:15.329 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:15 localhost nova_compute[280939]: 2025-11-23 09:59:15.329 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:15 localhost nova_compute[280939]: 2025-11-23 09:59:15.330 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 04:59:15 localhost nova_compute[280939]: 2025-11-23 09:59:15.330 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:59:15 localhost 
ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 177 active+clean; 304 MiB data, 1020 MiB used, 41 GiB / 42 GiB avail; 8.6 MiB/s rd, 8.4 MiB/s wr, 375 op/s Nov 23 04:59:15 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:15.349 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:15Z, description=, device_id=9598f82f-5487-414c-b61d-d64ce4fc0187, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b9b8366e-e0d4-4eba-b5fa-36b50b9a9460, ip_allocation=immediate, mac_address=fa:16:3e:1d:cb:56, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:09Z, description=, dns_domain=, id=79425885-b636-4c43-b7ab-b1f8779b709d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-1785165777-network, port_security_enabled=True, project_id=c1b218a298814f81811d06e4ddeeca2f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=22254, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=866, status=ACTIVE, subnets=['712232d6-2c27-4914-82e6-bad44ac4480f'], tags=[], tenant_id=c1b218a298814f81811d06e4ddeeca2f, updated_at=2025-11-23T09:59:11Z, vlan_transparent=None, network_id=79425885-b636-4c43-b7ab-b1f8779b709d, port_security_enabled=False, project_id=c1b218a298814f81811d06e4ddeeca2f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=883, status=DOWN, tags=[], tenant_id=c1b218a298814f81811d06e4ddeeca2f, updated_at=2025-11-23T09:59:15Z on network 79425885-b636-4c43-b7ab-b1f8779b709d#033[00m Nov 23 04:59:15 localhost systemd[1]: tmp-crun.b2j70i.mount: Deactivated successfully. Nov 23 04:59:15 localhost dnsmasq[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/addn_hosts - 1 addresses Nov 23 04:59:15 localhost dnsmasq-dhcp[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/host Nov 23 04:59:15 localhost dnsmasq-dhcp[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/opts Nov 23 04:59:15 localhost podman[312312]: 2025-11-23 09:59:15.592770316 +0000 UTC m=+0.081938557 container kill 88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79425885-b636-4c43-b7ab-b1f8779b709d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 23 04:59:15 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:59:15 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1667305845' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:59:15 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:15.874 262301 INFO neutron.agent.dhcp.agent [None req-2625de0a-90bd-413a-b59f-446651a8be45 - - - - - -] DHCP configuration for ports {'b9b8366e-e0d4-4eba-b5fa-36b50b9a9460'} is completed#033[00m Nov 23 04:59:15 localhost nova_compute[280939]: 2025-11-23 09:59:15.891 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.082 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.084 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11611MB free_disk=41.7004280090332GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.084 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by 
"nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.084 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.160 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.161 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.179 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:59:16 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:59:16 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1793000182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.633 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.638 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.656 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.679 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.679 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:16 localhost nova_compute[280939]: 2025-11-23 09:59:16.704 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:17 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:17.035 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:15Z, description=, device_id=9598f82f-5487-414c-b61d-d64ce4fc0187, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b9b8366e-e0d4-4eba-b5fa-36b50b9a9460, ip_allocation=immediate, mac_address=fa:16:3e:1d:cb:56, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:09Z, description=, dns_domain=, id=79425885-b636-4c43-b7ab-b1f8779b709d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-1785165777-network, port_security_enabled=True, project_id=c1b218a298814f81811d06e4ddeeca2f, provider:network_type=geneve, 
provider:physical_network=None, provider:segmentation_id=22254, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=866, status=ACTIVE, subnets=['712232d6-2c27-4914-82e6-bad44ac4480f'], tags=[], tenant_id=c1b218a298814f81811d06e4ddeeca2f, updated_at=2025-11-23T09:59:11Z, vlan_transparent=None, network_id=79425885-b636-4c43-b7ab-b1f8779b709d, port_security_enabled=False, project_id=c1b218a298814f81811d06e4ddeeca2f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=883, status=DOWN, tags=[], tenant_id=c1b218a298814f81811d06e4ddeeca2f, updated_at=2025-11-23T09:59:15Z on network 79425885-b636-4c43-b7ab-b1f8779b709d#033[00m Nov 23 04:59:17 localhost podman[239764]: time="2025-11-23T09:59:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:59:17 localhost podman[239764]: @ - - [23/Nov/2025:09:59:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156323 "" "Go-http-client/1.1" Nov 23 04:59:17 localhost podman[239764]: @ - - [23/Nov/2025:09:59:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19202 "" "Go-http-client/1.1" Nov 23 04:59:17 localhost podman[312377]: 2025-11-23 09:59:17.288306563 +0000 UTC m=+0.063380078 container kill 88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79425885-b636-4c43-b7ab-b1f8779b709d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118) Nov 23 04:59:17 localhost dnsmasq[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/addn_hosts - 1 addresses Nov 23 04:59:17 localhost dnsmasq-dhcp[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/host Nov 23 04:59:17 localhost dnsmasq-dhcp[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/opts Nov 23 04:59:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 177 active+clean; 304 MiB data, 1020 MiB used, 41 GiB / 42 GiB avail; 7.3 MiB/s rd, 7.1 MiB/s wr, 319 op/s Nov 23 04:59:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e100 do_prune osdmap full prune enabled Nov 23 04:59:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e101 e101: 6 total, 6 up, 6 in Nov 23 04:59:17 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e101: 6 total, 6 up, 6 in Nov 23 04:59:17 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:17.656 262301 INFO neutron.agent.dhcp.agent [None req-1275c817-1a99-4573-9cc6-9ef59b9e2156 - - - - - -] DHCP configuration for ports {'b9b8366e-e0d4-4eba-b5fa-36b50b9a9460'} is completed#033[00m Nov 23 04:59:18 localhost nova_compute[280939]: 2025-11-23 09:59:18.674 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 04:59:18 localhost ovn_controller[153771]: 2025-11-23T09:59:18Z|00117|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 04:59:18 localhost ovn_controller[153771]: 2025-11-23T09:59:18Z|00118|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 04:59:18 localhost ovn_controller[153771]: 2025-11-23T09:59:18Z|00119|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0 Nov 23 04:59:18 localhost nova_compute[280939]: 2025-11-23 09:59:18.775 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:18 localhost nova_compute[280939]: 2025-11-23 09:59:18.791 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:18 localhost nova_compute[280939]: 2025-11-23 09:59:18.797 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:18 localhost nova_compute[280939]: 2025-11-23 09:59:18.828 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:18 localhost nova_compute[280939]: 2025-11-23 09:59:18.839 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:18 localhost nova_compute[280939]: 2025-11-23 09:59:18.845 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 177 active+clean; 225 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 10 MiB/s rd, 6.8 MiB/s wr, 457 op/s Nov 23 04:59:19 localhost nova_compute[280939]: 2025-11-23 09:59:19.708 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:19 localhost nova_compute[280939]: 2025-11-23 09:59:19.777 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:19 localhost nova_compute[280939]: 2025-11-23 09:59:19.796 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:20 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:20.225 2 INFO neutron.agent.securitygroups_rpc [req-e10db1c6-11f3-4ff7-8a20-47058bab960f req-be246029-4620-443a-8a27-dc66d74bf8a5 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['df6d8f7b-74cc-4864-a7e2-24c32662f7e1']#033[00m Nov 23 04:59:20 localhost nova_compute[280939]: 2025-11-23 09:59:20.539 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:20 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:20.719 2 INFO neutron.agent.securitygroups_rpc [req-b63466c9-444c-4747-806e-6e70f6ca8dbf req-1b97f9eb-afe9-470d-a373-457d04103769 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated 
['486481c0-58d7-474c-ac28-9109e6d75e3e']#033[00m Nov 23 04:59:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 225 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 8.9 MiB/s rd, 5.8 MiB/s wr, 392 op/s Nov 23 04:59:21 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:21.664 2 INFO neutron.agent.securitygroups_rpc [req-5b5e1aae-0130-44ef-b3c8-2b4a33b1f155 req-73054398-e6b7-4548-a319-a659c6c54985 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['9f0e447c-560b-475e-bb8e-29f8dd459211']#033[00m Nov 23 04:59:21 localhost nova_compute[280939]: 2025-11-23 09:59:21.706 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:59:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:59:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 04:59:21 localhost podman[312401]: 2025-11-23 09:59:21.896577165 +0000 UTC m=+0.082083432 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:59:21 localhost podman[312401]: 2025-11-23 09:59:21.937273735 +0000 UTC m=+0.122780002 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, 
config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:59:21 localhost systemd[1]: tmp-crun.nVOgqI.mount: Deactivated successfully. Nov 23 04:59:21 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:59:21 localhost podman[312403]: 2025-11-23 09:59:21.960256671 +0000 UTC m=+0.138828106 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 04:59:22 localhost podman[312402]: 2025-11-23 09:59:22.005096998 +0000 UTC m=+0.186294853 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 
'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 04:59:22 localhost podman[312402]: 2025-11-23 09:59:22.015797417 +0000 UTC m=+0.196995252 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 04:59:22 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
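Editor's note on the healthcheck entries above: the "Started /usr/bin/podman healthcheck run <id>" / "container health_status ... health_status=healthy" / "exec_died" / "Deactivated successfully" sequences come from transient systemd units that periodically execute each container's configured healthcheck (the 'healthcheck': {'test': '/openstack/healthcheck ...'} entries in config_data). A minimal sketch of that probe, assuming podman is installed on the host; the container ID is copied from the node_exporter entries above, and this is illustrative rather than the edpm_ansible implementation:

    # Sketch: run a container's configured healthcheck the same way the
    # transient "podman healthcheck run" units in the log do, and report
    # the result. The container ID is taken from the log entries above.
    import subprocess

    CONTAINER = "8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58"  # node_exporter

    def healthcheck(container: str) -> bool:
        # "podman healthcheck run" executes the container's healthcheck
        # command (here: /openstack/healthcheck node_exporter) and exits
        # 0 when healthy, non-zero otherwise.
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        status = "healthy" if healthcheck(CONTAINER) else "unhealthy"
        print(f"{CONTAINER[:12]}: {status}")

A passing run is what produces the health_status=healthy label on the container event, after which the transient unit deactivates, as logged above.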
Nov 23 04:59:22 localhost podman[312403]: 2025-11-23 09:59:22.069142876 +0000 UTC m=+0.247714291 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible) Nov 23 04:59:22 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:59:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e101 do_prune osdmap full prune enabled Nov 23 04:59:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e102 e102: 6 total, 6 up, 6 in Nov 23 04:59:22 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e102: 6 total, 6 up, 6 in Nov 23 04:59:22 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:22.692 2 INFO neutron.agent.securitygroups_rpc [req-2f39ed52-ad77-4aa6-9471-651d11ecbf13 req-f42d6273-96eb-4a84-b6a8-20685191fd4a 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['9d3d4eb8-5be7-4867-b930-e62b16d22d58']#033[00m Nov 23 04:59:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_09:59:23 Nov 23 04:59:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 04:59:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 04:59:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['volumes', 'manila_metadata', 'backups', 'manila_data', 'vms', 'images', '.mgr'] Nov 23 04:59:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 04:59:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 225 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 1.7 KiB/s wr, 130 op/s Nov 23 04:59:23 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:23.353 262301 INFO neutron.agent.linux.ip_lib [None req-ab6988d3-9176-417f-bb59-7e805e011e80 - - - - - -] Device tapdc05ece0-3e cannot be used as it has no MAC address#033[00m Nov 23 04:59:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
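Editor's note on the recurring "Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" entries above: nova_compute's resource audit shells out to ceph to size the RBD-backed disk pool, and ceph-mon records the matching {"prefix": "df", "format": "json"} dispatch in its audit channel. A minimal sketch of that capacity probe, assuming the same client id and conf path as in the log; the JSON fields used are the standard `ceph df` output, but treat this as an illustration rather than nova's actual parsing code:

    # Sketch: the Ceph capacity probe behind the "ceph df --format=json ..."
    # subprocess calls logged above. --id/--conf values are taken from the log.
    import json
    import subprocess

    def ceph_capacity(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
            text=True,
        )
        stats = json.loads(out)["stats"]
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

    if __name__ == "__main__":
        total, avail = ceph_capacity()
        # Comparable to the pgmap summaries above, e.g. "41 GiB / 42 GiB avail".
        print(f"{avail:.1f} GiB free of {total:.1f} GiB")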
Nov 23 04:59:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:59:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:59:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:59:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:59:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:59:23 localhost nova_compute[280939]: 2025-11-23 09:59:23.376 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 04:59:23 localhost kernel: device tapdc05ece0-3e entered promiscuous mode Nov 23 04:59:23 localhost NetworkManager[5966]: [1763891963.3840] manager: (tapdc05ece0-3e): new Generic device (/org/freedesktop/NetworkManager/Devices/26) Nov 23 04:59:23 localhost nova_compute[280939]: 2025-11-23 09:59:23.384 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:23 localhost ovn_controller[153771]: 2025-11-23T09:59:23Z|00120|binding|INFO|Claiming lport dc05ece0-3ed1-4052-ae32-94aa08bbcb4f for this chassis. Nov 23 04:59:23 localhost ovn_controller[153771]: 2025-11-23T09:59:23Z|00121|binding|INFO|dc05ece0-3ed1-4052-ae32-94aa08bbcb4f: Claiming unknown Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0065783016645728646 of space, bias 1.0, pg target 1.3156603329145728 quantized to 32 (current 32) Nov 23 04:59:23 localhost systemd-udevd[312480]: Network interface NamePolicy= disabled on kernel command line. 
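Editor's note on the paired Acquiring lock / Lock acquired / Lock released DEBUG messages for "compute_resources" earlier in this window: they are emitted by oslo.concurrency's lockutils wrapper around the resource tracker's update methods, so concurrent periodic tasks serialize on one named lock. A minimal sketch of that decorator pattern, assuming oslo.concurrency is installed; the function body is illustrative, not nova's ResourceTracker:

    # Sketch: the lock pattern behind the 'Acquiring lock "compute_resources"
    # ... acquired ... released' messages above. oslo.concurrency itself logs
    # those lines whenever a synchronized callable runs.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Runs with the named lock held, so e.g. the periodic resource audit
        # and cache cleanup cannot interleave. Illustrative body only.
        print("auditing resources under the compute_resources lock")

    if __name__ == "__main__":
        update_available_resource()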
Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32) Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 04:59:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019465818676716918 quantized to 16 (current 16) Nov 23 04:59:23 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:23.398 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-96508c68-ee4e-447a-b0f1-88c349fb74a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96508c68-ee4e-447a-b0f1-88c349fb74a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '249e3ad2b54244c0a15cd0a2ab31d9ab', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c6d351b-9860-4db6-88bb-aef58f7ea6a9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=dc05ece0-3ed1-4052-ae32-94aa08bbcb4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:59:23 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:23.400 159415 INFO neutron.agent.ovn.metadata.agent [-] Port dc05ece0-3ed1-4052-ae32-94aa08bbcb4f in datapath 96508c68-ee4e-447a-b0f1-88c349fb74a4 bound to our chassis#033[00m Nov 23 04:59:23 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:23.403 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 96508c68-ee4e-447a-b0f1-88c349fb74a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 04:59:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 04:59:23 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:23.405 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[ad35a03e-b46c-4b88-b0fd-77bedcf9aa93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:59:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:59:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:59:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:59:23 localhost journal[229336]: ethtool ioctl error on tapdc05ece0-3e: No such device Nov 23 04:59:23 localhost ovn_controller[153771]: 2025-11-23T09:59:23Z|00122|binding|INFO|Setting lport dc05ece0-3ed1-4052-ae32-94aa08bbcb4f ovn-installed in OVS Nov 23 04:59:23 localhost ovn_controller[153771]: 2025-11-23T09:59:23Z|00123|binding|INFO|Setting lport dc05ece0-3ed1-4052-ae32-94aa08bbcb4f up in Southbound Nov 23 04:59:23 localhost journal[229336]: ethtool ioctl error on tapdc05ece0-3e: No such device Nov 23 04:59:23 localhost nova_compute[280939]: 2025-11-23 09:59:23.419 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:23 localhost journal[229336]: ethtool ioctl error on tapdc05ece0-3e: No such device Nov 23 04:59:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 04:59:23 localhost journal[229336]: ethtool ioctl error on tapdc05ece0-3e: No such device Nov 23 04:59:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 04:59:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 04:59:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 04:59:23 localhost journal[229336]: ethtool ioctl error on tapdc05ece0-3e: No such device Nov 23 04:59:23 localhost journal[229336]: ethtool ioctl error on tapdc05ece0-3e: No such device Nov 23 04:59:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 04:59:23 localhost journal[229336]: ethtool ioctl error on tapdc05ece0-3e: No such device Nov 23 04:59:23 localhost journal[229336]: ethtool ioctl error on tapdc05ece0-3e: No such device Nov 23 04:59:23 localhost nova_compute[280939]: 2025-11-23 09:59:23.459 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:23 localhost nova_compute[280939]: 2025-11-23 09:59:23.482 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:23 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:23.526 2 INFO neutron.agent.securitygroups_rpc [req-9fd26688-f794-4469-9fd8-a5b40d60592d req-5ebc8662-650f-469d-8c45-5ce5c30495b8 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['b40df903-b9f3-4a1c-8419-71383dae71f9']#033[00m Nov 23 04:59:23 localhost neutron_sriov_agent[255165]: 
2025-11-23 09:59:23.991 2 INFO neutron.agent.securitygroups_rpc [req-e3ac8e09-5876-4e41-80c7-46043b4c6329 req-ca05e120-41d0-4e85-be51-0d5858a51936 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['b40df903-b9f3-4a1c-8419-71383dae71f9']#033[00m Nov 23 04:59:24 localhost podman[312552]: Nov 23 04:59:24 localhost podman[312552]: 2025-11-23 09:59:24.323060044 +0000 UTC m=+0.089401197 container create 3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96508c68-ee4e-447a-b0f1-88c349fb74a4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:59:24 localhost systemd[1]: Started libpod-conmon-3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813.scope. Nov 23 04:59:24 localhost podman[312552]: 2025-11-23 09:59:24.279856518 +0000 UTC m=+0.046197691 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 04:59:24 localhost systemd[1]: tmp-crun.UuvcPB.mount: Deactivated successfully. Nov 23 04:59:24 localhost systemd[1]: Started libcrun container. Nov 23 04:59:24 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:24.400 2 INFO neutron.agent.securitygroups_rpc [None req-55c9cfb2-f59a-42d6-ace6-61788e22f102 f30cb7ce3bac485ca16e284ef2514162 493833d8fb394637b29c3fb2052aca9c - - default default] Security group member updated ['6a5ca8fc-febe-492b-8ed6-1c2faceb11b7']#033[00m Nov 23 04:59:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/695df4bdc74354e4adda3bd672d73a76f797c2f46d48f2a7b3b826856635bc90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:59:24 localhost podman[312552]: 2025-11-23 09:59:24.414669388 +0000 UTC m=+0.181010531 container init 3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96508c68-ee4e-447a-b0f1-88c349fb74a4, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 04:59:24 localhost podman[312552]: 2025-11-23 09:59:24.424283273 +0000 UTC m=+0.190624426 container start 3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96508c68-ee4e-447a-b0f1-88c349fb74a4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:59:24 localhost dnsmasq[312570]: started, version 2.85 cachesize 150 Nov 23 04:59:24 localhost dnsmasq[312570]: DNS service limited to local subnets Nov 23 04:59:24 
localhost dnsmasq[312570]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 04:59:24 localhost dnsmasq[312570]: warning: no upstream servers configured Nov 23 04:59:24 localhost dnsmasq-dhcp[312570]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 04:59:24 localhost dnsmasq[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/addn_hosts - 0 addresses Nov 23 04:59:24 localhost dnsmasq-dhcp[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/host Nov 23 04:59:24 localhost dnsmasq-dhcp[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/opts Nov 23 04:59:24 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:24.577 262301 INFO neutron.agent.dhcp.agent [None req-57739fdd-f0f5-42a7-b845-87d986fc270f - - - - - -] DHCP configuration for ports {'6127eedd-0e92-4a1b-a951-ffa66b86d168'} is completed#033[00m Nov 23 04:59:24 localhost nova_compute[280939]: 2025-11-23 09:59:24.773 280943 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:59:24 localhost nova_compute[280939]: 2025-11-23 09:59:24.774 280943 INFO nova.compute.manager [-] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] VM Stopped (Lifecycle Event)#033[00m Nov 23 04:59:24 localhost nova_compute[280939]: 2025-11-23 09:59:24.795 280943 DEBUG nova.compute.manager [None req-dca2e92a-1b1a-4a2d-9f55-79f9538f8359 - - - - - -] [instance: 1148b5a9-4da9-491f-8952-80c4a965fe6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:59:24 localhost nova_compute[280939]: 2025-11-23 09:59:24.798 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:24 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:24.901 2 INFO neutron.agent.securitygroups_rpc [None req-dfbbbdd6-9764-4926-ad6f-603dbba55323 f30cb7ce3bac485ca16e284ef2514162 493833d8fb394637b29c3fb2052aca9c - - default default] Security group member updated ['6a5ca8fc-febe-492b-8ed6-1c2faceb11b7']#033[00m Nov 23 04:59:24 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:24.924 2 INFO neutron.agent.securitygroups_rpc [req-3634fbe3-812b-4789-a7d3-12a0d9366017 req-05979dae-61e1-4fc8-b138-901b668995d3 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['b40df903-b9f3-4a1c-8419-71383dae71f9']#033[00m Nov 23 04:59:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 225 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 1.7 KiB/s wr, 130 op/s Nov 23 04:59:26 localhost nova_compute[280939]: 2025-11-23 09:59:26.708 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 225 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 1.4 KiB/s wr, 105 op/s Nov 23 04:59:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:28 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:28.155 262301 INFO neutron.agent.dhcp.agent [-] 
Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:27Z, description=, device_id=e4e2758a-424c-4f93-94ee-a28fa7a5aa9f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3795476d-cf7d-4830-bda0-fa5c756720a6, ip_allocation=immediate, mac_address=fa:16:3e:2a:9c:d5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:21Z, description=, dns_domain=, id=96508c68-ee4e-447a-b0f1-88c349fb74a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-504945581-network, port_security_enabled=True, project_id=249e3ad2b54244c0a15cd0a2ab31d9ab, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60259, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=961, status=ACTIVE, subnets=['1b079c34-cd09-406c-8011-e23955d8cd40'], tags=[], tenant_id=249e3ad2b54244c0a15cd0a2ab31d9ab, updated_at=2025-11-23T09:59:22Z, vlan_transparent=None, network_id=96508c68-ee4e-447a-b0f1-88c349fb74a4, port_security_enabled=False, project_id=249e3ad2b54244c0a15cd0a2ab31d9ab, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1005, status=DOWN, tags=[], tenant_id=249e3ad2b54244c0a15cd0a2ab31d9ab, updated_at=2025-11-23T09:59:27Z on network 96508c68-ee4e-447a-b0f1-88c349fb74a4#033[00m Nov 23 04:59:28 localhost dnsmasq[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/addn_hosts - 1 addresses Nov 23 04:59:28 localhost dnsmasq-dhcp[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/host Nov 23 04:59:28 localhost dnsmasq-dhcp[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/opts Nov 23 04:59:28 localhost podman[312589]: 2025-11-23 09:59:28.360847713 +0000 UTC m=+0.056814217 container kill 3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96508c68-ee4e-447a-b0f1-88c349fb74a4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:59:28 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:28.554 262301 INFO neutron.agent.dhcp.agent [None req-8c413e0a-497e-48c2-85ae-bcad98b1631d - - - - - -] DHCP configuration for ports {'3795476d-cf7d-4830-bda0-fa5c756720a6'} is completed#033[00m Nov 23 04:59:29 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:29.306 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:27Z, description=, device_id=e4e2758a-424c-4f93-94ee-a28fa7a5aa9f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3795476d-cf7d-4830-bda0-fa5c756720a6, ip_allocation=immediate, 
mac_address=fa:16:3e:2a:9c:d5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:21Z, description=, dns_domain=, id=96508c68-ee4e-447a-b0f1-88c349fb74a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-504945581-network, port_security_enabled=True, project_id=249e3ad2b54244c0a15cd0a2ab31d9ab, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60259, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=961, status=ACTIVE, subnets=['1b079c34-cd09-406c-8011-e23955d8cd40'], tags=[], tenant_id=249e3ad2b54244c0a15cd0a2ab31d9ab, updated_at=2025-11-23T09:59:22Z, vlan_transparent=None, network_id=96508c68-ee4e-447a-b0f1-88c349fb74a4, port_security_enabled=False, project_id=249e3ad2b54244c0a15cd0a2ab31d9ab, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1005, status=DOWN, tags=[], tenant_id=249e3ad2b54244c0a15cd0a2ab31d9ab, updated_at=2025-11-23T09:59:27Z on network 96508c68-ee4e-447a-b0f1-88c349fb74a4#033[00m Nov 23 04:59:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 226 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 636 KiB/s rd, 44 KiB/s wr, 55 op/s Nov 23 04:59:29 localhost dnsmasq[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/addn_hosts - 1 addresses Nov 23 04:59:29 localhost dnsmasq-dhcp[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/host Nov 23 04:59:29 localhost dnsmasq-dhcp[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/opts Nov 23 04:59:29 localhost podman[312626]: 2025-11-23 09:59:29.525169954 +0000 UTC m=+0.056745434 container kill 3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96508c68-ee4e-447a-b0f1-88c349fb74a4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:59:29 localhost nova_compute[280939]: 2025-11-23 09:59:29.805 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:29 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:29.828 262301 INFO neutron.agent.dhcp.agent [None req-da11b81f-6593-4dbb-a0f7-5ca93dc364e0 - - - - - -] DHCP configuration for ports {'3795476d-cf7d-4830-bda0-fa5c756720a6'} is completed#033[00m Nov 23 04:59:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
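Editor's note on the dnsmasq "read .../addn_hosts - N addresses" lines that follow each podman "container kill" event above: this is the neutron DHCP agent's reload path. It rewrites the per-network addn_hosts/host/opts files under /var/lib/neutron/dhcp/<network_id>/ and then signals the containerized dnsmasq, which re-reads those files (dnsmasq does this on SIGHUP). A minimal sketch of that sequence, assuming the network UUID and container name from the log and a HUP signal; the MAC-less host values are invented for illustration and this is not the agent's code:

    # Sketch: how a new DHCP host entry gets picked up by the containerized
    # dnsmasq seen above. Network UUID and container name are from the log.
    import subprocess

    NET = "96508c68-ee4e-447a-b0f1-88c349fb74a4"
    ADDN_HOSTS = f"/var/lib/neutron/dhcp/{NET}/addn_hosts"
    CONTAINER = f"neutron-dnsmasq-qdhcp-{NET}"

    def add_host_and_reload(ip, hostname):
        # 1. Append the allocation to the network's addn_hosts file ...
        with open(ADDN_HOSTS, "a") as f:
            f.write(f"{ip}\t{hostname}\n")
        # 2. ... then signal dnsmasq; on SIGHUP it re-reads addn_hosts and the
        # host/opts files, producing the "read .../addn_hosts - N addresses"
        # lines in the log.
        subprocess.check_call(["podman", "kill", "--signal", "HUP", CONTAINER])

    if __name__ == "__main__":
        add_host_and_reload("10.100.0.5", "host-10-100-0-5")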
Nov 23 04:59:30 localhost podman[312648]: 2025-11-23 09:59:30.310876688 +0000 UTC m=+0.077382928 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc.) Nov 23 04:59:30 localhost podman[312648]: 2025-11-23 09:59:30.350393081 +0000 UTC m=+0.116899261 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 04:59:30 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
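Each "Started /usr/bin/podman healthcheck run <id>" / "health_status=healthy" / "exec_died" / "Deactivated successfully" group above is one healthcheck cycle: a transient systemd unit runs the container's configured check command and exits when the check completes. A minimal sketch of driving the same check by hand and reading the outcome from the exit code (container ID copied from the log; per podman's documented behaviour, exit 0 means healthy, non-zero means unhealthy):

import subprocess

# Run the configured healthcheck for the openstack_network_exporter container
# (ID copied from the log). podman exits 0 when the check passes.
cid = "0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9"
result = subprocess.run(["podman", "healthcheck", "run", cid])
print("healthy" if result.returncode == 0 else "unhealthy")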
Nov 23 04:59:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e102 do_prune osdmap full prune enabled Nov 23 04:59:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e103 e103: 6 total, 6 up, 6 in Nov 23 04:59:30 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e103: 6 total, 6 up, 6 in Nov 23 04:59:30 localhost ovn_controller[153771]: 2025-11-23T09:59:30Z|00124|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 04:59:30 localhost ovn_controller[153771]: 2025-11-23T09:59:30Z|00125|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 04:59:30 localhost ovn_controller[153771]: 2025-11-23T09:59:30Z|00126|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.562 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.565 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.586 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:30 localhost systemd[1]: tmp-crun.WmDGzb.mount: Deactivated successfully. Nov 23 04:59:30 localhost dnsmasq[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/addn_hosts - 0 addresses Nov 23 04:59:30 localhost dnsmasq-dhcp[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/host Nov 23 04:59:30 localhost dnsmasq-dhcp[312257]: read /var/lib/neutron/dhcp/79425885-b636-4c43-b7ab-b1f8779b709d/opts Nov 23 04:59:30 localhost podman[312683]: 2025-11-23 09:59:30.629902997 +0000 UTC m=+0.083934079 container kill 88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79425885-b636-4c43-b7ab-b1f8779b709d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:59:30 localhost ovn_controller[153771]: 2025-11-23T09:59:30Z|00127|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 04:59:30 localhost ovn_controller[153771]: 2025-11-23T09:59:30Z|00128|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 04:59:30 localhost ovn_controller[153771]: 2025-11-23T09:59:30Z|00129|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0 Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.782 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.800 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:30 localhost kernel: device tapb8c03e45-40 left promiscuous mode Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.805 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:30 localhost ovn_controller[153771]: 2025-11-23T09:59:30Z|00130|binding|INFO|Releasing lport b8c03e45-4095-4592-9111-02ed44b19cad from this chassis (sb_readonly=0) Nov 23 04:59:30 localhost ovn_controller[153771]: 2025-11-23T09:59:30Z|00131|binding|INFO|Setting lport b8c03e45-4095-4592-9111-02ed44b19cad down in Southbound Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.810 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:30.825 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-79425885-b636-4c43-b7ab-b1f8779b709d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79425885-b636-4c43-b7ab-b1f8779b709d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1b218a298814f81811d06e4ddeeca2f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e61ca71-2ede-46ab-977e-dbd69bc63a1f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b8c03e45-4095-4592-9111-02ed44b19cad) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:59:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:30.827 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b8c03e45-4095-4592-9111-02ed44b19cad in datapath 79425885-b636-4c43-b7ab-b1f8779b709d unbound from our chassis#033[00m Nov 23 04:59:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:30.830 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79425885-b636-4c43-b7ab-b1f8779b709d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.832 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:30 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:30.832 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[af247616-c561-42bd-a301-2c040556abef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.837 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.906 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:30 localhost nova_compute[280939]: 2025-11-23 09:59:30.908 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 226 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 719 KiB/s rd, 50 KiB/s wr, 62 op/s Nov 23 04:59:31 localhost nova_compute[280939]: 2025-11-23 09:59:31.710 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:32 localhost nova_compute[280939]: 2025-11-23 09:59:32.444 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e103 do_prune osdmap full prune enabled Nov 23 04:59:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e104 e104: 6 total, 6 up, 6 in Nov 23 04:59:32 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e104: 6 total, 6 up, 6 in Nov 23 04:59:32 localhost nova_compute[280939]: 2025-11-23 09:59:32.606 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:32 localhost nova_compute[280939]: 2025-11-23 09:59:32.627 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:32 localhost ovn_controller[153771]: 2025-11-23T09:59:32Z|00132|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 04:59:32 localhost ovn_controller[153771]: 2025-11-23T09:59:32Z|00133|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 04:59:32 localhost ovn_controller[153771]: 2025-11-23T09:59:32Z|00134|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 04:59:32 localhost nova_compute[280939]: 2025-11-23 09:59:32.647 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:32 localhost nova_compute[280939]: 2025-11-23 09:59:32.650 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:32 localhost nova_compute[280939]: 2025-11-23 09:59:32.670 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:32 localhost dnsmasq[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/addn_hosts - 0 addresses Nov 23 04:59:32 localhost dnsmasq-dhcp[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/host Nov 23 04:59:32 localhost dnsmasq-dhcp[312570]: read /var/lib/neutron/dhcp/96508c68-ee4e-447a-b0f1-88c349fb74a4/opts Nov 23 04:59:32 localhost podman[312722]: 2025-11-23 09:59:32.762129738 +0000 UTC m=+0.066650758 container kill 3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96508c68-ee4e-447a-b0f1-88c349fb74a4, 
org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2) Nov 23 04:59:32 localhost nova_compute[280939]: 2025-11-23 09:59:32.917 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:32 localhost ovn_controller[153771]: 2025-11-23T09:59:32Z|00135|binding|INFO|Releasing lport dc05ece0-3ed1-4052-ae32-94aa08bbcb4f from this chassis (sb_readonly=0) Nov 23 04:59:32 localhost kernel: device tapdc05ece0-3e left promiscuous mode Nov 23 04:59:32 localhost ovn_controller[153771]: 2025-11-23T09:59:32Z|00136|binding|INFO|Setting lport dc05ece0-3ed1-4052-ae32-94aa08bbcb4f down in Southbound Nov 23 04:59:32 localhost nova_compute[280939]: 2025-11-23 09:59:32.938 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:32 localhost nova_compute[280939]: 2025-11-23 09:59:32.940 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:33.145 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-96508c68-ee4e-447a-b0f1-88c349fb74a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96508c68-ee4e-447a-b0f1-88c349fb74a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '249e3ad2b54244c0a15cd0a2ab31d9ab', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c6d351b-9860-4db6-88bb-aef58f7ea6a9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=dc05ece0-3ed1-4052-ae32-94aa08bbcb4f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:59:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:33.147 159415 INFO neutron.agent.ovn.metadata.agent [-] Port dc05ece0-3ed1-4052-ae32-94aa08bbcb4f in datapath 96508c68-ee4e-447a-b0f1-88c349fb74a4 unbound from our chassis#033[00m Nov 23 04:59:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:33.149 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96508c68-ee4e-447a-b0f1-88c349fb74a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:59:33 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:33.150 308301 
DEBUG oslo.privsep.daemon [-] privsep: reply[6df2f86b-4397-447a-876b-6f497ce10299]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 226 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 796 KiB/s rd, 55 KiB/s wr, 68 op/s Nov 23 04:59:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e104 do_prune osdmap full prune enabled Nov 23 04:59:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e105 e105: 6 total, 6 up, 6 in Nov 23 04:59:33 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e105: 6 total, 6 up, 6 in Nov 23 04:59:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 04:59:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 04:59:34 localhost podman[312762]: 2025-11-23 09:59:34.63718627 +0000 UTC m=+0.093867284 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3) Nov 23 04:59:34 localhost podman[312762]: 2025-11-23 09:59:34.6505193 +0000 UTC m=+0.107200414 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 04:59:34 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 04:59:34 localhost systemd[1]: tmp-crun.s3AZJO.mount: Deactivated successfully. Nov 23 04:59:34 localhost podman[312764]: 2025-11-23 09:59:34.744774945 +0000 UTC m=+0.197462116 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 04:59:34 localhost podman[312764]: 2025-11-23 09:59:34.761583601 +0000 UTC m=+0.214270712 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 04:59:34 localhost systemd[1]: 
a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 04:59:34 localhost nova_compute[280939]: 2025-11-23 09:59:34.850 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:35 localhost dnsmasq[312257]: exiting on receipt of SIGTERM Nov 23 04:59:35 localhost podman[312853]: 2025-11-23 09:59:35.194033424 +0000 UTC m=+0.065670288 container kill 88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79425885-b636-4c43-b7ab-b1f8779b709d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:59:35 localhost systemd[1]: libpod-88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22.scope: Deactivated successfully. Nov 23 04:59:35 localhost podman[312869]: 2025-11-23 09:59:35.277050504 +0000 UTC m=+0.070000721 container died 88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79425885-b636-4c43-b7ab-b1f8779b709d, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:59:35 localhost podman[312869]: 2025-11-23 09:59:35.307439777 +0000 UTC m=+0.100389964 container cleanup 88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79425885-b636-4c43-b7ab-b1f8779b709d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2) Nov 23 04:59:35 localhost systemd[1]: libpod-conmon-88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22.scope: Deactivated successfully. 
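The config_data dictionaries logged for ceilometer_agent_compute and podman_exporter above ('image', 'restart', 'user', 'privileged', 'ports', 'net', 'environment', 'healthcheck', 'volumes', ...) are the edpm_ansible container definitions. A rough, hypothetical sketch of how such a dictionary could map onto podman run options; the actual role logic differs, and this covers only a subset of the keys visible in the log:

# Hypothetical helper (not the edpm_ansible role code): translate an
# edpm-style config_data dict into a podman run argument list.
def podman_args(name, cfg):
    args = ["podman", "run", "--detach", "--name", name]
    if cfg.get("privileged"):
        args.append("--privileged")
    if cfg.get("net"):
        args += ["--network", cfg["net"]]
    if cfg.get("restart"):
        args += ["--restart", cfg["restart"]]
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    for port in cfg.get("ports", []):
        args += ["--publish", port]
    for key, value in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    args.append(cfg["image"])
    # 'command' appears in the log both as a list and as a single string.
    command = cfg.get("command") or []
    if isinstance(command, str):
        command = [command]
    args += command
    return args

For example, feeding the podman_exporter config_data from the log through podman_args("podman_exporter", config_data) would reproduce its 9882:9882 port publication and the podman.sock / healthcheck volume mounts.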
Nov 23 04:59:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 307 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 198 op/s Nov 23 04:59:35 localhost podman[312872]: 2025-11-23 09:59:35.37068187 +0000 UTC m=+0.145234222 container remove 88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79425885-b636-4c43-b7ab-b1f8779b709d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 04:59:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 04:59:35 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 04:59:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 04:59:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:59:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 04:59:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:59:35 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 445ce265-cde7-4275-9519-1d57a6af854c (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:59:35 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 445ce265-cde7-4275-9519-1d57a6af854c (Updating node-proxy deployment (+3 -> 3)) Nov 23 04:59:35 localhost ceph-mgr[286671]: [progress INFO root] Completed event 445ce265-cde7-4275-9519-1d57a6af854c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 04:59:35 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:35.401 262301 INFO neutron.agent.dhcp.agent [None req-74182c08-2ef2-4ab4-a25f-518a505e0633 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 04:59:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 04:59:35 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 04:59:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e105 do_prune osdmap full prune enabled Nov 23 04:59:35 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 04:59:35 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:59:35 localhost ceph-mon[293353]: 
mon.np0005532584@0(leader).osd e106 e106: 6 total, 6 up, 6 in Nov 23 04:59:35 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e106: 6 total, 6 up, 6 in Nov 23 04:59:35 localhost systemd[1]: var-lib-containers-storage-overlay-18ce08d112e50bd1c0db5816e3bbbd0ab22081b68a61f655319b07599cd17d46-merged.mount: Deactivated successfully. Nov 23 04:59:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88a8d502fbe46cafe3c5b5ea1c9454477a48bb39b41aced4e656e82156e54a22-userdata-shm.mount: Deactivated successfully. Nov 23 04:59:35 localhost systemd[1]: run-netns-qdhcp\x2d79425885\x2db636\x2d4c43\x2db7ab\x2db1f8779b709d.mount: Deactivated successfully. Nov 23 04:59:35 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:35.921 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 04:59:36 localhost nova_compute[280939]: 2025-11-23 09:59:36.714 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:36 localhost openstack_network_exporter[241732]: ERROR 09:59:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 04:59:36 localhost openstack_network_exporter[241732]: ERROR 09:59:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:59:36 localhost openstack_network_exporter[241732]: ERROR 09:59:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 04:59:36 localhost openstack_network_exporter[241732]: ERROR 09:59:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 04:59:36 localhost openstack_network_exporter[241732]: Nov 23 04:59:36 localhost openstack_network_exporter[241732]: ERROR 09:59:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 04:59:36 localhost openstack_network_exporter[241732]: Nov 23 04:59:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 307 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 198 op/s Nov 23 04:59:37 localhost dnsmasq[312570]: exiting on receipt of SIGTERM Nov 23 04:59:37 localhost podman[312945]: 2025-11-23 09:59:37.380087979 +0000 UTC m=+0.061045007 container kill 3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96508c68-ee4e-447a-b0f1-88c349fb74a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 04:59:37 localhost systemd[1]: libpod-3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813.scope: Deactivated successfully. 
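The openstack_network_exporter errors above ("no control socket files found for the ovs db server", "no control socket files found for ovn-northd") come from the exporter probing unixctl control sockets that are not present on this node; ovn-northd does not run on a compute host, so its socket is legitimately absent. A small sketch of the same kind of probe, assuming the conventional runtime socket locations that the exporter's volume mounts in the log point at (/run/openvswitch and /run/ovn):

import glob

# Look for OVS/OVN unixctl control sockets in their conventional runtime
# directories (assumed defaults; adjust if the deployment relocates them).
for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
    sockets = glob.glob(pattern)
    if sockets:
        print(pattern, "->", sockets)
    else:
        print(pattern, "-> no control socket files found")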
Nov 23 04:59:37 localhost podman[312957]: 2025-11-23 09:59:37.448536771 +0000 UTC m=+0.056141776 container died 3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96508c68-ee4e-447a-b0f1-88c349fb74a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 04:59:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e106 do_prune osdmap full prune enabled Nov 23 04:59:37 localhost podman[312957]: 2025-11-23 09:59:37.481301017 +0000 UTC m=+0.088905972 container cleanup 3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96508c68-ee4e-447a-b0f1-88c349fb74a4, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:59:37 localhost systemd[1]: libpod-conmon-3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813.scope: Deactivated successfully. Nov 23 04:59:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e107 e107: 6 total, 6 up, 6 in Nov 23 04:59:37 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e107: 6 total, 6 up, 6 in Nov 23 04:59:37 localhost podman[312959]: 2025-11-23 09:59:37.52565486 +0000 UTC m=+0.125171516 container remove 3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96508c68-ee4e-447a-b0f1-88c349fb74a4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:59:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:37.572 262301 INFO neutron.agent.dhcp.agent [None req-bb7d7fd7-ff17-4b2a-bcde-06001a2ca2fe - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 04:59:37 localhost nova_compute[280939]: 2025-11-23 09:59:37.658 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Acquiring lock "8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:37 localhost nova_compute[280939]: 2025-11-23 09:59:37.659 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 
e5a4b2286a4a475887ed51bb4020d980 - - default default] Lock "8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" acquired by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:37 localhost nova_compute[280939]: 2025-11-23 09:59:37.660 280943 INFO nova.compute.manager [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Unshelving#033[00m Nov 23 04:59:37 localhost nova_compute[280939]: 2025-11-23 09:59:37.760 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:37 localhost nova_compute[280939]: 2025-11-23 09:59:37.761 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:37 localhost nova_compute[280939]: 2025-11-23 09:59:37.764 280943 DEBUG nova.objects.instance [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:59:37 localhost nova_compute[280939]: 2025-11-23 09:59:37.777 280943 DEBUG nova.objects.instance [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:59:37 localhost nova_compute[280939]: 2025-11-23 09:59:37.788 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Nov 23 04:59:37 localhost nova_compute[280939]: 2025-11-23 09:59:37.789 280943 INFO nova.compute.claims [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Claim successful on node np0005532584.localdomain#033[00m Nov 23 04:59:37 localhost nova_compute[280939]: 2025-11-23 09:59:37.910 280943 DEBUG oslo_concurrency.processutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:59:38 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:38.233 2 INFO neutron.agent.securitygroups_rpc [None req-0fc01b46-65ca-4975-99ff-e6e4d0974af8 32512604c08f4fa48e6e985a3f6cd6d1 79509bc833494f3598e01347dc55dea9 - - default default] Security group member updated ['cfab2162-6afe-48a0-9f05-cee7f160244c']#033[00m Nov 23 04:59:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:59:38 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3217501975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:59:38 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:38.351 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 04:59:38 localhost nova_compute[280939]: 2025-11-23 09:59:38.356 280943 DEBUG oslo_concurrency.processutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:59:38 localhost nova_compute[280939]: 2025-11-23 09:59:38.362 280943 DEBUG nova.compute.provider_tree [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:59:38 localhost systemd[1]: var-lib-containers-storage-overlay-695df4bdc74354e4adda3bd672d73a76f797c2f46d48f2a7b3b826856635bc90-merged.mount: Deactivated successfully. Nov 23 04:59:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e8be4e8273699429ca2f535f05a1f62f138ecb6a5321edf5c68934a6de16813-userdata-shm.mount: Deactivated successfully. Nov 23 04:59:38 localhost systemd[1]: run-netns-qdhcp\x2d96508c68\x2dee4e\x2d447a\x2db0f1\x2d88c349fb74a4.mount: Deactivated successfully. 
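During the unshelve claim above, nova_compute shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size its RBD-backed DISK_GB inventory. A minimal sketch of the same query and of reading the cluster-wide totals from the JSON (field names as exposed by current Ceph releases; only an approximation of what nova's rbd_utils does with the pool statistics):

import json
import subprocess

# Same command nova_compute runs above; parse the cluster-wide totals.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
stats = json.loads(out)["stats"]
total_gb = stats["total_bytes"] / 1024 ** 3
avail_gb = stats["total_avail_bytes"] / 1024 ** 3
print(f"{avail_gb:.0f} GiB free of {total_gb:.0f} GiB")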
Nov 23 04:59:38 localhost nova_compute[280939]: 2025-11-23 09:59:38.385 280943 DEBUG nova.scheduler.client.report [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:59:38 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 04:59:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 04:59:38 localhost nova_compute[280939]: 2025-11-23 09:59:38.411 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:38 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:59:38 localhost nova_compute[280939]: 2025-11-23 09:59:38.523 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Acquiring lock "refresh_cache-8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:59:38 localhost nova_compute[280939]: 2025-11-23 09:59:38.524 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Acquired lock "refresh_cache-8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:59:38 localhost nova_compute[280939]: 2025-11-23 09:59:38.524 280943 DEBUG nova.network.neutron [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Nov 23 04:59:38 localhost nova_compute[280939]: 2025-11-23 09:59:38.589 280943 DEBUG nova.network.neutron [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Nov 23 04:59:38 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.157 280943 DEBUG nova.network.neutron [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.183 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Releasing lock "refresh_cache-8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.185 280943 DEBUG nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.185 280943 INFO nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Creating image(s)#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.224 280943 DEBUG nova.storage.rbd_utils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] rbd image 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.229 280943 DEBUG nova.objects.instance [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.279 280943 DEBUG nova.storage.rbd_utils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] rbd image 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:59:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 226 MiB data, 891 MiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 295 op/s Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.355 280943 DEBUG nova.storage.rbd_utils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] rbd image 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.360 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Acquiring lock "27fd3a562038c818df444066f8a6441ee21cc7dd" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.361 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lock "27fd3a562038c818df444066f8a6441ee21cc7dd" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.366 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.397 280943 DEBUG nova.virt.libvirt.imagebackend [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Image locations are: [{'url': 'rbd://46550e70-79cb-5f55-bf6d-1204b97e083b/images/b6d724dc-26d8-4b53-bc02-990c8b280c9a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://46550e70-79cb-5f55-bf6d-1204b97e083b/images/b6d724dc-26d8-4b53-bc02-990c8b280c9a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.480 280943 DEBUG nova.virt.libvirt.imagebackend [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Selected location: {'url': 'rbd://46550e70-79cb-5f55-bf6d-1204b97e083b/images/b6d724dc-26d8-4b53-bc02-990c8b280c9a/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.481 280943 DEBUG nova.storage.rbd_utils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] cloning images/b6d724dc-26d8-4b53-bc02-990c8b280c9a@snap to None/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.664 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lock "27fd3a562038c818df444066f8a6441ee21cc7dd" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.860 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.868 280943 DEBUG nova.objects.instance [None 
req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lazy-loading 'migration_context' on Instance uuid 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:59:39 localhost nova_compute[280939]: 2025-11-23 09:59:39.958 280943 DEBUG nova.storage.rbd_utils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] flattening vms/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Nov 23 04:59:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e107 do_prune osdmap full prune enabled Nov 23 04:59:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e108 e108: 6 total, 6 up, 6 in Nov 23 04:59:40 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e108: 6 total, 6 up, 6 in Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.719 280943 DEBUG nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Image rbd:vms/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.720 280943 DEBUG nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.721 280943 DEBUG nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Ensure instance console log exists: /var/lib/nova/instances/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.721 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.722 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.722 280943 DEBUG oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] 
Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.725 280943 DEBUG nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-23T09:59:16Z,direct_url=,disk_format='raw',id=b6d724dc-26d8-4b53-bc02-990c8b280c9a,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-2005076685-shelved',owner='37a58b702f564a81ab5a59cf4201b4f0',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-11-23T09:59:34Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'device_type': 'disk', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'size': 0, 'image_id': 'c5806483-57a8-4254-b41b-254b888c8606'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.730 280943 WARNING nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.732 280943 DEBUG nova.virt.libvirt.host [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Searching host: 'np0005532584.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.733 280943 DEBUG nova.virt.libvirt.host [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.735 280943 DEBUG nova.virt.libvirt.host [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Searching host: 'np0005532584.localdomain' for CPU controller through CGroups V2... 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.736 280943 DEBUG nova.virt.libvirt.host [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.736 280943 DEBUG nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.737 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T09:56:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43b374b4-75d9-47f9-aa6b-ddb1a45f7c04',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-23T09:59:16Z,direct_url=,disk_format='raw',id=b6d724dc-26d8-4b53-bc02-990c8b280c9a,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-2005076685-shelved',owner='37a58b702f564a81ab5a59cf4201b4f0',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-11-23T09:59:34Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.737 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.738 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.738 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.738 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.738 280943 DEBUG 
nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.739 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.739 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.739 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.740 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.740 280943 DEBUG nova.virt.hardware [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.740 280943 DEBUG nova.objects.instance [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:59:40 localhost nova_compute[280939]: 2025-11-23 09:59:40.769 280943 DEBUG oslo_concurrency.processutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:59:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 04:59:41 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/561663991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.271 280943 DEBUG oslo_concurrency.processutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.305 280943 DEBUG nova.storage.rbd_utils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] rbd image 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.310 280943 DEBUG oslo_concurrency.processutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:59:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 226 MiB data, 891 MiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 4.7 KiB/s wr, 96 op/s Nov 23 04:59:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e108 do_prune osdmap full prune enabled Nov 23 04:59:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e109 e109: 6 total, 6 up, 6 in Nov 23 04:59:41 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e109: 6 total, 6 up, 6 in Nov 23 04:59:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 04:59:41 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/694137683' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.718 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.731 280943 DEBUG oslo_concurrency.processutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.733 280943 DEBUG nova.objects.instance [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.760 280943 DEBUG nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] End _get_guest_xml xml= [libvirt domain XML not preserved in this capture; surviving values: uuid 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7, name instance-00000009, memory 131072 KiB, 1 vCPU, display name tempest-UnshelveToHostMultiNodesTest-server-2005076685, project tempest-UnshelveToHostMultiNodesTest-612486733, sysinfo RDO OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9, machine type hvm, rng backend /dev/urandom] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.821 280943 DEBUG nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.822 280943 DEBUG nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.823 280943 INFO nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Using config drive#033[00m Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.863 280943 DEBUG nova.storage.rbd_utils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] rbd image 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.907 280943 DEBUG nova.objects.instance [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:59:41 localhost nova_compute[280939]: 2025-11-23 09:59:41.986 280943 DEBUG nova.objects.instance [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lazy-loading 'keypairs' on Instance uuid 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.082 280943 INFO nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Creating config drive at /var/lib/nova/instances/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7/disk.config#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.087 280943 DEBUG oslo_concurrency.processutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 -
- default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdu9syanr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.218 280943 DEBUG oslo_concurrency.processutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdu9syanr" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.260 280943 DEBUG nova.storage.rbd_utils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] rbd image 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.265 280943 DEBUG oslo_concurrency.processutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7/disk.config 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:59:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e109 do_prune osdmap full prune enabled Nov 23 04:59:42 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:42.482 262301 INFO neutron.agent.linux.ip_lib [None req-bd317611-8195-4139-ae4d-a26e2942a918 - - - - - -] Device tapbbb0c303-03 cannot be used as it has no MAC address#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.486 280943 DEBUG oslo_concurrency.processutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7/disk.config 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.488 280943 INFO nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Deleting local config drive /var/lib/nova/instances/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7/disk.config because it was imported into RBD.#033[00m Nov 23 04:59:42 localhost ceph-mon[293353]: 
mon.np0005532584@0(leader).osd e110 e110: 6 total, 6 up, 6 in Nov 23 04:59:42 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e110: 6 total, 6 up, 6 in Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.558 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:42 localhost kernel: device tapbbb0c303-03 entered promiscuous mode Nov 23 04:59:42 localhost NetworkManager[5966]: [1763891982.5669] manager: (tapbbb0c303-03): new Generic device (/org/freedesktop/NetworkManager/Devices/27) Nov 23 04:59:42 localhost ovn_controller[153771]: 2025-11-23T09:59:42Z|00137|binding|INFO|Claiming lport bbb0c303-03ab-4253-908f-047baa611770 for this chassis. Nov 23 04:59:42 localhost ovn_controller[153771]: 2025-11-23T09:59:42Z|00138|binding|INFO|bbb0c303-03ab-4253-908f-047baa611770: Claiming unknown Nov 23 04:59:42 localhost systemd-udevd[313355]: Network interface NamePolicy= disabled on kernel command line. Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.573 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:42 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:42.578 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2e2f7f2c0054b9bb54cb70bfb2267e5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba523834-7b93-4800-9f1e-51517bdea478, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bbb0c303-03ab-4253-908f-047baa611770) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:59:42 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:42.579 159415 INFO neutron.agent.ovn.metadata.agent [-] Port bbb0c303-03ab-4253-908f-047baa611770 in datapath 05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2 bound to our chassis#033[00m Nov 23 04:59:42 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:42.580 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port ed5763f4-8184-433a-a958-5431bf1bd14f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 04:59:42 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:42.580 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:59:42 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:42.581 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[869a2ad3-9d61-42a4-b14f-aa4e09341213]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:42 localhost journal[229336]: ethtool ioctl error on tapbbb0c303-03: No such device Nov 23 04:59:42 localhost ovn_controller[153771]: 2025-11-23T09:59:42Z|00139|binding|INFO|Setting lport bbb0c303-03ab-4253-908f-047baa611770 ovn-installed in OVS Nov 23 04:59:42 localhost ovn_controller[153771]: 2025-11-23T09:59:42Z|00140|binding|INFO|Setting lport bbb0c303-03ab-4253-908f-047baa611770 up in Southbound Nov 23 04:59:42 localhost journal[229336]: ethtool ioctl error on tapbbb0c303-03: No such device Nov 23 04:59:42 localhost systemd-machined[202731]: New machine qemu-4-instance-00000009. Nov 23 04:59:42 localhost journal[229336]: ethtool ioctl error on tapbbb0c303-03: No such device Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.609 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:42 localhost journal[229336]: ethtool ioctl error on tapbbb0c303-03: No such device Nov 23 04:59:42 localhost journal[229336]: ethtool ioctl error on tapbbb0c303-03: No such device Nov 23 04:59:42 localhost journal[229336]: ethtool ioctl error on tapbbb0c303-03: No such device Nov 23 04:59:42 localhost systemd[1]: Started Virtual Machine qemu-4-instance-00000009. Nov 23 04:59:42 localhost journal[229336]: ethtool ioctl error on tapbbb0c303-03: No such device Nov 23 04:59:42 localhost journal[229336]: ethtool ioctl error on tapbbb0c303-03: No such device Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.636 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.664 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.939 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.940 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] VM Resumed (Lifecycle Event)#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.943 280943 DEBUG nova.compute.manager [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.943 280943 DEBUG nova.virt.libvirt.driver [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Guest created on hypervisor spawn 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.947 280943 INFO nova.virt.libvirt.driver [-] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Instance spawned successfully.#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.962 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.965 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.993 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.994 280943 DEBUG nova.virt.driver [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 04:59:42 localhost nova_compute[280939]: 2025-11-23 09:59:42.994 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] VM Started (Lifecycle Event)#033[00m Nov 23 04:59:43 localhost nova_compute[280939]: 2025-11-23 09:59:43.012 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:59:43 localhost nova_compute[280939]: 2025-11-23 09:59:43.016 280943 DEBUG nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 23 04:59:43 localhost nova_compute[280939]: 2025-11-23 09:59:43.037 280943 INFO nova.compute.manager [None req-03a02a65-86d7-49c0-b496-9713a012f3d9 - - - - - -] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Nov 23 04:59:43 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:43.334 2 INFO neutron.agent.securitygroups_rpc [None req-6713e12c-736e-4c48-95b8-a64782f68ffc 32512604c08f4fa48e6e985a3f6cd6d1 79509bc833494f3598e01347dc55dea9 - - default default] Security group member updated ['cfab2162-6afe-48a0-9f05-cee7f160244c']#033[00m Nov 23 04:59:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 226 MiB data, 891 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 4.8 KiB/s wr, 98 op/s Nov 23 04:59:43 localhost podman[313480]: Nov 23 04:59:43 localhost podman[313480]: 2025-11-23 09:59:43.502839361 +0000 UTC m=+0.118459671 container create af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 04:59:43 localhost nova_compute[280939]: 2025-11-23 09:59:43.516 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:43 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:43.516 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:59:43 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:43.518 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 04:59:43 localhost podman[313480]: 2025-11-23 09:59:43.442202648 +0000 UTC m=+0.057822978 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 04:59:43 localhost systemd[1]: Started libpod-conmon-af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d.scope. Nov 23 04:59:43 localhost systemd[1]: Started libcrun container. 
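
A note on the sync_power_state entries just above: the compute manager compares the power state recorded in the database (4) with what libvirt reports for the guest (1) and skips the sync while a task is still pending ("During sync_power_state the instance has a pending task (spawning). Skip."). The following is a minimal Python sketch of that decision only, not Nova's actual code; the numeric values 1 and 4 are taken from the log lines themselves, and the names RUNNING/SHUTDOWN are assumed to mirror Nova's power_state enumeration.

# Sketch of the "Synchronizing instance power state after lifecycle event" check.
# 1 (assumed RUNNING) and 4 (assumed SHUTDOWN) are the values shown in the log.
RUNNING = 1
SHUTDOWN = 4

def should_sync(db_power_state, vm_power_state, task_state):
    """Return True when the DB record should be updated to match the hypervisor."""
    if task_state is not None:
        # Mirrors: "the instance has a pending task (spawning). Skip."
        print("pending task (%s), skipping sync" % task_state)
        return False
    return db_power_state != vm_power_state

# The situation logged at 09:59:42-09:59:43: DB says 4, libvirt says 1,
# but task_state is still 'spawning', so no sync happens yet.
print(should_sync(SHUTDOWN, RUNNING, "spawning"))  # False
print(should_sync(SHUTDOWN, RUNNING, None))        # True
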
Nov 23 04:59:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb367043772ff4083c54a170564eeb7eb0c38258a01084fdf12325548e8c2ee0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:59:43 localhost podman[313480]: 2025-11-23 09:59:43.597141366 +0000 UTC m=+0.212761676 container init af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0) Nov 23 04:59:43 localhost podman[313480]: 2025-11-23 09:59:43.610734714 +0000 UTC m=+0.226355024 container start af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 04:59:43 localhost dnsmasq[313498]: started, version 2.85 cachesize 150 Nov 23 04:59:43 localhost dnsmasq[313498]: DNS service limited to local subnets Nov 23 04:59:43 localhost dnsmasq[313498]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 04:59:43 localhost dnsmasq[313498]: warning: no upstream servers configured Nov 23 04:59:43 localhost dnsmasq-dhcp[313498]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 04:59:43 localhost dnsmasq[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/addn_hosts - 0 addresses Nov 23 04:59:43 localhost dnsmasq-dhcp[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/host Nov 23 04:59:43 localhost dnsmasq-dhcp[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/opts Nov 23 04:59:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e110 do_prune osdmap full prune enabled Nov 23 04:59:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e111 e111: 6 total, 6 up, 6 in Nov 23 04:59:43 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e111: 6 total, 6 up, 6 in Nov 23 04:59:43 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:43.810 262301 INFO neutron.agent.dhcp.agent [None req-7c1fcfbf-c987-4026-8fe8-7d717d461ab0 - - - - - -] DHCP configuration for ports {'3642f83e-7828-4286-b89a-7e9d6630754f'} is completed#033[00m Nov 23 04:59:44 localhost nova_compute[280939]: 2025-11-23 09:59:44.339 280943 DEBUG nova.compute.manager [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 04:59:44 localhost nova_compute[280939]: 2025-11-23 09:59:44.426 280943 DEBUG 
oslo_concurrency.lockutils [None req-bdaf83d7-d6c2-41ad-b559-d434640e18d8 25162d9a482a4cd38c99a8e94bc63cc3 e5a4b2286a4a475887ed51bb4020d980 - - default default] Lock "8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" "released" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: held 6.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:44 localhost nova_compute[280939]: 2025-11-23 09:59:44.886 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 226 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 422 op/s Nov 23 04:59:45 localhost nova_compute[280939]: 2025-11-23 09:59:45.696 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Acquiring lock "8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:45 localhost nova_compute[280939]: 2025-11-23 09:59:45.697 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Lock "8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:45 localhost nova_compute[280939]: 2025-11-23 09:59:45.697 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Acquiring lock "8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:45 localhost nova_compute[280939]: 2025-11-23 09:59:45.698 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Lock "8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:45 localhost nova_compute[280939]: 2025-11-23 09:59:45.698 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Lock "8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:45 localhost nova_compute[280939]: 2025-11-23 09:59:45.700 280943 INFO nova.compute.manager [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Terminating instance#033[00m Nov 23 04:59:45 localhost 
nova_compute[280939]: 2025-11-23 09:59:45.701 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Acquiring lock "refresh_cache-8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 23 04:59:45 localhost nova_compute[280939]: 2025-11-23 09:59:45.702 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Acquired lock "refresh_cache-8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 23 04:59:45 localhost nova_compute[280939]: 2025-11-23 09:59:45.702 280943 DEBUG nova.network.neutron [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Nov 23 04:59:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 04:59:45 localhost nova_compute[280939]: 2025-11-23 09:59:45.823 280943 DEBUG nova.network.neutron [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Nov 23 04:59:45 localhost podman[313499]: 2025-11-23 09:59:45.921768868 +0000 UTC m=+0.105110739 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent) Nov 23 04:59:45 localhost podman[313499]: 2025-11-23 09:59:45.932504988 +0000 UTC m=+0.115846929 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Nov 23 04:59:45 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 04:59:46 localhost nova_compute[280939]: 2025-11-23 09:59:46.161 280943 DEBUG nova.network.neutron [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:59:46 localhost nova_compute[280939]: 2025-11-23 09:59:46.184 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Releasing lock "refresh_cache-8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 23 04:59:46 localhost nova_compute[280939]: 2025-11-23 09:59:46.185 280943 DEBUG nova.compute.manager [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Nov 23 04:59:46 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Deactivated successfully. 
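
The oslo_concurrency.lockutils entries throughout this log record how long each lock was waited on and held (for example "held 6.767s" for do_unshelve_instance above, and "held 2.249s" for do_terminate_instance later in the log). The sketch below is a small, self-contained Python helper for pulling those timings out of a journal extract like this one; the regular expression is an assumption written against the line shapes visible here, not a format guaranteed by oslo.

import re

# Matches lockutils lines such as:
#   Lock "x" acquired by "f" :: waited 0.001s
#   Lock "x" "released" by "f" :: held 6.767s
LOCK_RE = re.compile(
    r'Lock "(?P<name>[^"]+)" "?(?P<event>acquired|released)"? by '
    r'"(?P<func>[^"]+)" :: (?P<kind>waited|held) (?P<secs>[\d.]+)s'
)

def lock_timings(lines):
    """Yield (lock name, function, waited/held, seconds) from journal text."""
    for line in lines:
        for m in LOCK_RE.finditer(line):
            yield (m.group("name"), m.group("func"),
                   m.group("kind"), float(m.group("secs")))

# Example taken from an entry earlier in this log.
sample = ('Lock "8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" "released" by '
          '"nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" '
          ':: held 6.767s')
for name, func, kind, secs in lock_timings([sample]):
    print("%s %.3fs on %s (%s)" % (kind, secs, name, func))
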
Nov 23 04:59:46 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Consumed 3.753s CPU time. Nov 23 04:59:46 localhost systemd-machined[202731]: Machine qemu-4-instance-00000009 terminated. Nov 23 04:59:46 localhost nova_compute[280939]: 2025-11-23 09:59:46.407 280943 INFO nova.virt.libvirt.driver [-] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Instance destroyed successfully.#033[00m Nov 23 04:59:46 localhost nova_compute[280939]: 2025-11-23 09:59:46.408 280943 DEBUG nova.objects.instance [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Lazy-loading 'resources' on Instance uuid 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 23 04:59:46 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:46.520 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 04:59:46 localhost nova_compute[280939]: 2025-11-23 09:59:46.718 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.056 280943 INFO nova.virt.libvirt.driver [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Deleting instance files /var/lib/nova/instances/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_del#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.057 280943 INFO nova.virt.libvirt.driver [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Deletion of /var/lib/nova/instances/8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7_del complete#033[00m Nov 23 04:59:47 localhost podman[239764]: time="2025-11-23T09:59:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 04:59:47 localhost podman[239764]: @ - - [23/Nov/2025:09:59:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156321 "" "Go-http-client/1.1" Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.132 280943 INFO nova.compute.manager [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.133 280943 DEBUG oslo.service.loopingcall [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.133 280943 DEBUG nova.compute.manager [-] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.134 280943 DEBUG nova.network.neutron [-] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Nov 23 04:59:47 localhost podman[239764]: @ - - [23/Nov/2025:09:59:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19198 "" "Go-http-client/1.1" Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.183 280943 DEBUG nova.network.neutron [-] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.202 280943 DEBUG nova.network.neutron [-] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.217 280943 INFO nova.compute.manager [-] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Took 0.08 seconds to deallocate network for instance.#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.262 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.263 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.319 280943 DEBUG oslo_concurrency.processutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 04:59:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 226 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 9.2 MiB/s rd, 7.8 MiB/s wr, 329 op/s Nov 23 04:59:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e111 do_prune osdmap full prune enabled Nov 23 04:59:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 e112: 6 total, 6 
up, 6 in Nov 23 04:59:47 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e112: 6 total, 6 up, 6 in Nov 23 04:59:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 04:59:47 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1107415024' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.786 280943 DEBUG oslo_concurrency.processutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.794 280943 DEBUG nova.compute.provider_tree [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.810 280943 DEBUG nova.scheduler.client.report [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.831 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.859 280943 INFO nova.scheduler.client.report [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Deleted allocations for instance 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7#033[00m Nov 23 04:59:47 localhost nova_compute[280939]: 2025-11-23 09:59:47.946 280943 DEBUG oslo_concurrency.lockutils [None req-effd4a65-b2dc-483e-a9b4-add0ff49e912 55581f20ed8d4bd8a61a81c525ca8141 37a58b702f564a81ab5a59cf4201b4f0 - - default default] Lock "8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 2.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 04:59:48 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:48.030 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, 
allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:47Z, description=, device_id=41b7469d-e467-4911-9b99-bab5a0773a8f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c2c6a9b0-0962-44f6-9e88-011f46e905b4, ip_allocation=immediate, mac_address=fa:16:3e:98:20:06, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:40Z, description=, dns_domain=, id=05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-DeleteServersTestJSON-2128764113-network, port_security_enabled=True, project_id=f2e2f7f2c0054b9bb54cb70bfb2267e5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6770, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1053, status=ACTIVE, subnets=['70014f65-591c-4502-9099-8d2ac9e80bb8'], tags=[], tenant_id=f2e2f7f2c0054b9bb54cb70bfb2267e5, updated_at=2025-11-23T09:59:40Z, vlan_transparent=None, network_id=05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, port_security_enabled=False, project_id=f2e2f7f2c0054b9bb54cb70bfb2267e5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1108, status=DOWN, tags=[], tenant_id=f2e2f7f2c0054b9bb54cb70bfb2267e5, updated_at=2025-11-23T09:59:47Z on network 05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2#033[00m Nov 23 04:59:48 localhost podman[313579]: 2025-11-23 09:59:48.243704556 +0000 UTC m=+0.059023174 container kill af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Nov 23 04:59:48 localhost dnsmasq[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/addn_hosts - 1 addresses Nov 23 04:59:48 localhost dnsmasq-dhcp[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/host Nov 23 04:59:48 localhost dnsmasq-dhcp[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/opts Nov 23 04:59:48 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:48.496 262301 INFO neutron.agent.dhcp.agent [None req-4c155a61-d812-461b-9c82-80c96aef46bf - - - - - -] DHCP configuration for ports {'c2c6a9b0-0962-44f6-9e88-011f46e905b4'} is completed#033[00m Nov 23 04:59:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 10 MiB/s rd, 6.8 MiB/s wr, 409 op/s Nov 23 04:59:49 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:49.573 2 INFO neutron.agent.securitygroups_rpc [None req-061cdcce-87b3-4fab-8b64-8613c3b5bd77 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 04:59:49 localhost nova_compute[280939]: 2025-11-23 09:59:49.911 280943 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:50 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:50.738 2 INFO neutron.agent.securitygroups_rpc [None req-5782673d-3ad2-4525-b0b7-33b67eb33956 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 04:59:51 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:51.233 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:47Z, description=, device_id=41b7469d-e467-4911-9b99-bab5a0773a8f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c2c6a9b0-0962-44f6-9e88-011f46e905b4, ip_allocation=immediate, mac_address=fa:16:3e:98:20:06, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:40Z, description=, dns_domain=, id=05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-DeleteServersTestJSON-2128764113-network, port_security_enabled=True, project_id=f2e2f7f2c0054b9bb54cb70bfb2267e5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6770, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1053, status=ACTIVE, subnets=['70014f65-591c-4502-9099-8d2ac9e80bb8'], tags=[], tenant_id=f2e2f7f2c0054b9bb54cb70bfb2267e5, updated_at=2025-11-23T09:59:40Z, vlan_transparent=None, network_id=05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, port_security_enabled=False, project_id=f2e2f7f2c0054b9bb54cb70bfb2267e5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1108, status=DOWN, tags=[], tenant_id=f2e2f7f2c0054b9bb54cb70bfb2267e5, updated_at=2025-11-23T09:59:47Z on network 05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2#033[00m Nov 23 04:59:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 351 op/s Nov 23 04:59:51 localhost dnsmasq[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/addn_hosts - 1 addresses Nov 23 04:59:51 localhost dnsmasq-dhcp[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/host Nov 23 04:59:51 localhost systemd[1]: tmp-crun.9sKtJR.mount: Deactivated successfully. 
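The resource tracker entry above reports this node's placement inventory as a plain dict (MEMORY_MB, VCPU, DISK_GB, each with total/reserved/allocation_ratio). As a minimal, illustrative sketch (not part of the captured log), the schedulable capacity follows from those reported values as (total - reserved) * allocation_ratio:

# ---- illustrative sketch, not part of the log ----
# Effective capacity as placement derives it from the inventory logged above:
#   (total - reserved) * allocation_ratio
inventory = {
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 16.0},
    "DISK_GB": {"total": 41, "reserved": 1, "allocation_ratio": 1.0},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40
# ---- end sketch ----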
Nov 23 04:59:51 localhost dnsmasq-dhcp[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/opts Nov 23 04:59:51 localhost podman[313617]: 2025-11-23 09:59:51.413093295 +0000 UTC m=+0.045217879 container kill af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:59:51 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:51.673 262301 INFO neutron.agent.dhcp.agent [None req-d1929c67-3c1f-40bc-9be5-634072e2d2a8 - - - - - -] DHCP configuration for ports {'c2c6a9b0-0962-44f6-9e88-011f46e905b4'} is completed#033[00m Nov 23 04:59:51 localhost nova_compute[280939]: 2025-11-23 09:59:51.721 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:52 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:52.636 2 INFO neutron.agent.securitygroups_rpc [None req-633bd2af-73f5-42be-a8e1-16475aa1b324 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 04:59:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 04:59:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 04:59:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
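The podman `container kill` events on the neutron-dnsmasq-qdhcp-05ebc5e9-... container, each followed by dnsmasq re-reading its addn_hosts/host/opts files, are the DHCP agent signalling the per-network dnsmasq so it reloads allocations. A minimal sketch of the same nudge, with the container name taken from the log; HUP is dnsmasq's standard reload signal and the exact podman invocation is assumed here (in this deployment the agent does this itself):

# ---- illustrative sketch, not part of the log ----
import subprocess

# Send SIGHUP to the per-network dnsmasq container so it re-reads its
# addn_hosts/host/opts files, mirroring the "container kill" events above.
network_id = "05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2"   # from the log
container = f"neutron-dnsmasq-qdhcp-{network_id}"
subprocess.run(["podman", "kill", "--signal", "HUP", container], check=True)
# ---- end sketch ----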
Nov 23 04:59:52 localhost podman[313638]: 2025-11-23 09:59:52.909690143 +0000 UTC m=+0.090537923 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3) Nov 23 04:59:52 localhost podman[313640]: 2025-11-23 09:59:52.952315932 +0000 UTC m=+0.126847667 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2) Nov 23 04:59:53 localhost systemd[1]: tmp-crun.BdrAvN.mount: Deactivated successfully. 
Nov 23 04:59:53 localhost podman[313639]: 2025-11-23 09:59:53.012268373 +0000 UTC m=+0.191378589 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 04:59:53 localhost podman[313640]: 2025-11-23 09:59:53.016448412 +0000 UTC m=+0.190980137 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 23 04:59:53 localhost podman[313638]: 2025-11-23 09:59:53.027975016 +0000 UTC m=+0.208822736 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0) Nov 23 04:59:53 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 04:59:53 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 04:59:53 localhost podman[313639]: 2025-11-23 09:59:53.055411829 +0000 UTC m=+0.234522045 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 04:59:53 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
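Each `Started /usr/bin/podman healthcheck run <id>` unit above runs a container health probe; podman then records `health_status=healthy` together with an `exec_died` event, and the transient unit deactivates. The same probe can be run by hand, as a sketch (container names from the log); a zero exit status corresponds to the healthy result logged above:

# ---- illustrative sketch, not part of the log ----
import subprocess

# Run the same probe podman's healthcheck timer runs; rc 0 == healthy.
for name in ("multipathd", "ovn_controller", "node_exporter"):
    result = subprocess.run(["podman", "healthcheck", "run", name])
    status = "healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})"
    print(name, status)
# ---- end sketch ----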
Nov 23 04:59:53 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:53.174 2 INFO neutron.agent.securitygroups_rpc [None req-33fb7598-4629-4242-b064-9d05bdc1e723 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 04:59:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 7.3 MiB/s rd, 4.9 MiB/s wr, 291 op/s Nov 23 04:59:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:59:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:59:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:59:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:59:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 04:59:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 04:59:53 localhost ovn_controller[153771]: 2025-11-23T09:59:53Z|00141|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 04:59:53 localhost ovn_controller[153771]: 2025-11-23T09:59:53Z|00142|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 04:59:53 localhost ovn_controller[153771]: 2025-11-23T09:59:53Z|00143|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0 Nov 23 04:59:53 localhost nova_compute[280939]: 2025-11-23 09:59:53.504 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:53 localhost nova_compute[280939]: 2025-11-23 09:59:53.523 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:53 localhost nova_compute[280939]: 2025-11-23 09:59:53.530 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:53 localhost nova_compute[280939]: 2025-11-23 09:59:53.538 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:53 localhost nova_compute[280939]: 2025-11-23 09:59:53.550 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:53 localhost nova_compute[280939]: 2025-11-23 09:59:53.578 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:54 localhost snmpd[67609]: empty variable list in _query Nov 23 04:59:54 localhost snmpd[67609]: empty variable list in _query Nov 23 04:59:54 localhost nova_compute[280939]: 2025-11-23 09:59:54.945 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:55 localhost nova_compute[280939]: 2025-11-23 09:59:55.218 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:55 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:55.251 262301 INFO neutron.agent.linux.ip_lib [None req-364d5c3e-90a4-4af6-820d-e8498d897aac - - - - - -] Device tapcc39d86a-56 cannot be used as 
it has no MAC address#033[00m Nov 23 04:59:55 localhost nova_compute[280939]: 2025-11-23 09:59:55.273 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:55 localhost kernel: device tapcc39d86a-56 entered promiscuous mode Nov 23 04:59:55 localhost NetworkManager[5966]: [1763891995.2819] manager: (tapcc39d86a-56): new Generic device (/org/freedesktop/NetworkManager/Devices/28) Nov 23 04:59:55 localhost ovn_controller[153771]: 2025-11-23T09:59:55Z|00144|binding|INFO|Claiming lport cc39d86a-562a-496f-bb57-78315ac8f6ac for this chassis. Nov 23 04:59:55 localhost ovn_controller[153771]: 2025-11-23T09:59:55Z|00145|binding|INFO|cc39d86a-562a-496f-bb57-78315ac8f6ac: Claiming unknown Nov 23 04:59:55 localhost systemd-udevd[313716]: Network interface NamePolicy= disabled on kernel command line. Nov 23 04:59:55 localhost nova_compute[280939]: 2025-11-23 09:59:55.284 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:55 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:55.298 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-033d8290-1897-4498-84d2-be9ceedec80f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-033d8290-1897-4498-84d2-be9ceedec80f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ffd5730cfc54429a6af666c4ae63fe7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ab7703a-ee52-4da3-b8bf-3579b6ae6a65, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=cc39d86a-562a-496f-bb57-78315ac8f6ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:59:55 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:55.300 159415 INFO neutron.agent.ovn.metadata.agent [-] Port cc39d86a-562a-496f-bb57-78315ac8f6ac in datapath 033d8290-1897-4498-84d2-be9ceedec80f bound to our chassis#033[00m Nov 23 04:59:55 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:55.303 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port f5105594-21f4-4057-9457-930077cb8c66 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 04:59:55 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:55.303 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 033d8290-1897-4498-84d2-be9ceedec80f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:59:55 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:55.304 
308301 DEBUG oslo.privsep.daemon [-] privsep: reply[58457141-aa79-4b92-9837-c6176c9e97e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:55 localhost journal[229336]: ethtool ioctl error on tapcc39d86a-56: No such device Nov 23 04:59:55 localhost journal[229336]: ethtool ioctl error on tapcc39d86a-56: No such device Nov 23 04:59:55 localhost ovn_controller[153771]: 2025-11-23T09:59:55Z|00146|binding|INFO|Setting lport cc39d86a-562a-496f-bb57-78315ac8f6ac ovn-installed in OVS Nov 23 04:59:55 localhost ovn_controller[153771]: 2025-11-23T09:59:55Z|00147|binding|INFO|Setting lport cc39d86a-562a-496f-bb57-78315ac8f6ac up in Southbound Nov 23 04:59:55 localhost nova_compute[280939]: 2025-11-23 09:59:55.326 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:55 localhost journal[229336]: ethtool ioctl error on tapcc39d86a-56: No such device Nov 23 04:59:55 localhost journal[229336]: ethtool ioctl error on tapcc39d86a-56: No such device Nov 23 04:59:55 localhost journal[229336]: ethtool ioctl error on tapcc39d86a-56: No such device Nov 23 04:59:55 localhost journal[229336]: ethtool ioctl error on tapcc39d86a-56: No such device Nov 23 04:59:55 localhost journal[229336]: ethtool ioctl error on tapcc39d86a-56: No such device Nov 23 04:59:55 localhost nova_compute[280939]: 2025-11-23 09:59:55.346 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:55 localhost nova_compute[280939]: 2025-11-23 09:59:55.350 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:55 localhost journal[229336]: ethtool ioctl error on tapcc39d86a-56: No such device Nov 23 04:59:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 1.6 MiB/s rd, 1.4 KiB/s wr, 83 op/s Nov 23 04:59:55 localhost nova_compute[280939]: 2025-11-23 09:59:55.363 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:55 localhost nova_compute[280939]: 2025-11-23 09:59:55.391 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:55 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:55.424 2 INFO neutron.agent.securitygroups_rpc [None req-4a1b8c07-dec2-4711-92de-c07233183ccc 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 04:59:56 localhost podman[313787]: Nov 23 04:59:56 localhost podman[313787]: 2025-11-23 09:59:56.22297296 +0000 UTC m=+0.087430537 container create 9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-033d8290-1897-4498-84d2-be9ceedec80f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS 
Stream 9 Base Image) Nov 23 04:59:56 localhost systemd[1]: Started libpod-conmon-9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d.scope. Nov 23 04:59:56 localhost podman[313787]: 2025-11-23 09:59:56.180231617 +0000 UTC m=+0.044689244 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 04:59:56 localhost systemd[1]: Started libcrun container. Nov 23 04:59:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d371333f2bcd245727a7d34f26bd6f25857a5d4f1db991e69172ad50d6eb78e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 04:59:56 localhost podman[313787]: 2025-11-23 09:59:56.300967706 +0000 UTC m=+0.165425283 container init 9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-033d8290-1897-4498-84d2-be9ceedec80f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118) Nov 23 04:59:56 localhost podman[313787]: 2025-11-23 09:59:56.309709754 +0000 UTC m=+0.174167331 container start 9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-033d8290-1897-4498-84d2-be9ceedec80f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 04:59:56 localhost dnsmasq[313803]: started, version 2.85 cachesize 150 Nov 23 04:59:56 localhost dnsmasq[313803]: DNS service limited to local subnets Nov 23 04:59:56 localhost dnsmasq[313803]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 04:59:56 localhost dnsmasq[313803]: warning: no upstream servers configured Nov 23 04:59:56 localhost dnsmasq-dhcp[313803]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 04:59:56 localhost dnsmasq[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/addn_hosts - 0 addresses Nov 23 04:59:56 localhost dnsmasq-dhcp[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/host Nov 23 04:59:56 localhost dnsmasq-dhcp[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/opts Nov 23 04:59:56 localhost neutron_dhcp_agent[262297]: 2025-11-23 09:59:56.444 262301 INFO neutron.agent.dhcp.agent [None req-a6b95018-1a97-47e2-99f9-72a079756714 - - - - - -] DHCP configuration for ports {'044d6e5f-a9c9-4243-8db1-16ebd29616dc'} is completed#033[00m Nov 23 04:59:56 localhost nova_compute[280939]: 2025-11-23 09:59:56.724 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 1.6 MiB/s rd, 1.4 KiB/s wr, 83 
op/s Nov 23 04:59:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 04:59:57 localhost neutron_sriov_agent[255165]: 2025-11-23 09:59:57.983 2 INFO neutron.agent.securitygroups_rpc [None req-d239cbef-5e7b-4e18-8195-2f02667a16df 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 04:59:58 localhost ovn_controller[153771]: 2025-11-23T09:59:58Z|00148|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 04:59:58 localhost ovn_controller[153771]: 2025-11-23T09:59:58Z|00149|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 04:59:58 localhost ovn_controller[153771]: 2025-11-23T09:59:58Z|00150|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 04:59:58 localhost nova_compute[280939]: 2025-11-23 09:59:58.082 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:58 localhost nova_compute[280939]: 2025-11-23 09:59:58.084 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:58 localhost nova_compute[280939]: 2025-11-23 09:59:58.107 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:58 localhost dnsmasq[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/addn_hosts - 0 addresses Nov 23 04:59:58 localhost dnsmasq-dhcp[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/host Nov 23 04:59:58 localhost podman[313821]: 2025-11-23 09:59:58.228078557 +0000 UTC m=+0.058983593 container kill af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 04:59:58 localhost dnsmasq-dhcp[313498]: read /var/lib/neutron/dhcp/05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2/opts Nov 23 04:59:58 localhost nova_compute[280939]: 2025-11-23 09:59:58.388 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:58 localhost ovn_controller[153771]: 2025-11-23T09:59:58Z|00151|binding|INFO|Releasing lport bbb0c303-03ab-4253-908f-047baa611770 from this chassis (sb_readonly=0) Nov 23 04:59:58 localhost ovn_controller[153771]: 2025-11-23T09:59:58Z|00152|binding|INFO|Setting lport bbb0c303-03ab-4253-908f-047baa611770 down in Southbound Nov 23 04:59:58 localhost kernel: device tapbbb0c303-03 left promiscuous mode Nov 23 04:59:58 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:58.397 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], 
up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2e2f7f2c0054b9bb54cb70bfb2267e5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba523834-7b93-4800-9f1e-51517bdea478, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bbb0c303-03ab-4253-908f-047baa611770) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 04:59:58 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:58.399 159415 INFO neutron.agent.ovn.metadata.agent [-] Port bbb0c303-03ab-4253-908f-047baa611770 in datapath 05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2 unbound from our chassis#033[00m Nov 23 04:59:58 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:58.401 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 04:59:58 localhost ovn_metadata_agent[159410]: 2025-11-23 09:59:58.402 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[5316cbf6-0494-4088-9d7d-55e7d27dab4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 04:59:58 localhost nova_compute[280939]: 2025-11-23 09:59:58.415 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 04:59:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 1.2 KiB/s wr, 70 op/s Nov 23 04:59:59 localhost nova_compute[280939]: 2025-11-23 09:59:59.975 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:00 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : overall HEALTH_OK Nov 23 05:00:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:00.134 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:59Z, description=, device_id=f7264c3d-d91d-4440-8d2f-8f70b552fb05, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4f868f2b-0c20-4763-a0d6-14e5421895b6, ip_allocation=immediate, mac_address=fa:16:3e:c2:e2:4c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:50Z, description=, dns_domain=, id=033d8290-1897-4498-84d2-be9ceedec80f, ipv4_address_scope=None, 
ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-230601146-network, port_security_enabled=True, project_id=0ffd5730cfc54429a6af666c4ae63fe7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39160, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1132, status=ACTIVE, subnets=['7b9332d4-911f-4013-b931-ded78c84f8fc'], tags=[], tenant_id=0ffd5730cfc54429a6af666c4ae63fe7, updated_at=2025-11-23T09:59:52Z, vlan_transparent=None, network_id=033d8290-1897-4498-84d2-be9ceedec80f, port_security_enabled=False, project_id=0ffd5730cfc54429a6af666c4ae63fe7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1179, status=DOWN, tags=[], tenant_id=0ffd5730cfc54429a6af666c4ae63fe7, updated_at=2025-11-23T09:59:59Z on network 033d8290-1897-4498-84d2-be9ceedec80f#033[00m Nov 23 05:00:00 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:00.272 2 INFO neutron.agent.securitygroups_rpc [None req-07e0e453-973e-4bb8-a740-58232b03adf2 e78ebdfe612745638abad47217c77d70 a40d996843764f32a4281f01703f5aee - - default default] Security group member updated ['e81e3952-d0ad-411e-a904-c021d2ed129c']#033[00m Nov 23 05:00:00 localhost dnsmasq[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/addn_hosts - 1 addresses Nov 23 05:00:00 localhost podman[313859]: 2025-11-23 10:00:00.417245177 +0000 UTC m=+0.053098282 container kill 9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-033d8290-1897-4498-84d2-be9ceedec80f, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:00:00 localhost dnsmasq-dhcp[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/host Nov 23 05:00:00 localhost dnsmasq-dhcp[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/opts Nov 23 05:00:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
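The `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` subprocess that nova_compute logged earlier (returned 0 in 0.467s) is how the resource tracker samples pool usage, and the ceph-mgr pgmap lines and the HEALTH_OK summary report the same cluster state. A small sketch of issuing that command and reading the cluster-wide stats (field names are those emitted by recent Ceph releases and are assumed, not shown in the log):

# ---- illustrative sketch, not part of the log ----
import json
import subprocess

# Same "ceph df" invocation nova_compute logs above, parsed for overall usage.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
stats = json.loads(out).get("stats", {})
print("used bytes:", stats.get("total_used_bytes"),
      "avail bytes:", stats.get("total_avail_bytes"))
# ---- end sketch ----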
Nov 23 05:00:00 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 05:00:00 localhost podman[313875]: 2025-11-23 10:00:00.53586035 +0000 UTC m=+0.082645870 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, maintainer=Red Hat, Inc.) 
Nov 23 05:00:00 localhost podman[313875]: 2025-11-23 10:00:00.577452598 +0000 UTC m=+0.124238138 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container) Nov 23 05:00:00 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
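From the config_data recorded above, node_exporter publishes on host port 9100 and openstack_network_exporter on 9105. A quick illustrative scrape of both (the /metrics path is the exporters' conventional endpoint and is assumed here; it does not appear in the log):

# ---- illustrative sketch, not part of the log ----
import urllib.request

# Fetch the Prometheus endpoints of the exporters whose healthchecks are
# logged above (host ports 9100/9105 from their config_data).
for port in (9100, 9105):
    with urllib.request.urlopen(f"http://localhost:{port}/metrics", timeout=5) as resp:
        body = resp.read().decode()
    print(f"port {port}: {len(body.splitlines())} metric lines")
# ---- end sketch ----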
Nov 23 05:00:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:00.651 262301 INFO neutron.agent.dhcp.agent [None req-fdd400b9-795f-4054-bbee-e7811e80b817 - - - - - -] DHCP configuration for ports {'4f868f2b-0c20-4763-a0d6-14e5421895b6'} is completed#033[00m Nov 23 05:00:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:01 localhost nova_compute[280939]: 2025-11-23 10:00:01.405 280943 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 23 05:00:01 localhost nova_compute[280939]: 2025-11-23 10:00:01.406 280943 INFO nova.compute.manager [-] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] VM Stopped (Lifecycle Event)#033[00m Nov 23 05:00:01 localhost nova_compute[280939]: 2025-11-23 10:00:01.438 280943 DEBUG nova.compute.manager [None req-fac0ff2e-5a5a-48d3-a181-8b92bcfce410 - - - - - -] [instance: 8a8ddd35-7dc7-40a2-9524-7a0b8fec8ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 23 05:00:01 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:01.664 2 INFO neutron.agent.securitygroups_rpc [None req-68228748-d1d1-4e93-958e-faf2dd4d659b 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:00:01 localhost nova_compute[280939]: 2025-11-23 10:00:01.726 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:02 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:02.102 2 INFO neutron.agent.securitygroups_rpc [None req-ba51cecc-c6ac-46c9-99a7-bae53245da97 e78ebdfe612745638abad47217c77d70 a40d996843764f32a4281f01703f5aee - - default default] Security group member updated ['e81e3952-d0ad-411e-a904-c021d2ed129c']#033[00m Nov 23 05:00:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:02 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:02.739 2 INFO neutron.agent.securitygroups_rpc [None req-be23f1ad-34a8-40d2-b634-8fb333cbd4a1 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:00:03 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:03.233 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:59Z, description=, device_id=f7264c3d-d91d-4440-8d2f-8f70b552fb05, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4f868f2b-0c20-4763-a0d6-14e5421895b6, ip_allocation=immediate, mac_address=fa:16:3e:c2:e2:4c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:50Z, description=, dns_domain=, id=033d8290-1897-4498-84d2-be9ceedec80f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-230601146-network, port_security_enabled=True, 
project_id=0ffd5730cfc54429a6af666c4ae63fe7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39160, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1132, status=ACTIVE, subnets=['7b9332d4-911f-4013-b931-ded78c84f8fc'], tags=[], tenant_id=0ffd5730cfc54429a6af666c4ae63fe7, updated_at=2025-11-23T09:59:52Z, vlan_transparent=None, network_id=033d8290-1897-4498-84d2-be9ceedec80f, port_security_enabled=False, project_id=0ffd5730cfc54429a6af666c4ae63fe7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1179, status=DOWN, tags=[], tenant_id=0ffd5730cfc54429a6af666c4ae63fe7, updated_at=2025-11-23T09:59:59Z on network 033d8290-1897-4498-84d2-be9ceedec80f#033[00m Nov 23 05:00:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:03 localhost dnsmasq[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/addn_hosts - 1 addresses Nov 23 05:00:03 localhost dnsmasq-dhcp[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/host Nov 23 05:00:03 localhost dnsmasq-dhcp[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/opts Nov 23 05:00:03 localhost podman[313918]: 2025-11-23 10:00:03.455096835 +0000 UTC m=+0.060470698 container kill 9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-033d8290-1897-4498-84d2-be9ceedec80f, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 05:00:03 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:03.727 262301 INFO neutron.agent.dhcp.agent [None req-d8936c07-d18c-474d-9cf4-9af6ac7b9f54 - - - - - -] DHCP configuration for ports {'4f868f2b-0c20-4763-a0d6-14e5421895b6'} is completed#033[00m Nov 23 05:00:04 localhost dnsmasq[313498]: exiting on receipt of SIGTERM Nov 23 05:00:04 localhost systemd[1]: libpod-af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d.scope: Deactivated successfully. 
Nov 23 05:00:04 localhost podman[313955]: 2025-11-23 10:00:04.270197341 +0000 UTC m=+0.060329614 container kill af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 23 05:00:04 localhost podman[313969]: 2025-11-23 10:00:04.33654552 +0000 UTC m=+0.057080805 container died af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Nov 23 05:00:04 localhost systemd[1]: tmp-crun.0dRQB7.mount: Deactivated successfully. Nov 23 05:00:04 localhost podman[313969]: 2025-11-23 10:00:04.388594368 +0000 UTC m=+0.109129603 container cleanup af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2) Nov 23 05:00:04 localhost systemd[1]: libpod-conmon-af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d.scope: Deactivated successfully. Nov 23 05:00:04 localhost podman[313974]: 2025-11-23 10:00:04.413154562 +0000 UTC m=+0.123555756 container remove af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05ebc5e9-e2c4-4c73-bbdf-03f89c2452c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:00:04 localhost systemd[1]: var-lib-containers-storage-overlay-cb367043772ff4083c54a170564eeb7eb0c38258a01084fdf12325548e8c2ee0-merged.mount: Deactivated successfully. Nov 23 05:00:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af7b7aff59077d3fa527bb3f8693157275dc24e3f307120f0bc410421574e60d-userdata-shm.mount: Deactivated successfully. 
Nov 23 05:00:04 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:04.574 262301 INFO neutron.agent.dhcp.agent [None req-448d7739-e664-4adb-944f-3d3b79bc43da - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:00:04 localhost systemd[1]: run-netns-qdhcp\x2d05ebc5e9\x2de2c4\x2d4c73\x2dbbdf\x2d03f89c2452c2.mount: Deactivated successfully. Nov 23 05:00:04 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:04.575 262301 INFO neutron.agent.dhcp.agent [None req-448d7739-e664-4adb-944f-3d3b79bc43da - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:00:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:00:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:00:04 localhost podman[314000]: 2025-11-23 10:00:04.88546727 +0000 UTC m=+0.075364636 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 05:00:04 localhost podman[314000]: 2025-11-23 10:00:04.900353897 +0000 UTC m=+0.090251233 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 05:00:04 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:00:04 localhost podman[314001]: 2025-11-23 10:00:04.942346807 +0000 UTC m=+0.129253091 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:00:04 localhost podman[314001]: 2025-11-23 10:00:04.952040144 +0000 UTC m=+0.138946478 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 05:00:04 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:04.956 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:00:04 localhost 
systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 05:00:05 localhost nova_compute[280939]: 2025-11-23 10:00:05.013 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:05 localhost nova_compute[280939]: 2025-11-23 10:00:05.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:00:05 localhost nova_compute[280939]: 2025-11-23 10:00:05.209 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:06 localhost openstack_network_exporter[241732]: ERROR 10:00:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:00:06 localhost openstack_network_exporter[241732]: ERROR 10:00:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:00:06 localhost openstack_network_exporter[241732]: ERROR 10:00:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:00:06 localhost openstack_network_exporter[241732]: ERROR 10:00:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:00:06 localhost openstack_network_exporter[241732]: Nov 23 05:00:06 localhost openstack_network_exporter[241732]: ERROR 10:00:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:00:06 localhost openstack_network_exporter[241732]: Nov 23 05:00:06 localhost nova_compute[280939]: 2025-11-23 10:00:06.731 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:06 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:06.731 2 INFO neutron.agent.securitygroups_rpc [None req-43a1a170-ce3b-4775-9849-731eb3e4f92f 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:00:07 localhost ovn_controller[153771]: 2025-11-23T10:00:07Z|00153|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 05:00:07 localhost ovn_controller[153771]: 2025-11-23T10:00:07Z|00154|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 05:00:07 localhost ovn_controller[153771]: 2025-11-23T10:00:07Z|00155|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0 Nov 23 05:00:07 localhost nova_compute[280939]: 2025-11-23 10:00:07.198 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:07 localhost nova_compute[280939]: 2025-11-23 10:00:07.215 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:07 localhost nova_compute[280939]: 2025-11-23 10:00:07.221 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Nov 23 05:00:07 localhost nova_compute[280939]: 2025-11-23 10:00:07.262 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:07 localhost nova_compute[280939]: 2025-11-23 10:00:07.280 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:07 localhost nova_compute[280939]: 2025-11-23 10:00:07.290 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:08 localhost nova_compute[280939]: 2025-11-23 10:00:08.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:00:08 localhost nova_compute[280939]: 2025-11-23 10:00:08.234 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:08 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:08.401 2 INFO neutron.agent.securitygroups_rpc [None req-c2842295-e0a9-464d-87c0-40db2e234814 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:00:08 localhost nova_compute[280939]: 2025-11-23 10:00:08.930 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:09 localhost nova_compute[280939]: 2025-11-23 10:00:09.031 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:09 localhost nova_compute[280939]: 2025-11-23 10:00:09.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:00:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:09 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:09.645 262301 INFO neutron.agent.linux.ip_lib [None req-c234c290-10f6-4e66-a022-2fdb89bc0e13 - - - - - -] Device tapf7fd4566-f8 cannot be used as it has no MAC address#033[00m Nov 23 05:00:09 localhost nova_compute[280939]: 2025-11-23 10:00:09.698 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:09 localhost kernel: device tapf7fd4566-f8 entered promiscuous mode Nov 23 05:00:09 localhost ovn_controller[153771]: 2025-11-23T10:00:09Z|00156|binding|INFO|Claiming lport f7fd4566-f8dc-4ab8-9360-2afb4e2332b7 for this chassis. 
Nov 23 05:00:09 localhost ovn_controller[153771]: 2025-11-23T10:00:09Z|00157|binding|INFO|f7fd4566-f8dc-4ab8-9360-2afb4e2332b7: Claiming unknown Nov 23 05:00:09 localhost NetworkManager[5966]: [1763892009.7051] manager: (tapf7fd4566-f8): new Generic device (/org/freedesktop/NetworkManager/Devices/29) Nov 23 05:00:09 localhost nova_compute[280939]: 2025-11-23 10:00:09.704 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:09 localhost systemd-udevd[314052]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:00:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:09.717 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-63ace423-7c2d-4197-9bcb-40fc875ebd4e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-63ace423-7c2d-4197-9bcb-40fc875ebd4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79509bc833494f3598e01347dc55dea9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d1715-6928-44f1-92a4-6f425d872e90, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f7fd4566-f8dc-4ab8-9360-2afb4e2332b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:00:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:09.720 159415 INFO neutron.agent.ovn.metadata.agent [-] Port f7fd4566-f8dc-4ab8-9360-2afb4e2332b7 in datapath 63ace423-7c2d-4197-9bcb-40fc875ebd4e bound to our chassis#033[00m Nov 23 05:00:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:09.722 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 63ace423-7c2d-4197-9bcb-40fc875ebd4e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:00:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:09.723 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa73414-49e0-494a-a7a6-116d2b67e906]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:00:09 localhost journal[229336]: ethtool ioctl error on tapf7fd4566-f8: No such device Nov 23 05:00:09 localhost journal[229336]: ethtool ioctl error on tapf7fd4566-f8: No such device Nov 23 05:00:09 localhost ovn_controller[153771]: 2025-11-23T10:00:09Z|00158|binding|INFO|Setting lport f7fd4566-f8dc-4ab8-9360-2afb4e2332b7 ovn-installed in OVS Nov 23 05:00:09 localhost ovn_controller[153771]: 2025-11-23T10:00:09Z|00159|binding|INFO|Setting lport f7fd4566-f8dc-4ab8-9360-2afb4e2332b7 up in Southbound Nov 23 05:00:09 localhost nova_compute[280939]: 2025-11-23 10:00:09.741 280943 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:09.742 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:00:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:09.743 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:00:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:09.743 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:00:09 localhost journal[229336]: ethtool ioctl error on tapf7fd4566-f8: No such device Nov 23 05:00:09 localhost journal[229336]: ethtool ioctl error on tapf7fd4566-f8: No such device Nov 23 05:00:09 localhost journal[229336]: ethtool ioctl error on tapf7fd4566-f8: No such device Nov 23 05:00:09 localhost journal[229336]: ethtool ioctl error on tapf7fd4566-f8: No such device Nov 23 05:00:09 localhost journal[229336]: ethtool ioctl error on tapf7fd4566-f8: No such device Nov 23 05:00:09 localhost journal[229336]: ethtool ioctl error on tapf7fd4566-f8: No such device Nov 23 05:00:09 localhost nova_compute[280939]: 2025-11-23 10:00:09.778 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:09 localhost nova_compute[280939]: 2025-11-23 10:00:09.804 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:10 localhost nova_compute[280939]: 2025-11-23 10:00:10.014 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:10 localhost nova_compute[280939]: 2025-11-23 10:00:10.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:00:10 localhost nova_compute[280939]: 2025-11-23 10:00:10.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:00:10 localhost nova_compute[280939]: 2025-11-23 10:00:10.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:00:10 localhost nova_compute[280939]: 2025-11-23 10:00:10.149 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:00:11 localhost podman[314123]: Nov 23 05:00:11 localhost podman[314123]: 2025-11-23 10:00:11.105305253 +0000 UTC m=+0.078693168 container create 544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63ace423-7c2d-4197-9bcb-40fc875ebd4e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 05:00:11 localhost nova_compute[280939]: 2025-11-23 10:00:11.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:00:11 localhost nova_compute[280939]: 2025-11-23 10:00:11.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:00:11 localhost nova_compute[280939]: 2025-11-23 10:00:11.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:00:11 localhost systemd[1]: Started libpod-conmon-544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46.scope. Nov 23 05:00:11 localhost systemd[1]: tmp-crun.H4kSaf.mount: Deactivated successfully. Nov 23 05:00:11 localhost systemd[1]: Started libcrun container. 
Nov 23 05:00:11 localhost podman[314123]: 2025-11-23 10:00:11.070231955 +0000 UTC m=+0.043619900 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:00:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e288d634b1857ee903b6ec1c482164368e06a2ff848167f761bcc67e846d5154/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:00:11 localhost podman[314123]: 2025-11-23 10:00:11.185523026 +0000 UTC m=+0.158910981 container init 544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63ace423-7c2d-4197-9bcb-40fc875ebd4e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:00:11 localhost podman[314123]: 2025-11-23 10:00:11.194195633 +0000 UTC m=+0.167583558 container start 544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63ace423-7c2d-4197-9bcb-40fc875ebd4e, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:00:11 localhost dnsmasq[314141]: started, version 2.85 cachesize 150 Nov 23 05:00:11 localhost dnsmasq[314141]: DNS service limited to local subnets Nov 23 05:00:11 localhost dnsmasq[314141]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:00:11 localhost dnsmasq[314141]: warning: no upstream servers configured Nov 23 05:00:11 localhost dnsmasq-dhcp[314141]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:00:11 localhost dnsmasq[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/addn_hosts - 0 addresses Nov 23 05:00:11 localhost dnsmasq-dhcp[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/host Nov 23 05:00:11 localhost dnsmasq-dhcp[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/opts Nov 23 05:00:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:11 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:11.430 262301 INFO neutron.agent.dhcp.agent [None req-2f825541-e4fb-4623-8afb-3dd3c8d6f283 - - - - - -] DHCP configuration for ports {'7408e187-79e0-4fc8-9796-c7957126f6d0'} is completed#033[00m Nov 23 05:00:11 localhost nova_compute[280939]: 2025-11-23 10:00:11.732 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:12 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:12.049 2 INFO neutron.agent.securitygroups_rpc [None req-311a0dd8-e6e7-491c-ad02-2876db83aabe 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] 
Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:00:12 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 05:00:12 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 3248 writes, 26K keys, 3248 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.08 MB/s#012Cumulative WAL: 3248 writes, 3248 syncs, 1.00 writes per sync, written: 0.05 GB, 0.08 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3248 writes, 26K keys, 3248 commit groups, 1.0 writes per commit group, ingest: 48.14 MB, 0.08 MB/s#012Interval WAL: 3248 writes, 3248 syncs, 1.00 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 190.6 0.18 0.08 12 0.015 0 0 0.0 0.0#012 L6 1/0 16.10 MB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 5.2 226.9 206.1 0.87 0.47 11 0.079 130K 5553 0.0 0.0#012 Sum 1/0 16.10 MB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 6.2 188.1 203.4 1.05 0.56 23 0.046 130K 5553 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 6.2 188.6 204.0 1.05 0.56 22 0.048 130K 5553 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 0.0 226.9 206.1 0.87 0.47 11 0.079 130K 5553 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 193.9 0.18 0.08 11 0.016 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.033, interval 0.033#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.21 GB write, 0.36 MB/s write, 0.19 GB read, 0.33 MB/s read, 1.1 seconds#012Interval compaction: 0.21 GB write, 0.36 MB/s write, 0.19 GB read, 0.33 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556054b3b350#2 capacity: 308.00 MB usage: 48.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000311 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3479,47.83 MB,15.5305%) FilterBlock(23,380.80 KB,0.120738%) 
IndexBlock(23,509.52 KB,0.16155%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Nov 23 05:00:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:13 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:13.010 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:12Z, description=, device_id=5d59896a-62ac-414f-8096-d64028fde7b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=21511a53-b844-48f1-afe5-5554b5960c6f, ip_allocation=immediate, mac_address=fa:16:3e:73:e5:e8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:07Z, description=, dns_domain=, id=63ace423-7c2d-4197-9bcb-40fc875ebd4e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-117295423, port_security_enabled=True, project_id=79509bc833494f3598e01347dc55dea9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26364, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1205, status=ACTIVE, subnets=['5b8ff890-15fb-478f-a609-57e4cb451781'], tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T10:00:08Z, vlan_transparent=None, network_id=63ace423-7c2d-4197-9bcb-40fc875ebd4e, port_security_enabled=False, project_id=79509bc833494f3598e01347dc55dea9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1234, status=DOWN, tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T10:00:12Z on network 63ace423-7c2d-4197-9bcb-40fc875ebd4e#033[00m Nov 23 05:00:13 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:13.104 2 INFO neutron.agent.securitygroups_rpc [None req-e5debdc8-a6b2-4239-b0b7-2d251ec66c55 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:00:13 localhost systemd[1]: tmp-crun.6l1CQS.mount: Deactivated successfully. 
Nov 23 05:00:13 localhost dnsmasq[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/addn_hosts - 1 addresses Nov 23 05:00:13 localhost dnsmasq-dhcp[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/host Nov 23 05:00:13 localhost podman[314157]: 2025-11-23 10:00:13.256732154 +0000 UTC m=+0.069790285 container kill 544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63ace423-7c2d-4197-9bcb-40fc875ebd4e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 05:00:13 localhost dnsmasq-dhcp[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/opts Nov 23 05:00:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:13 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:13.499 262301 INFO neutron.agent.dhcp.agent [None req-e0ad7d8c-c0a0-4acd-9d50-4c97105f5218 - - - - - -] DHCP configuration for ports {'21511a53-b844-48f1-afe5-5554b5960c6f'} is completed#033[00m Nov 23 05:00:14 localhost nova_compute[280939]: 2025-11-23 10:00:14.129 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.015 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.152 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.152 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.152 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.153 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.153 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:00:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:15 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:00:15 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4139638832' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.625 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.847 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.850 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11565MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", 
"product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.850 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.850 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.923 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.924 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:00:15 localhost nova_compute[280939]: 2025-11-23 10:00:15.942 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:00:16 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:16.209 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:12Z, description=, device_id=5d59896a-62ac-414f-8096-d64028fde7b1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=21511a53-b844-48f1-afe5-5554b5960c6f, ip_allocation=immediate, mac_address=fa:16:3e:73:e5:e8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:07Z, description=, dns_domain=, id=63ace423-7c2d-4197-9bcb-40fc875ebd4e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-117295423, port_security_enabled=True, project_id=79509bc833494f3598e01347dc55dea9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26364, qos_policy_id=None, 
revision_number=2, router:external=False, shared=False, standard_attr_id=1205, status=ACTIVE, subnets=['5b8ff890-15fb-478f-a609-57e4cb451781'], tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T10:00:08Z, vlan_transparent=None, network_id=63ace423-7c2d-4197-9bcb-40fc875ebd4e, port_security_enabled=False, project_id=79509bc833494f3598e01347dc55dea9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1234, status=DOWN, tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T10:00:12Z on network 63ace423-7c2d-4197-9bcb-40fc875ebd4e#033[00m Nov 23 05:00:16 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:00:16 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1141783378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:00:16 localhost nova_compute[280939]: 2025-11-23 10:00:16.437 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:00:16 localhost nova_compute[280939]: 2025-11-23 10:00:16.444 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:00:16 localhost dnsmasq[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/addn_hosts - 1 addresses Nov 23 05:00:16 localhost dnsmasq-dhcp[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/host Nov 23 05:00:16 localhost dnsmasq-dhcp[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/opts Nov 23 05:00:16 localhost podman[314237]: 2025-11-23 10:00:16.463439967 +0000 UTC m=+0.067774232 container kill 544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63ace423-7c2d-4197-9bcb-40fc875ebd4e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:00:16 localhost nova_compute[280939]: 2025-11-23 10:00:16.464 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:00:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:00:16 localhost nova_compute[280939]: 2025-11-23 10:00:16.494 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:00:16 localhost nova_compute[280939]: 2025-11-23 10:00:16.494 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:00:16 localhost systemd[1]: tmp-crun.HhSXhB.mount: Deactivated successfully. Nov 23 05:00:16 localhost podman[314253]: 2025-11-23 10:00:16.586262 +0000 UTC m=+0.095465163 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:00:16 localhost podman[314253]: 2025-11-23 10:00:16.590036707 +0000 UTC m=+0.099239850 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 05:00:16 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:00:16 localhost nova_compute[280939]: 2025-11-23 10:00:16.734 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:16 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:16.768 262301 INFO neutron.agent.dhcp.agent [None req-2e748672-8355-4b3a-83c7-7e90e0ed1136 - - - - - -] DHCP configuration for ports {'21511a53-b844-48f1-afe5-5554b5960c6f'} is completed#033[00m Nov 23 05:00:17 localhost podman[239764]: time="2025-11-23T10:00:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:00:17 localhost podman[239764]: @ - - [23/Nov/2025:10:00:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158147 "" "Go-http-client/1.1" Nov 23 05:00:17 localhost podman[239764]: @ - - [23/Nov/2025:10:00:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19680 "" "Go-http-client/1.1" Nov 23 05:00:17 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:17.156 2 INFO neutron.agent.securitygroups_rpc [None req-eec5559f-af9d-40c0-b1ad-486b576202fe 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:00:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:17 localhost nova_compute[280939]: 2025-11-23 10:00:17.496 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:00:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:18 localhost dnsmasq[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/addn_hosts - 0 addresses Nov 23 05:00:18 localhost dnsmasq-dhcp[313803]: read 
/var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/host Nov 23 05:00:18 localhost dnsmasq-dhcp[313803]: read /var/lib/neutron/dhcp/033d8290-1897-4498-84d2-be9ceedec80f/opts Nov 23 05:00:18 localhost systemd[1]: tmp-crun.5AxobX.mount: Deactivated successfully. Nov 23 05:00:18 localhost podman[314297]: 2025-11-23 10:00:18.074891754 +0000 UTC m=+0.060685025 container kill 9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-033d8290-1897-4498-84d2-be9ceedec80f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:00:18 localhost ovn_controller[153771]: 2025-11-23T10:00:18Z|00160|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 05:00:18 localhost ovn_controller[153771]: 2025-11-23T10:00:18Z|00161|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 05:00:18 localhost ovn_controller[153771]: 2025-11-23T10:00:18Z|00162|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 05:00:18 localhost nova_compute[280939]: 2025-11-23 10:00:18.276 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:18 localhost nova_compute[280939]: 2025-11-23 10:00:18.279 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:18 localhost nova_compute[280939]: 2025-11-23 10:00:18.298 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:18 localhost nova_compute[280939]: 2025-11-23 10:00:18.875 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:18 localhost ovn_controller[153771]: 2025-11-23T10:00:18Z|00163|binding|INFO|Releasing lport cc39d86a-562a-496f-bb57-78315ac8f6ac from this chassis (sb_readonly=0) Nov 23 05:00:18 localhost ovn_controller[153771]: 2025-11-23T10:00:18Z|00164|binding|INFO|Setting lport cc39d86a-562a-496f-bb57-78315ac8f6ac down in Southbound Nov 23 05:00:18 localhost kernel: device tapcc39d86a-56 left promiscuous mode Nov 23 05:00:18 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:18.884 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-033d8290-1897-4498-84d2-be9ceedec80f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-033d8290-1897-4498-84d2-be9ceedec80f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ffd5730cfc54429a6af666c4ae63fe7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ab7703a-ee52-4da3-b8bf-3579b6ae6a65, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=cc39d86a-562a-496f-bb57-78315ac8f6ac) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:00:18 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:18.886 159415 INFO neutron.agent.ovn.metadata.agent [-] Port cc39d86a-562a-496f-bb57-78315ac8f6ac in datapath 033d8290-1897-4498-84d2-be9ceedec80f unbound from our chassis#033[00m Nov 23 05:00:18 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:18.888 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 033d8290-1897-4498-84d2-be9ceedec80f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:00:18 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:18.889 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[6c49e47e-14dc-421d-91c3-4e12dba17816]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:00:18 localhost nova_compute[280939]: 2025-11-23 10:00:18.897 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:20 localhost nova_compute[280939]: 2025-11-23 10:00:20.056 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:20 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:20.057 2 INFO neutron.agent.securitygroups_rpc [None req-0b580d26-bcb7-428b-9b48-acb40f408d24 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:00:21 localhost systemd[1]: tmp-crun.4cItBx.mount: Deactivated successfully. 
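The pair of entries just above is one complete pass through the metadata agent's OVSDB event pipeline: a Port_Binding UPDATE arrives with up=[False] and an empty chassis column, PortBindingUpdatedEvent matches it, the agent logs port cc39d86a-562a-496f-bb57-78315ac8f6ac as unbound from this chassis, and because no VIF ports remain for network 033d8290-1897-4498-84d2-be9ceedec80f it tears the metadata namespace down. The sketch below reproduces only the matching step with throwaway stand-ins; MiniPortBindingUpdatedEvent and PortBindingRow are illustrative classes, not the real ones from ovsdbapp.backend.ovs_idl.event or neutron.agent.ovn.metadata.agent.

    # Illustrative stand-ins for the row-event matching reported above; the real
    # classes live in ovsdbapp and neutron, this is only a minimal sketch.
    from dataclasses import dataclass, field

    @dataclass
    class PortBindingRow:                      # trimmed Port_Binding row
        logical_port: str
        datapath: str
        chassis: list = field(default_factory=list)
        up: list = field(default_factory=lambda: [False])

    class MiniPortBindingUpdatedEvent:
        events = ('update',)                   # only UPDATE notifications matter here
        table = 'Port_Binding'

        def matches(self, event, row, old):
            # Fire when `up` flips to False and no chassis owns the port any more,
            # which is what the UPDATE shown in the log above carries.
            return (event in self.events
                    and row.chassis == []
                    and row.up == [False]
                    and getattr(old, 'up', None) == [True])

        def run(self, event, row, old):
            print(f'Port {row.logical_port} in datapath {row.datapath} '
                  f'unbound from our chassis')

    row = PortBindingRow('cc39d86a-562a-496f-bb57-78315ac8f6ac',
                         '033d8290-1897-4498-84d2-be9ceedec80f')
    old = PortBindingRow(row.logical_port, row.datapath, up=[True])
    ev = MiniPortBindingUpdatedEvent()
    if ev.matches('update', row, old):
        ev.run('update', row, old)

The real event object in the log line also carries conditions and a priority of 20; the sketch keeps only the columns visible in this particular UPDATE.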
Nov 23 05:00:21 localhost dnsmasq[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/addn_hosts - 0 addresses Nov 23 05:00:21 localhost dnsmasq-dhcp[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/host Nov 23 05:00:21 localhost dnsmasq-dhcp[314141]: read /var/lib/neutron/dhcp/63ace423-7c2d-4197-9bcb-40fc875ebd4e/opts Nov 23 05:00:21 localhost podman[314335]: 2025-11-23 10:00:21.325336052 +0000 UTC m=+0.056512497 container kill 544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63ace423-7c2d-4197-9bcb-40fc875ebd4e, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2) Nov 23 05:00:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:21 localhost nova_compute[280939]: 2025-11-23 10:00:21.498 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:21 localhost ovn_controller[153771]: 2025-11-23T10:00:21Z|00165|binding|INFO|Releasing lport f7fd4566-f8dc-4ab8-9360-2afb4e2332b7 from this chassis (sb_readonly=0) Nov 23 05:00:21 localhost kernel: device tapf7fd4566-f8 left promiscuous mode Nov 23 05:00:21 localhost ovn_controller[153771]: 2025-11-23T10:00:21Z|00166|binding|INFO|Setting lport f7fd4566-f8dc-4ab8-9360-2afb4e2332b7 down in Southbound Nov 23 05:00:21 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:21.512 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-63ace423-7c2d-4197-9bcb-40fc875ebd4e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-63ace423-7c2d-4197-9bcb-40fc875ebd4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79509bc833494f3598e01347dc55dea9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d1715-6928-44f1-92a4-6f425d872e90, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f7fd4566-f8dc-4ab8-9360-2afb4e2332b7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:00:21 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:21.514 159415 INFO neutron.agent.ovn.metadata.agent [-] Port f7fd4566-f8dc-4ab8-9360-2afb4e2332b7 in datapath 63ace423-7c2d-4197-9bcb-40fc875ebd4e unbound from our 
chassis#033[00m Nov 23 05:00:21 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:21.516 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 63ace423-7c2d-4197-9bcb-40fc875ebd4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:00:21 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:21.517 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[28188b17-4dfe-4e09-9b1b-d001ef6d2229]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:00:21 localhost nova_compute[280939]: 2025-11-23 10:00:21.525 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:21 localhost nova_compute[280939]: 2025-11-23 10:00:21.737 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:22 localhost podman[314374]: 2025-11-23 10:00:22.796608611 +0000 UTC m=+0.049494510 container kill 9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-033d8290-1897-4498-84d2-be9ceedec80f, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 23 05:00:22 localhost dnsmasq[313803]: exiting on receipt of SIGTERM Nov 23 05:00:22 localhost systemd[1]: libpod-9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d.scope: Deactivated successfully. Nov 23 05:00:22 localhost podman[314390]: 2025-11-23 10:00:22.862115564 +0000 UTC m=+0.048225193 container died 9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-033d8290-1897-4498-84d2-be9ceedec80f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Nov 23 05:00:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d-userdata-shm.mount: Deactivated successfully. Nov 23 05:00:22 localhost systemd[1]: var-lib-containers-storage-overlay-d371333f2bcd245727a7d34f26bd6f25857a5d4f1db991e69172ad50d6eb78e4-merged.mount: Deactivated successfully. 
Nov 23 05:00:22 localhost podman[314390]: 2025-11-23 10:00:22.903287408 +0000 UTC m=+0.089396967 container remove 9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-033d8290-1897-4498-84d2-be9ceedec80f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:00:22 localhost systemd[1]: libpod-conmon-9a392660df1f1a058eeed902a30ca021e4ce97640174ab27bfd9819bcec9269d.scope: Deactivated successfully. Nov 23 05:00:22 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:22.952 262301 INFO neutron.agent.dhcp.agent [None req-b05d2fc2-f338-4171-8833-5c16c5d452db - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:00:22 localhost systemd[1]: run-netns-qdhcp\x2d033d8290\x2d1897\x2d4498\x2d84d2\x2dbe9ceedec80f.mount: Deactivated successfully. Nov 23 05:00:23 localhost dnsmasq[314141]: exiting on receipt of SIGTERM Nov 23 05:00:23 localhost podman[314432]: 2025-11-23 10:00:23.156136395 +0000 UTC m=+0.063572464 container kill 544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63ace423-7c2d-4197-9bcb-40fc875ebd4e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:00:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:00:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:00:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:00:23 localhost systemd[1]: libpod-544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46.scope: Deactivated successfully. 
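By 10:00:23 the dnsmasq side-car for network 033d8290-1897-4498-84d2-be9ceedec80f has exited on SIGTERM, its libpod scope and overlay/shm mounts are deactivated, the container is removed, the run-netns qdhcp mount is gone and the DHCP agent reports the network as not present. A quick way to confirm nothing was left behind is to list the remaining qdhcp namespaces and dnsmasq side-cars with the same tools the host uses; the helper below is an ad-hoc check written against this log, not part of any agent, and it needs to run on the compute node with enough privileges to talk to podman.

    # Ad-hoc post-teardown check: neither the qdhcp-<network> namespace nor the
    # neutron-dnsmasq-qdhcp-<network> container should survive a network delete.
    import subprocess

    NETWORK = '033d8290-1897-4498-84d2-be9ceedec80f'   # network id taken from the log

    netns = subprocess.run(['ip', 'netns', 'list'],
                           capture_output=True, text=True, check=True).stdout
    containers = subprocess.run(['podman', 'ps', '--all', '--format', '{{.Names}}'],
                                capture_output=True, text=True, check=True).stdout

    print('namespace left behind: ', f'qdhcp-{NETWORK}' in netns)
    print('container left behind: ', f'neutron-dnsmasq-qdhcp-{NETWORK}' in containers)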
Nov 23 05:00:23 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:23.217 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:00:23 localhost podman[314454]: 2025-11-23 10:00:23.267324639 +0000 UTC m=+0.087147357 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:00:23 localhost podman[314454]: 2025-11-23 10:00:23.282471405 +0000 UTC m=+0.102294173 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:00:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:00:23 Nov 23 05:00:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:00:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:00:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['volumes', 'manila_data', 'manila_metadata', 'vms', 'images', 'backups', '.mgr'] Nov 23 05:00:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:00:23 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:00:23 localhost podman[314453]: 2025-11-23 10:00:23.308452153 +0000 UTC m=+0.129861229 container died 544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63ace423-7c2d-4197-9bcb-40fc875ebd4e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:00:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:00:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:00:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:00:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:00:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
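The multipathd entries above are one full cycle of the transient health-check units that recur throughout this log: systemd starts /usr/bin/podman healthcheck run <container>, podman records health_status=healthy, the exec session dies and the unit deactivates. The same check can be run by hand; the loop below is only a thin wrapper around the podman command shown in the log (run it as a user that can see these containers), and the container names are the ones appearing in this section.

    # Re-run the health checks systemd triggers above and report their exit codes;
    # `podman healthcheck run` exits 0 when the configured check passes.
    import subprocess

    for name in ('multipathd', 'ovn_metadata_agent', 'ovn_controller', 'node_exporter'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        status = 'healthy' if rc == 0 else f'unhealthy (rc={rc})'
        print(f'{name}: {status}')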
Nov 23 05:00:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:00:23 localhost podman[314455]: 2025-11-23 10:00:23.371866041 +0000 UTC m=+0.186004305 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:00:23 localhost ceph-mgr[286671]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:00:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Nov 23 05:00:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:00:23 localhost podman[314453]: 2025-11-23 10:00:23.417245995 +0000 UTC m=+0.238655021 container remove 544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63ace423-7c2d-4197-9bcb-40fc875ebd4e, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:00:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:00:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:00:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:00:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:00:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:00:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:00:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:00:23 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:23.441 262301 INFO neutron.agent.dhcp.agent [None req-b744679a-6064-4b88-b406-9b6c91d09e41 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:00:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:00:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:00:23 localhost podman[314455]: 2025-11-23 10:00:23.465574339 +0000 UTC m=+0.279712603 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:00:23 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:00:23 localhost podman[314463]: 2025-11-23 10:00:23.480316281 +0000 UTC m=+0.290181153 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 05:00:23 localhost systemd[1]: libpod-conmon-544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46.scope: Deactivated successfully. 
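node_exporter above runs with --collector.systemd and a unit-include filter, so only a handful of systemd units from this host end up as metrics. The snippet below evaluates that filter (the pattern is copied from the config_data above, with the doubled backslash collapsed back to a single escape) against a few sample unit names; the samples are arbitrary, and a full match is used on the assumption that node_exporter anchors its include patterns.

    # Evaluate node_exporter's systemd unit-include filter from the log's config_data.
    import re

    unit_include = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')

    samples = ['edpm_nova_compute.service', 'ovs-vswitchd.service', 'virtqemud.service',
               'rsyslog.service', 'sshd.service', 'chronyd.service']
    for unit in samples:
        kept = unit_include.fullmatch(unit) is not None
        print(f'{unit}: {"exported" if kept else "filtered out"}')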
Nov 23 05:00:23 localhost podman[314463]: 2025-11-23 10:00:23.522336822 +0000 UTC m=+0.332201734 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:00:23 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:00:23 localhost systemd[1]: var-lib-containers-storage-overlay-e288d634b1857ee903b6ec1c482164368e06a2ff848167f761bcc67e846d5154-merged.mount: Deactivated successfully. Nov 23 05:00:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-544e69546e9b78beb8e62b955d4dcadba8ee1f4ef3d658d59ce5684f2c1e2e46-userdata-shm.mount: Deactivated successfully. Nov 23 05:00:23 localhost systemd[1]: run-netns-qdhcp\x2d63ace423\x2d7c2d\x2d4197\x2d9bcb\x2d40fc875ebd4e.mount: Deactivated successfully. 
Nov 23 05:00:23 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:23.829 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:00:23 localhost nova_compute[280939]: 2025-11-23 10:00:23.914 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:25 localhost nova_compute[280939]: 2025-11-23 10:00:25.091 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:26 localhost nova_compute[280939]: 2025-11-23 10:00:26.741 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:28 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:28.995 2 INFO neutron.agent.securitygroups_rpc [None req-7e066fee-86c7-4254-86ea-1e6408304d23 e02e3a6650d04cc49a5785719f724cce cb02ad4f92a44de895fb1e96459d4a8d - - default default] Security group member updated ['19c917c4-9016-402d-a3b3-6d8ae59f7b74']#033[00m Nov 23 05:00:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:29 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:29.796 2 INFO neutron.agent.securitygroups_rpc [None req-28d507eb-defd-436d-9561-198db6b19aa5 e02e3a6650d04cc49a5785719f724cce cb02ad4f92a44de895fb1e96459d4a8d - - default default] Security group member updated ['19c917c4-9016-402d-a3b3-6d8ae59f7b74']#033[00m Nov 23 05:00:30 localhost nova_compute[280939]: 2025-11-23 10:00:30.137 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
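ceph-mgr prints one pgmap debug line per tick in this window, each a compact summary of the storage cluster backing this node: 177 placement groups, all active+clean, 145 MiB of data and 41 GiB of 42 GiB available. The regex below pulls those figures out of one such line; the group names in the resulting dict are ad-hoc labels for this sketch, not field names ceph itself uses.

    # Parse one of the recurring "pgmap vNNN" debug lines into a small dict.
    import re

    line = ('pgmap v179: 177 pgs: 177 active+clean; '
            '145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail')

    pattern = re.compile(
        r'pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<by_state>[^;]+); '
        r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
        r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail')

    print(pattern.search(line).groupdict())
    # -> {'version': '179', 'pgs': '177', 'by_state': '177 active+clean',
    #     'data': '145 MiB', 'used': '750 MiB', 'avail': '41 GiB', 'total': '42 GiB'}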
Nov 23 05:00:30 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:30.863 2 INFO neutron.agent.securitygroups_rpc [None req-d4d902e4-381c-4584-8a85-837df381e7eb e02e3a6650d04cc49a5785719f724cce cb02ad4f92a44de895fb1e96459d4a8d - - default default] Security group member updated ['19c917c4-9016-402d-a3b3-6d8ae59f7b74']#033[00m Nov 23 05:00:30 localhost podman[314540]: 2025-11-23 10:00:30.891366553 +0000 UTC m=+0.080313498 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc.) 
Nov 23 05:00:30 localhost podman[314540]: 2025-11-23 10:00:30.929407801 +0000 UTC m=+0.118354716 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-type=git) Nov 23 05:00:30 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
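The exporter side-cars being health-checked here publish Prometheus metrics on the host network, on the ports listed in their config_data: node_exporter on 9100, openstack_network_exporter on 9105, and (a few entries further down) podman_exporter on 9882. The snippet below fetches the first lines from each endpoint as a quick liveness probe; the port-to-name mapping comes from the log, the conventional /metrics path is assumed, and nothing is assumed about the metric names themselves.

    # Probe the exporter endpoints whose host ports appear in config_data above.
    from urllib.request import urlopen

    exporters = {9100: 'node_exporter', 9105: 'openstack_network_exporter',
                 9882: 'podman_exporter'}
    for port, name in exporters.items():
        try:
            with urlopen(f'http://localhost:{port}/metrics', timeout=5) as resp:
                first_lines = resp.read(400).decode('utf-8', 'replace').splitlines()[:3]
            print(name, first_lines)
        except OSError as exc:                 # endpoint not reachable from here
            print(name, 'unreachable:', exc)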
Nov 23 05:00:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:31 localhost nova_compute[280939]: 2025-11-23 10:00:31.742 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:33 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:33.115 2 INFO neutron.agent.securitygroups_rpc [None req-ddca7662-4ee1-4886-8e69-87187e050157 e02e3a6650d04cc49a5785719f724cce cb02ad4f92a44de895fb1e96459d4a8d - - default default] Security group member updated ['19c917c4-9016-402d-a3b3-6d8ae59f7b74']#033[00m Nov 23 05:00:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:35 localhost nova_compute[280939]: 2025-11-23 10:00:35.173 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:00:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:00:35 localhost podman[314580]: 2025-11-23 10:00:35.866656629 +0000 UTC m=+0.083750453 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 05:00:35 localhost podman[314580]: 2025-11-23 10:00:35.905443451 +0000 UTC m=+0.122537305 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:00:35 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 05:00:35 localhost podman[314579]: 2025-11-23 10:00:35.928038745 +0000 UTC m=+0.146928325 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible) Nov 23 05:00:35 localhost podman[314579]: 2025-11-23 10:00:35.962368029 +0000 UTC m=+0.181257579 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:00:35 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:00:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:36.049 262301 INFO neutron.agent.linux.ip_lib [None req-89c479ee-23e6-4348-90f2-97d1ec56af26 - - - - - -] Device tapc998f5f4-5a cannot be used as it has no MAC address#033[00m Nov 23 05:00:36 localhost nova_compute[280939]: 2025-11-23 10:00:36.071 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:36 localhost kernel: device tapc998f5f4-5a entered promiscuous mode Nov 23 05:00:36 localhost NetworkManager[5966]: [1763892036.0816] manager: (tapc998f5f4-5a): new Generic device (/org/freedesktop/NetworkManager/Devices/30) Nov 23 05:00:36 localhost ovn_controller[153771]: 2025-11-23T10:00:36Z|00167|binding|INFO|Claiming lport c998f5f4-5ad9-4b85-914f-097dab6b3c9d for this chassis. Nov 23 05:00:36 localhost nova_compute[280939]: 2025-11-23 10:00:36.081 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:36 localhost ovn_controller[153771]: 2025-11-23T10:00:36Z|00168|binding|INFO|c998f5f4-5ad9-4b85-914f-097dab6b3c9d: Claiming unknown Nov 23 05:00:36 localhost systemd-udevd[314648]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 05:00:36 localhost journal[229336]: ethtool ioctl error on tapc998f5f4-5a: No such device Nov 23 05:00:36 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:36.112 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f8848490fb54a5cb41f1607121a115c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baa61e63-a759-4aa3-8605-4bc7bccad076, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c998f5f4-5ad9-4b85-914f-097dab6b3c9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:00:36 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:36.113 159415 INFO neutron.agent.ovn.metadata.agent [-] Port c998f5f4-5ad9-4b85-914f-097dab6b3c9d in datapath 6f8a68b2-9a2d-4a6e-af88-0c91d081d00b bound to our chassis#033[00m Nov 23 05:00:36 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:36.113 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6f8a68b2-9a2d-4a6e-af88-0c91d081d00b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:00:36 localhost nova_compute[280939]: 2025-11-23 10:00:36.114 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:36 localhost journal[229336]: ethtool ioctl error on tapc998f5f4-5a: No such device Nov 23 05:00:36 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:36.115 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[b78181cd-4e01-4e87-ad7f-7bd92e2c73a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:00:36 localhost ovn_controller[153771]: 2025-11-23T10:00:36Z|00169|binding|INFO|Setting lport c998f5f4-5ad9-4b85-914f-097dab6b3c9d ovn-installed in OVS Nov 23 05:00:36 localhost ovn_controller[153771]: 2025-11-23T10:00:36Z|00170|binding|INFO|Setting lport c998f5f4-5ad9-4b85-914f-097dab6b3c9d up in Southbound Nov 23 05:00:36 localhost nova_compute[280939]: 2025-11-23 10:00:36.120 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:36 localhost journal[229336]: ethtool ioctl error on tapc998f5f4-5a: No such device Nov 23 05:00:36 localhost journal[229336]: ethtool ioctl error on tapc998f5f4-5a: No such device Nov 23 05:00:36 localhost journal[229336]: ethtool ioctl error on tapc998f5f4-5a: No such device 
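Just above, ovn_controller claims lport c998f5f4-5ad9-4b85-914f-097dab6b3c9d for this chassis, marks it ovn-installed in OVS and up in the Southbound DB, while the metadata agent matches the corresponding Port_Binding update but finds no usable metadata port for network 6f8a68b2-9a2d-4a6e-af88-0c91d081d00b yet. When tracing a sequence like this it helps to read the same Port_Binding row straight from the Southbound database; the snippet below shells out to ovn-sbctl for that, and assumes it runs somewhere the SB DB is reachable with default connection settings (any --db or TLS options are deployment-specific and omitted).

    # Read the up/chassis columns of the Port_Binding row that was just claimed.
    import subprocess

    LPORT = 'c998f5f4-5ad9-4b85-914f-097dab6b3c9d'     # logical port id from the log

    out = subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,up,chassis',
         'find', 'Port_Binding', f'logical_port={LPORT}'],
        capture_output=True, text=True, check=True).stdout
    print(out or f'no Port_Binding row for {LPORT}')

An empty chassis column in the output would mean the claim logged above never reached the Southbound DB.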
Nov 23 05:00:36 localhost journal[229336]: ethtool ioctl error on tapc998f5f4-5a: No such device Nov 23 05:00:36 localhost journal[229336]: ethtool ioctl error on tapc998f5f4-5a: No such device Nov 23 05:00:36 localhost nova_compute[280939]: 2025-11-23 10:00:36.147 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:36 localhost journal[229336]: ethtool ioctl error on tapc998f5f4-5a: No such device Nov 23 05:00:36 localhost nova_compute[280939]: 2025-11-23 10:00:36.179 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:36 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:00:36 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:00:36 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:00:36 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:00:36 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:00:36 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:00:36 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 5e6bb1d7-1b88-413f-b896-08f9dabc9102 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:00:36 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 5e6bb1d7-1b88-413f-b896-08f9dabc9102 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:00:36 localhost ceph-mgr[286671]: [progress INFO root] Completed event 5e6bb1d7-1b88-413f-b896-08f9dabc9102 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:00:36 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:00:36 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:00:36 localhost openstack_network_exporter[241732]: ERROR 10:00:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:00:36 localhost openstack_network_exporter[241732]: ERROR 10:00:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:00:36 localhost openstack_network_exporter[241732]: ERROR 10:00:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:00:36 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:00:36 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:00:36 localhost 
openstack_network_exporter[241732]: ERROR 10:00:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:00:36 localhost openstack_network_exporter[241732]: Nov 23 05:00:36 localhost openstack_network_exporter[241732]: ERROR 10:00:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:00:36 localhost openstack_network_exporter[241732]: Nov 23 05:00:36 localhost nova_compute[280939]: 2025-11-23 10:00:36.744 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:37 localhost podman[314771]: Nov 23 05:00:37 localhost podman[314771]: 2025-11-23 10:00:37.159359424 +0000 UTC m=+0.096600997 container create 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:00:37 localhost systemd[1]: Started libpod-conmon-9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb.scope. Nov 23 05:00:37 localhost podman[314771]: 2025-11-23 10:00:37.110734611 +0000 UTC m=+0.047976224 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:00:37 localhost systemd[1]: Started libcrun container. Nov 23 05:00:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d03d6b5395b809abf402511d5cd06f0c2661d7c254922bc7328733d40449cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:00:37 localhost podman[314771]: 2025-11-23 10:00:37.241247899 +0000 UTC m=+0.178489472 container init 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:00:37 localhost podman[314771]: 2025-11-23 10:00:37.249524864 +0000 UTC m=+0.186766437 container start 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 05:00:37 localhost dnsmasq[314790]: started, version 2.85 cachesize 150 Nov 23 05:00:37 localhost dnsmasq[314790]: DNS service limited to local subnets Nov 23 05:00:37 localhost dnsmasq[314790]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP 
DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:00:37 localhost dnsmasq[314790]: warning: no upstream servers configured Nov 23 05:00:37 localhost dnsmasq-dhcp[314790]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:00:37 localhost dnsmasq[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/addn_hosts - 0 addresses Nov 23 05:00:37 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/host Nov 23 05:00:37 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/opts Nov 23 05:00:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:37.417 262301 INFO neutron.agent.dhcp.agent [None req-11f1337f-b68d-41d0-85c3-b32e5483b8a1 - - - - - -] DHCP configuration for ports {'47e09a3c-f1b7-4aa5-8177-ec84d27f0975'} is completed#033[00m Nov 23 05:00:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:38 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:00:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:00:38 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:00:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:39 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:00:40 localhost nova_compute[280939]: 2025-11-23 10:00:40.220 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:41 localhost nova_compute[280939]: 2025-11-23 10:00:41.750 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:44 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:44.539 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:00:44 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:44.540 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:00:44 localhost nova_compute[280939]: 2025-11-23 10:00:44.576 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:45 localhost nova_compute[280939]: 2025-11-23 10:00:45.222 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:46 localhost nova_compute[280939]: 2025-11-23 10:00:46.754 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:00:46 localhost podman[314791]: 2025-11-23 10:00:46.897955688 +0000 UTC m=+0.086223800 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:00:46 localhost podman[314791]: 2025-11-23 10:00:46.906443238 +0000 UTC m=+0.094711310 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Nov 23 05:00:46 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:00:47 localhost podman[239764]: time="2025-11-23T10:00:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:00:47 localhost podman[239764]: @ - - [23/Nov/2025:10:00:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156323 "" "Go-http-client/1.1" Nov 23 05:00:47 localhost podman[239764]: @ - - [23/Nov/2025:10:00:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19212 "" "Go-http-client/1.1" Nov 23 05:00:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:49 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:49.925 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:48Z, description=, device_id=090118ef-c1fa-4398-a366-d88c413b007d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9d56fcc7-cb5a-4b37-9262-16ad663944cc, ip_allocation=immediate, mac_address=fa:16:3e:1d:eb:ff, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:32Z, description=, dns_domain=, 
id=6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-85489708, port_security_enabled=True, project_id=0f8848490fb54a5cb41f1607121a115c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43852, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1344, status=ACTIVE, subnets=['494b9fb1-69ce-495d-be13-0668427543fd'], tags=[], tenant_id=0f8848490fb54a5cb41f1607121a115c, updated_at=2025-11-23T10:00:34Z, vlan_transparent=None, network_id=6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, port_security_enabled=False, project_id=0f8848490fb54a5cb41f1607121a115c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1403, status=DOWN, tags=[], tenant_id=0f8848490fb54a5cb41f1607121a115c, updated_at=2025-11-23T10:00:49Z on network 6f8a68b2-9a2d-4a6e-af88-0c91d081d00b#033[00m Nov 23 05:00:50 localhost podman[314827]: 2025-11-23 10:00:50.137922163 +0000 UTC m=+0.059990223 container kill 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 05:00:50 localhost dnsmasq[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/addn_hosts - 1 addresses Nov 23 05:00:50 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/host Nov 23 05:00:50 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/opts Nov 23 05:00:50 localhost nova_compute[280939]: 2025-11-23 10:00:50.270 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:50 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:50.307 2 INFO neutron.agent.securitygroups_rpc [None req-76f35429-371d-4a0a-a261-a7949dd94068 e59892284e454ae28c30542a06194f67 7d06d32932c14944b00061256a49a5ca - - default default] Security group member updated ['3d66d90b-639c-4111-b259-a5454103aaa3']#033[00m Nov 23 05:00:50 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:50.379 262301 INFO neutron.agent.dhcp.agent [None req-36191d48-9f2d-4247-bb2e-7cbeace1bac9 - - - - - -] DHCP configuration for ports {'9d56fcc7-cb5a-4b37-9262-16ad663944cc'} is completed#033[00m Nov 23 05:00:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:51 localhost nova_compute[280939]: 2025-11-23 10:00:51.757 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:52 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:52.348 2 INFO neutron.agent.securitygroups_rpc [None req-aaa2c11d-b613-412d-8872-49d855ed78d3 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated 
['b05f7049-06d0-4552-94d8-97f623373332']#033[00m Nov 23 05:00:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:53 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:53.029 2 INFO neutron.agent.securitygroups_rpc [None req-6f916793-e69f-491a-861e-c8f5876d7582 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m Nov 23 05:00:53 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:53.087 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:00:53 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:53.089 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m Nov 23 05:00:53 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:53.092 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:00:53 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:53.093 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4ce87e-0589-4be8-bde6-22d089126a05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:00:53 localhost ceph-mgr[286671]: 
[volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:00:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:00:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:00:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:00:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:00:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:00:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:53 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:53.542 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:00:53 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:53.687 2 INFO neutron.agent.securitygroups_rpc [None req-8312fbc9-ef14-45ec-8246-3e5b81a28890 e59892284e454ae28c30542a06194f67 7d06d32932c14944b00061256a49a5ca - - default default] Security group member updated ['3d66d90b-639c-4111-b259-a5454103aaa3']#033[00m Nov 23 05:00:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:00:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:00:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:00:53 localhost systemd[1]: tmp-crun.wCrG0D.mount: Deactivated successfully. 
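The DbSetCommand transaction logged just above is the metadata agent acknowledging nb_cfg 12 by writing 'neutron:ovn-metadata-sb-cfg' into its Chassis_Private row in the OVN Southbound database. A minimal ovsdbapp sketch of that kind of write follows; the Southbound connection string and the chassis record name are illustrative assumptions (not values verified on this host), and exact class paths can differ between ovsdbapp releases.

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Assumed values for illustration only.
    SB_CONNECTION = "tcp:127.0.0.1:6642"
    CHASSIS = "np0005532584.localdomain"

    # Build an IDL against the Southbound schema and wrap it in the ovsdbapp API.
    idl = connection.OvsdbIdl.from_server(SB_CONNECTION, "OVN_Southbound")
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    # Same shape of write as the DbSetCommand in the log: merge one key into
    # the external_ids map of the chassis' Chassis_Private record.
    with api.transaction(check_error=True) as txn:
        txn.add(api.db_set("Chassis_Private", CHASSIS,
                           ("external_ids", {"neutron:ovn-metadata-sb-cfg": "12"})))
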
Nov 23 05:00:53 localhost podman[314849]: 2025-11-23 10:00:53.908202617 +0000 UTC m=+0.096899887 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 05:00:53 localhost podman[314850]: 2025-11-23 10:00:53.950808426 +0000 UTC m=+0.135822273 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:00:53 localhost podman[314850]: 2025-11-23 10:00:53.963425934 +0000 UTC m=+0.148439741 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:00:54 localhost podman[314851]: 2025-11-23 10:00:53.998649265 +0000 UTC m=+0.179553166 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:00:54 localhost podman[314849]: 2025-11-23 10:00:54.021078274 +0000 UTC m=+0.209775554 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 05:00:54 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:00:54 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:00:54 localhost podman[314851]: 2025-11-23 10:00:54.07854863 +0000 UTC m=+0.259452561 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 23 05:00:54 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
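The three healthcheck runs above (multipathd, node_exporter, ovn_controller) are transient "/usr/bin/podman healthcheck run <container>" units started by systemd; the command exits 0 when the container's configured healthcheck passes. A small sketch that reuses the same command to poll those containers, assuming only the container names shown in the log:

    import subprocess

    # Container names taken from the healthcheck log entries above.
    CONTAINERS = ("multipathd", "node_exporter", "ovn_controller")

    def is_healthy(name: str) -> bool:
        # "podman healthcheck run" executes the container's configured
        # healthcheck and returns 0 on success, non-zero otherwise.
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True)
        return result.returncode == 0

    for name in CONTAINERS:
        print(f"{name}: {'healthy' if is_healthy(name) else 'unhealthy'}")
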
Nov 23 05:00:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:54.214 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:48Z, description=, device_id=090118ef-c1fa-4398-a366-d88c413b007d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9d56fcc7-cb5a-4b37-9262-16ad663944cc, ip_allocation=immediate, mac_address=fa:16:3e:1d:eb:ff, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:32Z, description=, dns_domain=, id=6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-85489708, port_security_enabled=True, project_id=0f8848490fb54a5cb41f1607121a115c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43852, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1344, status=ACTIVE, subnets=['494b9fb1-69ce-495d-be13-0668427543fd'], tags=[], tenant_id=0f8848490fb54a5cb41f1607121a115c, updated_at=2025-11-23T10:00:34Z, vlan_transparent=None, network_id=6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, port_security_enabled=False, project_id=0f8848490fb54a5cb41f1607121a115c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1403, status=DOWN, tags=[], tenant_id=0f8848490fb54a5cb41f1607121a115c, updated_at=2025-11-23T10:00:49Z on network 6f8a68b2-9a2d-4a6e-af88-0c91d081d00b#033[00m Nov 23 05:00:54 localhost dnsmasq[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/addn_hosts - 1 addresses Nov 23 05:00:54 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/host Nov 23 05:00:54 localhost podman[314932]: 2025-11-23 10:00:54.42367517 +0000 UTC m=+0.058611311 container kill 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:00:54 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/opts Nov 23 05:00:54 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:54.438 2 INFO neutron.agent.securitygroups_rpc [None req-0abb4626-8029-4616-a7f3-bc7ec334c676 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:00:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:54.711 262301 INFO neutron.agent.dhcp.agent [None req-5e74401f-d83e-4eb0-9bfb-ffc695572f00 - - - - - -] DHCP configuration for ports {'9d56fcc7-cb5a-4b37-9262-16ad663944cc'} is completed#033[00m Nov 23 05:00:55 localhost nova_compute[280939]: 2025-11-23 10:00:55.297 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:55 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:55.465 262301 INFO neutron.agent.linux.ip_lib [None req-ebf21a5a-b01f-45ca-8458-bc4c9c55e019 - - - - - -] Device tap5931a702-96 cannot be used as it has no MAC address#033[00m Nov 23 05:00:55 localhost nova_compute[280939]: 2025-11-23 10:00:55.489 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:55 localhost kernel: device tap5931a702-96 entered promiscuous mode Nov 23 05:00:55 localhost nova_compute[280939]: 2025-11-23 10:00:55.497 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:55 localhost ovn_controller[153771]: 2025-11-23T10:00:55Z|00171|binding|INFO|Claiming lport 5931a702-9606-4ddd-aa2b-e6c777bc15ed for this chassis. Nov 23 05:00:55 localhost ovn_controller[153771]: 2025-11-23T10:00:55Z|00172|binding|INFO|5931a702-9606-4ddd-aa2b-e6c777bc15ed: Claiming unknown Nov 23 05:00:55 localhost NetworkManager[5966]: [1763892055.4984] manager: (tap5931a702-96): new Generic device (/org/freedesktop/NetworkManager/Devices/31) Nov 23 05:00:55 localhost systemd-udevd[314963]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:00:55 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:55.509 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-e0393bef-db47-4423-a9c1-5ac7043e4ec3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0393bef-db47-4423-a9c1-5ac7043e4ec3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79509bc833494f3598e01347dc55dea9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c508bc5-7545-4483-b6f3-b3aaeedc94db, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5931a702-9606-4ddd-aa2b-e6c777bc15ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:00:55 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:55.511 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 5931a702-9606-4ddd-aa2b-e6c777bc15ed in datapath e0393bef-db47-4423-a9c1-5ac7043e4ec3 bound to our chassis#033[00m Nov 23 05:00:55 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:55.513 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e0393bef-db47-4423-a9c1-5ac7043e4ec3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:00:55 localhost ovn_metadata_agent[159410]: 2025-11-23 10:00:55.517 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe2b305-87a0-4343-a822-805836c29d21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:00:55 localhost journal[229336]: ethtool ioctl error on tap5931a702-96: No such device Nov 23 05:00:55 localhost nova_compute[280939]: 2025-11-23 10:00:55.526 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:55 localhost journal[229336]: ethtool ioctl error on tap5931a702-96: No such device Nov 23 05:00:55 localhost ovn_controller[153771]: 2025-11-23T10:00:55Z|00173|binding|INFO|Setting lport 5931a702-9606-4ddd-aa2b-e6c777bc15ed ovn-installed in OVS Nov 23 05:00:55 localhost ovn_controller[153771]: 2025-11-23T10:00:55Z|00174|binding|INFO|Setting lport 5931a702-9606-4ddd-aa2b-e6c777bc15ed up in Southbound Nov 23 05:00:55 localhost nova_compute[280939]: 2025-11-23 10:00:55.533 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:55 localhost journal[229336]: ethtool ioctl error on tap5931a702-96: No such device Nov 23 05:00:55 localhost journal[229336]: ethtool ioctl error on tap5931a702-96: No such device Nov 23 05:00:55 localhost journal[229336]: ethtool ioctl error on tap5931a702-96: No such device Nov 23 05:00:55 localhost journal[229336]: ethtool ioctl error on tap5931a702-96: No such device Nov 23 05:00:55 localhost journal[229336]: ethtool ioctl error on tap5931a702-96: No such device Nov 23 05:00:55 localhost journal[229336]: ethtool ioctl error on tap5931a702-96: No such device Nov 23 05:00:55 localhost nova_compute[280939]: 2025-11-23 10:00:55.567 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:55 localhost nova_compute[280939]: 2025-11-23 10:00:55.594 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:55 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:55.783 2 INFO neutron.agent.securitygroups_rpc [None req-f375f660-8a27-403d-b89f-e04d1f758be2 2cfd21f178604be289d8bb16b3b9c18f 0f8848490fb54a5cb41f1607121a115c - - default default] Security group member updated ['c9d46e70-8b37-41f1-b62d-e1679c8d4c9c']#033[00m Nov 23 05:00:55 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:55.840 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:55Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=530a797e-2ea5-4dd8-91bc-b2f55e04dafb, ip_allocation=immediate, mac_address=fa:16:3e:43:79:b9, name=tempest-FloatingIPAdminTestJSON-1955256156, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:32Z, description=, dns_domain=, id=6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-85489708, 
port_security_enabled=True, project_id=0f8848490fb54a5cb41f1607121a115c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43852, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1344, status=ACTIVE, subnets=['494b9fb1-69ce-495d-be13-0668427543fd'], tags=[], tenant_id=0f8848490fb54a5cb41f1607121a115c, updated_at=2025-11-23T10:00:34Z, vlan_transparent=None, network_id=6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, port_security_enabled=True, project_id=0f8848490fb54a5cb41f1607121a115c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['c9d46e70-8b37-41f1-b62d-e1679c8d4c9c'], standard_attr_id=1430, status=DOWN, tags=[], tenant_id=0f8848490fb54a5cb41f1607121a115c, updated_at=2025-11-23T10:00:55Z on network 6f8a68b2-9a2d-4a6e-af88-0c91d081d00b#033[00m Nov 23 05:00:56 localhost dnsmasq[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/addn_hosts - 2 addresses Nov 23 05:00:56 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/host Nov 23 05:00:56 localhost podman[315027]: 2025-11-23 10:00:56.112623827 +0000 UTC m=+0.056994362 container kill 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 05:00:56 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/opts Nov 23 05:00:56 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:56.361 262301 INFO neutron.agent.dhcp.agent [None req-28daed1a-4d3d-486c-b51b-c6754ced4154 - - - - - -] DHCP configuration for ports {'530a797e-2ea5-4dd8-91bc-b2f55e04dafb'} is completed#033[00m Nov 23 05:00:56 localhost podman[315070]: Nov 23 05:00:56 localhost podman[315070]: 2025-11-23 10:00:56.445884163 +0000 UTC m=+0.088696936 container create e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0393bef-db47-4423-a9c1-5ac7043e4ec3, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 23 05:00:56 localhost systemd[1]: Started libpod-conmon-e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718.scope. Nov 23 05:00:56 localhost podman[315070]: 2025-11-23 10:00:56.401717076 +0000 UTC m=+0.044529929 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:00:56 localhost systemd[1]: Started libcrun container. 
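Each "read .../addn_hosts - N addresses" line above is dnsmasq re-reading the per-network files the DHCP agent renders under /var/lib/neutron/dhcp/<network-id>/ whenever the agent signals the dnsmasq container (the "container kill" events in the log). A rough way to reproduce that address count from the same file, assuming addn_hosts keeps the usual hosts-file layout of one "IP name..." entry per non-comment line:

    from pathlib import Path

    NEUTRON_DHCP_DIR = Path("/var/lib/neutron/dhcp")

    def addn_hosts_addresses(network_id: str) -> int:
        # Count non-empty, non-comment lines; this approximates the
        # "- N addresses" figure dnsmasq reports when it re-reads the file.
        path = NEUTRON_DHCP_DIR / network_id / "addn_hosts"
        return sum(1 for line in path.read_text().splitlines()
                   if line.strip() and not line.lstrip().startswith("#"))

    # Network ID taken from the log entries above.
    print(addn_hosts_addresses("e0393bef-db47-4423-a9c1-5ac7043e4ec3"))
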
Nov 23 05:00:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a317098a3a677ef0882aa803700716f18b24019cbed1cb214fce8aec66641774/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:00:56 localhost podman[315070]: 2025-11-23 10:00:56.521439014 +0000 UTC m=+0.164251797 container init e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0393bef-db47-4423-a9c1-5ac7043e4ec3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:00:56 localhost podman[315070]: 2025-11-23 10:00:56.529900463 +0000 UTC m=+0.172713236 container start e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0393bef-db47-4423-a9c1-5ac7043e4ec3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:00:56 localhost dnsmasq[315089]: started, version 2.85 cachesize 150 Nov 23 05:00:56 localhost dnsmasq[315089]: DNS service limited to local subnets Nov 23 05:00:56 localhost dnsmasq[315089]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:00:56 localhost dnsmasq[315089]: warning: no upstream servers configured Nov 23 05:00:56 localhost dnsmasq-dhcp[315089]: DHCP, static leases only on 10.102.0.0, lease time 1d Nov 23 05:00:56 localhost dnsmasq[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/addn_hosts - 0 addresses Nov 23 05:00:56 localhost dnsmasq-dhcp[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/host Nov 23 05:00:56 localhost dnsmasq-dhcp[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/opts Nov 23 05:00:56 localhost neutron_sriov_agent[255165]: 2025-11-23 10:00:56.612 2 INFO neutron.agent.securitygroups_rpc [None req-9e459c8e-445e-40f9-9db8-24291ed822a4 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:00:56 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:56.721 262301 INFO neutron.agent.dhcp.agent [None req-3c373308-ea69-4297-9905-5355888effd0 - - - - - -] DHCP configuration for ports {'1e47e9ec-8efb-4bd5-b6e5-5d47ca35017b'} is completed#033[00m Nov 23 05:00:56 localhost nova_compute[280939]: 2025-11-23 10:00:56.759 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:00:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:00:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:00:58 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:58.465 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:57Z, description=, device_id=eaab74f7-5e3e-4996-bbac-c869375065e2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=47f66e99-ae03-4075-acaa-eb5e7bb2d65b, ip_allocation=immediate, mac_address=fa:16:3e:f7:9b:e3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:51Z, description=, dns_domain=, id=e0393bef-db47-4423-a9c1-5ac7043e4ec3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1259116639, port_security_enabled=True, project_id=79509bc833494f3598e01347dc55dea9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10628, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1414, status=ACTIVE, subnets=['430038f7-28e1-48ec-8be7-710b6927462e'], tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T10:00:54Z, vlan_transparent=None, network_id=e0393bef-db47-4423-a9c1-5ac7043e4ec3, port_security_enabled=False, project_id=79509bc833494f3598e01347dc55dea9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1441, status=DOWN, tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T10:00:58Z on network e0393bef-db47-4423-a9c1-5ac7043e4ec3#033[00m Nov 23 05:00:58 localhost podman[315107]: 2025-11-23 10:00:58.678369464 +0000 UTC m=+0.059745826 container kill e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0393bef-db47-4423-a9c1-5ac7043e4ec3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:00:58 localhost dnsmasq[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/addn_hosts - 1 addresses Nov 23 05:00:58 localhost dnsmasq-dhcp[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/host Nov 23 05:00:58 localhost dnsmasq-dhcp[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/opts Nov 23 05:00:58 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:00:58.924 262301 INFO neutron.agent.dhcp.agent [None req-07c2cd8f-f6f7-4d23-a083-4c2b9d69c45a - - - - - -] DHCP configuration for ports {'47f66e99-ae03-4075-acaa-eb5e7bb2d65b'} is completed#033[00m Nov 23 05:00:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:01:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:01:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3407167775' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:01:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:01:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3407167775' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:01:00 localhost nova_compute[280939]: 2025-11-23 10:01:00.330 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:01:01 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:01.469 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:57Z, description=, device_id=eaab74f7-5e3e-4996-bbac-c869375065e2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=47f66e99-ae03-4075-acaa-eb5e7bb2d65b, ip_allocation=immediate, mac_address=fa:16:3e:f7:9b:e3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:51Z, description=, dns_domain=, id=e0393bef-db47-4423-a9c1-5ac7043e4ec3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1259116639, port_security_enabled=True, project_id=79509bc833494f3598e01347dc55dea9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10628, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1414, status=ACTIVE, subnets=['430038f7-28e1-48ec-8be7-710b6927462e'], tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T10:00:54Z, vlan_transparent=None, network_id=e0393bef-db47-4423-a9c1-5ac7043e4ec3, port_security_enabled=False, project_id=79509bc833494f3598e01347dc55dea9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1441, status=DOWN, tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T10:00:58Z on network e0393bef-db47-4423-a9c1-5ac7043e4ec3#033[00m Nov 23 05:01:01 localhost systemd[1]: tmp-crun.IN0y1I.mount: Deactivated successfully. 
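The two audit entries above show client.openstack polling pool usage with "df" and "osd pool get-quota". The same mon commands can be sent from Python through librados; a minimal sketch follows, where the conffile path and client name are assumptions taken from the log rather than verified local settings.

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command takes the command as a JSON string plus an input buffer.
            ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
            if ret == 0:
                print(cmd["prefix"], json.loads(outbuf))
            else:
                print(cmd["prefix"], "failed:", outs)
    finally:
        cluster.shutdown()
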
Nov 23 05:01:01 localhost podman[315146]: 2025-11-23 10:01:01.68688061 +0000 UTC m=+0.075103738 container kill e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0393bef-db47-4423-a9c1-5ac7043e4ec3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118) Nov 23 05:01:01 localhost dnsmasq[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/addn_hosts - 1 addresses Nov 23 05:01:01 localhost dnsmasq-dhcp[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/host Nov 23 05:01:01 localhost dnsmasq-dhcp[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/opts Nov 23 05:01:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:01:01 localhost nova_compute[280939]: 2025-11-23 10:01:01.768 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:01 localhost podman[315171]: 2025-11-23 10:01:01.817033037 +0000 UTC m=+0.088696566 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_id=edpm, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7) Nov 23 05:01:01 localhost podman[315171]: 2025-11-23 10:01:01.828825949 +0000 UTC m=+0.100489518 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, managed_by=edpm_ansible, release=1755695350, config_id=edpm, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 05:01:01 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 05:01:01 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:01.901 262301 INFO neutron.agent.dhcp.agent [None req-2888fedb-2e69-4585-9ba8-2925fa9325b5 - - - - - -] DHCP configuration for ports {'47f66e99-ae03-4075-acaa-eb5e7bb2d65b'} is completed#033[00m Nov 23 05:01:02 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:02.100 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:02 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:02.102 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m Nov 23 05:01:02 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:02.106 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:02 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:02.107 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8dca0963-6eb3-4fb7-8294-d45373f2537d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:02 localhost nova_compute[280939]: 2025-11-23 10:01:02.556 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:02 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:02.872 2 INFO neutron.agent.securitygroups_rpc [None req-6be775ef-69f0-4d4b-8550-9fb2b9ad120e 2cfd21f178604be289d8bb16b3b9c18f 0f8848490fb54a5cb41f1607121a115c - - default default] Security group member updated ['c9d46e70-8b37-41f1-b62d-e1679c8d4c9c']#033[00m Nov 23 05:01:03 localhost dnsmasq[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/addn_hosts - 1 addresses Nov 23 05:01:03 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/host Nov 23 05:01:03 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/opts Nov 23 05:01:03 localhost podman[315214]: 2025-11-23 10:01:03.117713637 +0000 UTC m=+0.059384635 container kill 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:01:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:01:03 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:03.410 2 INFO neutron.agent.securitygroups_rpc [None req-4c75c4e7-1ac8-40bf-8bb2-ec0e75e4208b a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m Nov 23 05:01:04 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:04.764 262301 INFO neutron.agent.linux.ip_lib [None req-a0e2955d-79c3-4373-b7d4-b3f808364f11 - - - - - -] Device tap26b19460-c4 cannot be used as it has no MAC address#033[00m Nov 23 05:01:04 localhost nova_compute[280939]: 2025-11-23 10:01:04.812 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:04 localhost kernel: device tap26b19460-c4 entered promiscuous mode Nov 23 05:01:04 localhost ovn_controller[153771]: 2025-11-23T10:01:04Z|00175|binding|INFO|Claiming lport 26b19460-c466-4094-bb15-ed2a3d1e848d for this chassis. Nov 23 05:01:04 localhost ovn_controller[153771]: 2025-11-23T10:01:04Z|00176|binding|INFO|26b19460-c466-4094-bb15-ed2a3d1e848d: Claiming unknown Nov 23 05:01:04 localhost nova_compute[280939]: 2025-11-23 10:01:04.823 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:04 localhost NetworkManager[5966]: [1763892064.8263] manager: (tap26b19460-c4): new Generic device (/org/freedesktop/NetworkManager/Devices/32) Nov 23 05:01:04 localhost systemd-udevd[315245]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 05:01:04 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:04.835 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-a5dafb9f-79ee-48c9-a407-ff6081d49752', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5dafb9f-79ee-48c9-a407-ff6081d49752', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f552ffbc49734cd69f687383dc092a2b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9579bb8c-2fff-4855-a0e2-47a12da6098f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=26b19460-c466-4094-bb15-ed2a3d1e848d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:04 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:04.837 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 26b19460-c466-4094-bb15-ed2a3d1e848d in datapath a5dafb9f-79ee-48c9-a407-ff6081d49752 bound to our chassis#033[00m Nov 23 05:01:04 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:04.839 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a5dafb9f-79ee-48c9-a407-ff6081d49752 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:01:04 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:04.840 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[d09d1bc3-5765-43fd-80d9-92f7747b1bea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:04 localhost journal[229336]: ethtool ioctl error on tap26b19460-c4: No such device Nov 23 05:01:04 localhost ovn_controller[153771]: 2025-11-23T10:01:04Z|00177|binding|INFO|Setting lport 26b19460-c466-4094-bb15-ed2a3d1e848d ovn-installed in OVS Nov 23 05:01:04 localhost ovn_controller[153771]: 2025-11-23T10:01:04Z|00178|binding|INFO|Setting lport 26b19460-c466-4094-bb15-ed2a3d1e848d up in Southbound Nov 23 05:01:04 localhost journal[229336]: ethtool ioctl error on tap26b19460-c4: No such device Nov 23 05:01:04 localhost nova_compute[280939]: 2025-11-23 10:01:04.861 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:04 localhost journal[229336]: ethtool ioctl error on tap26b19460-c4: No such device Nov 23 05:01:04 localhost journal[229336]: ethtool ioctl error on tap26b19460-c4: No such device Nov 23 05:01:04 localhost journal[229336]: ethtool ioctl error on tap26b19460-c4: No such device Nov 23 05:01:04 localhost journal[229336]: ethtool ioctl error on tap26b19460-c4: No such device Nov 23 05:01:04 localhost journal[229336]: ethtool ioctl error on tap26b19460-c4: No such device Nov 23 05:01:04 
localhost journal[229336]: ethtool ioctl error on tap26b19460-c4: No such device Nov 23 05:01:04 localhost nova_compute[280939]: 2025-11-23 10:01:04.898 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:04 localhost nova_compute[280939]: 2025-11-23 10:01:04.926 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:05 localhost nova_compute[280939]: 2025-11-23 10:01:05.331 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:01:05 localhost systemd[1]: tmp-crun.Q6RUyN.mount: Deactivated successfully. Nov 23 05:01:05 localhost dnsmasq[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/addn_hosts - 0 addresses Nov 23 05:01:05 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/host Nov 23 05:01:05 localhost dnsmasq-dhcp[314790]: read /var/lib/neutron/dhcp/6f8a68b2-9a2d-4a6e-af88-0c91d081d00b/opts Nov 23 05:01:05 localhost podman[315311]: 2025-11-23 10:01:05.537352636 +0000 UTC m=+0.074180549 container kill 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Nov 23 05:01:05 localhost nova_compute[280939]: 2025-11-23 10:01:05.750 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:05 localhost ovn_controller[153771]: 2025-11-23T10:01:05Z|00179|binding|INFO|Releasing lport c998f5f4-5ad9-4b85-914f-097dab6b3c9d from this chassis (sb_readonly=0) Nov 23 05:01:05 localhost kernel: device tapc998f5f4-5a left promiscuous mode Nov 23 05:01:05 localhost ovn_controller[153771]: 2025-11-23T10:01:05Z|00180|binding|INFO|Setting lport c998f5f4-5ad9-4b85-914f-097dab6b3c9d down in Southbound Nov 23 05:01:05 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:05.762 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f8848490fb54a5cb41f1607121a115c', 'neutron:revision_number': '3', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baa61e63-a759-4aa3-8605-4bc7bccad076, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c998f5f4-5ad9-4b85-914f-097dab6b3c9d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:05 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:05.764 159415 INFO neutron.agent.ovn.metadata.agent [-] Port c998f5f4-5ad9-4b85-914f-097dab6b3c9d in datapath 6f8a68b2-9a2d-4a6e-af88-0c91d081d00b unbound from our chassis#033[00m Nov 23 05:01:05 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:05.766 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:05 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:05.767 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[ff10d78a-1ca8-4be9-8482-bebf1b54322f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:05 localhost nova_compute[280939]: 2025-11-23 10:01:05.769 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:05 localhost podman[315355]: Nov 23 05:01:05 localhost podman[315355]: 2025-11-23 10:01:05.829957714 +0000 UTC m=+0.098311350 container create 6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5dafb9f-79ee-48c9-a407-ff6081d49752, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 23 05:01:05 localhost systemd[1]: Started libpod-conmon-6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb.scope. Nov 23 05:01:05 localhost podman[315355]: 2025-11-23 10:01:05.785189899 +0000 UTC m=+0.053543545 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:01:05 localhost systemd[1]: tmp-crun.cjDNm7.mount: Deactivated successfully. Nov 23 05:01:05 localhost systemd[1]: Started libcrun container. 
Nov 23 05:01:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff224528243f060a2343a1e09e777d34de65ff5333323ac25aed8e5dce1b4159/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:01:05 localhost podman[315355]: 2025-11-23 10:01:05.915897933 +0000 UTC m=+0.184251579 container init 6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5dafb9f-79ee-48c9-a407-ff6081d49752, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 05:01:05 localhost podman[315355]: 2025-11-23 10:01:05.924296651 +0000 UTC m=+0.192650297 container start 6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5dafb9f-79ee-48c9-a407-ff6081d49752, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 05:01:05 localhost dnsmasq[315375]: started, version 2.85 cachesize 150 Nov 23 05:01:05 localhost dnsmasq[315375]: DNS service limited to local subnets Nov 23 05:01:05 localhost dnsmasq[315375]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:01:05 localhost dnsmasq[315375]: warning: no upstream servers configured Nov 23 05:01:05 localhost dnsmasq-dhcp[315375]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:01:05 localhost dnsmasq[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/addn_hosts - 0 addresses Nov 23 05:01:05 localhost dnsmasq-dhcp[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/host Nov 23 05:01:05 localhost dnsmasq-dhcp[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/opts Nov 23 05:01:06 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:06.217 262301 INFO neutron.agent.dhcp.agent [None req-4d1d8658-59b5-4f32-aeaf-25920c399643 - - - - - -] DHCP configuration for ports {'879d5e05-aa55-4619-9ad5-126c81ad35bb'} is completed#033[00m Nov 23 05:01:06 localhost openstack_network_exporter[241732]: ERROR 10:01:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:01:06 localhost openstack_network_exporter[241732]: ERROR 10:01:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:01:06 localhost openstack_network_exporter[241732]: ERROR 10:01:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:01:06 localhost openstack_network_exporter[241732]: ERROR 10:01:06 collector.go:208: status(315244): open /host/proc/22135/task/315244/status: no such file or directory Nov 23 05:01:06 localhost openstack_network_exporter[241732]: ERROR 10:01:06 appctl.go:174: 
call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:01:06 localhost openstack_network_exporter[241732]: Nov 23 05:01:06 localhost openstack_network_exporter[241732]: ERROR 10:01:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:01:06 localhost openstack_network_exporter[241732]: Nov 23 05:01:06 localhost nova_compute[280939]: 2025-11-23 10:01:06.770 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e112 do_prune osdmap full prune enabled Nov 23 05:01:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:01:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:01:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e113 e113: 6 total, 6 up, 6 in Nov 23 05:01:06 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e113: 6 total, 6 up, 6 in Nov 23 05:01:06 localhost podman[315377]: 2025-11-23 10:01:06.906967704 +0000 UTC m=+0.080824743 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:01:06 localhost podman[315377]: 2025-11-23 10:01:06.922341577 +0000 UTC m=+0.096198606 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 05:01:06 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 05:01:07 localhost podman[315376]: 2025-11-23 10:01:07.011515705 +0000 UTC m=+0.188902262 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0) Nov 23 05:01:07 localhost podman[315376]: 2025-11-23 10:01:07.046793529 +0000 UTC m=+0.224180106 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 05:01:07 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:01:07 localhost nova_compute[280939]: 2025-11-23 10:01:07.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:01:07 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:07.213 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.3 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:07 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:07.214 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m Nov 23 05:01:07 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:07.217 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:07 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:07.218 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[6ceaa3d4-2fd8-42c0-abd2-68d35736f105]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail Nov 23 05:01:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e113 do_prune osdmap full prune enabled Nov 23 05:01:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e114 e114: 6 total, 6 up, 6 in Nov 23 05:01:07 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e114: 6 total, 6 up, 6 in Nov 23 05:01:08 localhost nova_compute[280939]: 2025-11-23 10:01:08.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:01:08 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:08.494 2 INFO neutron.agent.securitygroups_rpc [None req-d30670ed-c29e-4282-92c2-f353a53316ea 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:01:09 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2910603634' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:01:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:01:09 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2910603634' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:01:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s Nov 23 05:01:09 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:09.410 2 INFO neutron.agent.securitygroups_rpc [None req-ee598349-ae04-45e4-9403-8b439fe516e0 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:09.743 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:01:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:09.743 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:01:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:09.744 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:01:09 localhost dnsmasq[314790]: exiting on receipt of SIGTERM Nov 23 05:01:09 localhost podman[315436]: 2025-11-23 10:01:09.771565501 +0000 UTC m=+0.056853888 container kill 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2) Nov 23 05:01:09 localhost systemd[1]: libpod-9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb.scope: Deactivated successfully. Nov 23 05:01:09 localhost podman[315450]: 2025-11-23 10:01:09.857320354 +0000 UTC m=+0.074131078 container died 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:01:09 localhost systemd[1]: tmp-crun.B8ngZI.mount: Deactivated successfully. Nov 23 05:01:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb-userdata-shm.mount: Deactivated successfully. 
Nov 23 05:01:09 localhost podman[315450]: 2025-11-23 10:01:09.896168218 +0000 UTC m=+0.112978891 container cleanup 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:01:09 localhost systemd[1]: libpod-conmon-9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb.scope: Deactivated successfully. Nov 23 05:01:09 localhost podman[315457]: 2025-11-23 10:01:09.935931299 +0000 UTC m=+0.136165663 container remove 9c96a2061360fc825839dfe85f70f7a6df82bcb8edacd89586ff5246e42a9ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f8a68b2-9a2d-4a6e-af88-0c91d081d00b, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:01:10 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:10.197 262301 INFO neutron.agent.dhcp.agent [None req-d5bf2f7c-6ae3-417c-b079-59fc4ed9ca09 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:10 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:10.198 262301 INFO neutron.agent.dhcp.agent [None req-d5bf2f7c-6ae3-417c-b079-59fc4ed9ca09 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:10 localhost nova_compute[280939]: 2025-11-23 10:01:10.371 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:10 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:10.756 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:10 localhost systemd[1]: var-lib-containers-storage-overlay-89d03d6b5395b809abf402511d5cd06f0c2661d7c254922bc7328733d40449cb-merged.mount: Deactivated successfully. Nov 23 05:01:10 localhost systemd[1]: run-netns-qdhcp\x2d6f8a68b2\x2d9a2d\x2d4a6e\x2daf88\x2d0c91d081d00b.mount: Deactivated successfully. 
Nov 23 05:01:11 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:11.036 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:10Z, description=, device_id=de9f983f-1f13-46ad-82e3-496e0a20a1d1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0210859f-2a1b-494d-bd9a-9ddc44b36420, ip_allocation=immediate, mac_address=fa:16:3e:da:34:64, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:01:01Z, description=, dns_domain=, id=a5dafb9f-79ee-48c9-a407-ff6081d49752, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-256689770-network, port_security_enabled=True, project_id=f552ffbc49734cd69f687383dc092a2b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53541, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1454, status=ACTIVE, subnets=['447fdff4-c82a-446e-ba6a-4446b9c0a9ba'], tags=[], tenant_id=f552ffbc49734cd69f687383dc092a2b, updated_at=2025-11-23T10:01:03Z, vlan_transparent=None, network_id=a5dafb9f-79ee-48c9-a407-ff6081d49752, port_security_enabled=False, project_id=f552ffbc49734cd69f687383dc092a2b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1481, status=DOWN, tags=[], tenant_id=f552ffbc49734cd69f687383dc092a2b, updated_at=2025-11-23T10:01:10Z on network a5dafb9f-79ee-48c9-a407-ff6081d49752#033[00m Nov 23 05:01:11 localhost nova_compute[280939]: 2025-11-23 10:01:11.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:01:11 localhost nova_compute[280939]: 2025-11-23 10:01:11.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:01:11 localhost nova_compute[280939]: 2025-11-23 10:01:11.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:01:11 localhost nova_compute[280939]: 2025-11-23 10:01:11.154 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:01:11 localhost nova_compute[280939]: 2025-11-23 10:01:11.154 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:01:11 localhost nova_compute[280939]: 2025-11-23 10:01:11.171 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:11 localhost nova_compute[280939]: 2025-11-23 10:01:11.234 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:11 localhost podman[315497]: 2025-11-23 10:01:11.27307875 +0000 UTC m=+0.055606529 container kill 6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5dafb9f-79ee-48c9-a407-ff6081d49752, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:01:11 localhost dnsmasq[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/addn_hosts - 1 addresses Nov 23 05:01:11 localhost dnsmasq-dhcp[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/host Nov 23 05:01:11 localhost dnsmasq-dhcp[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/opts Nov 23 05:01:11 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:11.279 2 INFO neutron.agent.securitygroups_rpc [None req-4b64e528-1440-49a5-b870-0a0f4d60c275 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m Nov 23 05:01:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s Nov 23 05:01:11 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:11.548 262301 INFO neutron.agent.dhcp.agent [None req-644db8ab-b6fc-477f-8579-963e1a2572b3 - - - - - -] DHCP configuration for ports {'0210859f-2a1b-494d-bd9a-9ddc44b36420'} is completed#033[00m Nov 23 05:01:11 localhost nova_compute[280939]: 2025-11-23 10:01:11.798 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e114 do_prune osdmap full prune enabled Nov 23 05:01:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 e115: 6 total, 6 up, 6 in Nov 23 05:01:12 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e115: 6 total, 6 up, 6 in Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.578 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.581 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:01:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:01:13 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:13.101 2 INFO neutron.agent.securitygroups_rpc [None req-a6cbb5e4-51d9-4058-bf96-80e547b16a25 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m 
Nov 23 05:01:13 localhost nova_compute[280939]: 2025-11-23 10:01:13.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:01:13 localhost nova_compute[280939]: 2025-11-23 10:01:13.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:01:13 localhost nova_compute[280939]: 2025-11-23 10:01:13.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:01:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 467 B/s rd, 934 B/s wr, 1 op/s Nov 23 05:01:13 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:13.819 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.3 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:13 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:13.821 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m Nov 23 05:01:13 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:13.824 159415 DEBUG 
neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:13 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:13.825 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[ec954d08-3dca-4331-a4b4-6070503aba3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:13 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:13.992 262301 INFO neutron.agent.linux.ip_lib [None req-94833212-7128-4112-bda2-b3668fe2baaa - - - - - -] Device tap19441b64-74 cannot be used as it has no MAC address#033[00m Nov 23 05:01:14 localhost nova_compute[280939]: 2025-11-23 10:01:14.048 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:14 localhost kernel: device tap19441b64-74 entered promiscuous mode Nov 23 05:01:14 localhost ovn_controller[153771]: 2025-11-23T10:01:14Z|00181|binding|INFO|Claiming lport 19441b64-7478-4dca-a309-b5edfa93c3de for this chassis. Nov 23 05:01:14 localhost NetworkManager[5966]: [1763892074.0558] manager: (tap19441b64-74): new Generic device (/org/freedesktop/NetworkManager/Devices/33) Nov 23 05:01:14 localhost ovn_controller[153771]: 2025-11-23T10:01:14Z|00182|binding|INFO|19441b64-7478-4dca-a309-b5edfa93c3de: Claiming unknown Nov 23 05:01:14 localhost nova_compute[280939]: 2025-11-23 10:01:14.057 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:14 localhost systemd-udevd[315527]: Network interface NamePolicy= disabled on kernel command line. 
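Right after ovn_controller claims the lport and the kernel reports tap19441b64-74 entering promiscuous mode, the "ethtool ioctl error ... No such device" lines that follow show a probe racing the interface setup. A hedged sketch of one way to wait for a newly plugged tap device to become visible in sysfs before probing it; only the device name is taken from the log, the helper and its parameters are illustrative:

import os
import time


def wait_for_interface(name, timeout=5.0, interval=0.1):
    """Return True once /sys/class/net/<name> exists, False on timeout."""
    deadline = time.monotonic() + timeout
    path = os.path.join("/sys/class/net", name)
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False


if wait_for_interface("tap19441b64-74"):
    print("interface is visible to the kernel")
else:
    print("interface did not appear in time")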
Nov 23 05:01:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:14.070 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c05c08b4d8794ff1b33e7233ec64d938', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb7bb7f2-4791-41f8-bcdb-c6045c345937, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=19441b64-7478-4dca-a309-b5edfa93c3de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:14.071 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 19441b64-7478-4dca-a309-b5edfa93c3de in datapath b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e bound to our chassis#033[00m Nov 23 05:01:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:14.073 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:01:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:14.074 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8405f0c6-4135-40dc-a60b-1f9a716e37f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:14 localhost journal[229336]: ethtool ioctl error on tap19441b64-74: No such device Nov 23 05:01:14 localhost ovn_controller[153771]: 2025-11-23T10:01:14Z|00183|binding|INFO|Setting lport 19441b64-7478-4dca-a309-b5edfa93c3de ovn-installed in OVS Nov 23 05:01:14 localhost ovn_controller[153771]: 2025-11-23T10:01:14Z|00184|binding|INFO|Setting lport 19441b64-7478-4dca-a309-b5edfa93c3de up in Southbound Nov 23 05:01:14 localhost journal[229336]: ethtool ioctl error on tap19441b64-74: No such device Nov 23 05:01:14 localhost nova_compute[280939]: 2025-11-23 10:01:14.089 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:14 localhost journal[229336]: ethtool ioctl error on tap19441b64-74: No such device Nov 23 05:01:14 localhost journal[229336]: ethtool ioctl error on tap19441b64-74: No such device Nov 23 05:01:14 localhost journal[229336]: ethtool ioctl error on tap19441b64-74: No such device Nov 23 05:01:14 localhost journal[229336]: ethtool ioctl error on tap19441b64-74: No such device Nov 23 05:01:14 localhost journal[229336]: ethtool ioctl error on tap19441b64-74: No such device Nov 23 05:01:14 
localhost journal[229336]: ethtool ioctl error on tap19441b64-74: No such device Nov 23 05:01:14 localhost nova_compute[280939]: 2025-11-23 10:01:14.126 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:14 localhost nova_compute[280939]: 2025-11-23 10:01:14.152 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:14 localhost podman[315598]: Nov 23 05:01:14 localhost podman[315598]: 2025-11-23 10:01:14.982601358 +0000 UTC m=+0.087099776 container create ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 05:01:15 localhost systemd[1]: Started libpod-conmon-ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8.scope. Nov 23 05:01:15 localhost systemd[1]: Started libcrun container. Nov 23 05:01:15 localhost podman[315598]: 2025-11-23 10:01:14.94033881 +0000 UTC m=+0.044837278 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:01:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e42cbed5fb97c672d5a9c663ea68e2e3bb925e6a2acd1f1331d2d18766e9dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:01:15 localhost podman[315598]: 2025-11-23 10:01:15.050098431 +0000 UTC m=+0.154596839 container init ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 23 05:01:15 localhost podman[315598]: 2025-11-23 10:01:15.059935453 +0000 UTC m=+0.164433861 container start ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:01:15 localhost dnsmasq[315616]: started, version 2.85 cachesize 150 Nov 23 05:01:15 localhost dnsmasq[315616]: DNS service limited to local subnets Nov 23 05:01:15 localhost dnsmasq[315616]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:01:15 localhost dnsmasq[315616]: 
warning: no upstream servers configured Nov 23 05:01:15 localhost dnsmasq-dhcp[315616]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:01:15 localhost dnsmasq[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/addn_hosts - 0 addresses Nov 23 05:01:15 localhost dnsmasq-dhcp[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/host Nov 23 05:01:15 localhost dnsmasq-dhcp[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/opts Nov 23 05:01:15 localhost nova_compute[280939]: 2025-11-23 10:01:15.129 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:01:15 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:15.229 262301 INFO neutron.agent.dhcp.agent [None req-1301c67f-13cd-4c0d-bed6-14673268e696 - - - - - -] DHCP configuration for ports {'915e15f5-2987-46f7-8552-435233ff4152'} is completed#033[00m Nov 23 05:01:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 21 op/s Nov 23 05:01:15 localhost nova_compute[280939]: 2025-11-23 10:01:15.396 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:15 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:15.502 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:10Z, description=, device_id=de9f983f-1f13-46ad-82e3-496e0a20a1d1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0210859f-2a1b-494d-bd9a-9ddc44b36420, ip_allocation=immediate, mac_address=fa:16:3e:da:34:64, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:01:01Z, description=, dns_domain=, id=a5dafb9f-79ee-48c9-a407-ff6081d49752, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-256689770-network, port_security_enabled=True, project_id=f552ffbc49734cd69f687383dc092a2b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53541, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1454, status=ACTIVE, subnets=['447fdff4-c82a-446e-ba6a-4446b9c0a9ba'], tags=[], tenant_id=f552ffbc49734cd69f687383dc092a2b, updated_at=2025-11-23T10:01:03Z, vlan_transparent=None, network_id=a5dafb9f-79ee-48c9-a407-ff6081d49752, port_security_enabled=False, project_id=f552ffbc49734cd69f687383dc092a2b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1481, status=DOWN, tags=[], tenant_id=f552ffbc49734cd69f687383dc092a2b, updated_at=2025-11-23T10:01:10Z on network a5dafb9f-79ee-48c9-a407-ff6081d49752#033[00m Nov 23 05:01:15 localhost dnsmasq[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/addn_hosts - 1 addresses Nov 23 05:01:15 localhost dnsmasq-dhcp[315375]: read 
/var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/host Nov 23 05:01:15 localhost dnsmasq-dhcp[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/opts Nov 23 05:01:15 localhost podman[315633]: 2025-11-23 10:01:15.706203433 +0000 UTC m=+0.058009243 container kill 6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5dafb9f-79ee-48c9-a407-ff6081d49752, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 05:01:16 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:15.999 262301 INFO neutron.agent.dhcp.agent [None req-9948afe1-9a4e-495d-8c74-4125181b74af - - - - - -] DHCP configuration for ports {'0210859f-2a1b-494d-bd9a-9ddc44b36420'} is completed#033[00m Nov 23 05:01:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:16.338 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:16.340 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m Nov 23 05:01:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:16.344 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were 
found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:16.345 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[5161ad1e-d1a3-4d5b-9521-0a749702a609]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:16 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:16.642 2 INFO neutron.agent.securitygroups_rpc [None req-49a4dc10-ae4e-41b1-8135-2f2053e37dc6 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m Nov 23 05:01:16 localhost nova_compute[280939]: 2025-11-23 10:01:16.842 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:17 localhost podman[239764]: time="2025-11-23T10:01:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:01:17 localhost podman[239764]: @ - - [23/Nov/2025:10:01:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159971 "" "Go-http-client/1.1" Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:01:17 localhost podman[239764]: @ - - [23/Nov/2025:10:01:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20155 "" "Go-http-client/1.1" Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.163 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.163 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.164 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.165 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.165 280943 DEBUG oslo_concurrency.processutils [None 
req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:01:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 965 B/s wr, 18 op/s Nov 23 05:01:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:01:17 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1119170908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.628 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:01:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.837 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.840 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11562MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.840 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.841 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:01:17 localhost podman[315675]: 2025-11-23 10:01:17.893478276 +0000 UTC m=+0.078236974 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Nov 23 05:01:17 localhost podman[315675]: 2025-11-23 10:01:17.902822473 +0000 UTC m=+0.087581161 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 05:01:17 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.925 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.926 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:01:17 localhost nova_compute[280939]: 2025-11-23 10:01:17.945 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:01:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:01:18 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/274354533' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:01:18 localhost nova_compute[280939]: 2025-11-23 10:01:18.369 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:01:18 localhost nova_compute[280939]: 2025-11-23 10:01:18.376 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:01:18 localhost nova_compute[280939]: 2025-11-23 10:01:18.392 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:01:18 localhost nova_compute[280939]: 2025-11-23 10:01:18.395 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:01:18 localhost nova_compute[280939]: 2025-11-23 10:01:18.395 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:01:18 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:18.569 2 INFO neutron.agent.securitygroups_rpc [None req-599670a7-0640-44a2-ad54-406dd4624d40 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:19 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:19.264 2 INFO neutron.agent.securitygroups_rpc [None req-7242a629-d88b-4313-9e67-39aa189122ef fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 307 B/s wr, 16 op/s Nov 23 05:01:19 localhost nova_compute[280939]: 2025-11-23 10:01:19.395 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:01:20 localhost nova_compute[280939]: 2025-11-23 10:01:20.128 280943 
DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:01:20 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:20.234 2 INFO neutron.agent.securitygroups_rpc [None req-55e03d23-65b9-4d8a-853a-a5da3e2be7a3 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:20 localhost ovn_controller[153771]: 2025-11-23T10:01:20Z|00185|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 05:01:20 localhost ovn_controller[153771]: 2025-11-23T10:01:20Z|00186|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 05:01:20 localhost ovn_controller[153771]: 2025-11-23T10:01:20Z|00187|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0 Nov 23 05:01:20 localhost nova_compute[280939]: 2025-11-23 10:01:20.435 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:20 localhost nova_compute[280939]: 2025-11-23 10:01:20.439 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:20 localhost nova_compute[280939]: 2025-11-23 10:01:20.447 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:20 localhost nova_compute[280939]: 2025-11-23 10:01:20.459 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:20 localhost nova_compute[280939]: 2025-11-23 10:01:20.470 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:20 localhost nova_compute[280939]: 2025-11-23 10:01:20.510 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:20 localhost nova_compute[280939]: 2025-11-23 10:01:20.517 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 307 B/s wr, 16 op/s Nov 23 05:01:21 localhost nova_compute[280939]: 2025-11-23 10:01:21.396 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:21 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:21.882 2 INFO neutron.agent.securitygroups_rpc [None req-da0cfd7f-9c49-4dab-bd78-effdeef69255 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:21 localhost nova_compute[280939]: 2025-11-23 10:01:21.884 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:22 localhost nova_compute[280939]: 2025-11-23 10:01:22.287 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:22 localhost nova_compute[280939]: 2025-11-23 10:01:22.329 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:01:23 Nov 23 05:01:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:01:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:01:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['manila_data', '.mgr', 'manila_metadata', 'backups', 'images', 'volumes', 'vms'] Nov 23 05:01:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:01:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:01:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:01:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:01:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:01:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:01:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:01:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 283 B/s wr, 14 op/s Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32) Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:01:23 localhost ceph-mgr[286671]: 
[pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:01:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Nov 23 05:01:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:01:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:01:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:01:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:01:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:01:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:01:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:01:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:01:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:01:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.618551) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083618626, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2299, "num_deletes": 259, "total_data_size": 2270840, "memory_usage": 2320416, "flush_reason": "Manual Compaction"} Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083630787, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 2197040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25593, "largest_seqno": 27891, "table_properties": {"data_size": 2187848, "index_size": 5697, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20399, "raw_average_key_size": 21, "raw_value_size": 2168831, "raw_average_value_size": 2254, "num_data_blocks": 249, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", 
"property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891914, "oldest_key_time": 1763891914, "file_creation_time": 1763892083, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 12284 microseconds, and 6077 cpu microseconds. Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.630840) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 2197040 bytes OK Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.630862) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.632720) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.632739) EVENT_LOG_v1 {"time_micros": 1763892083632733, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.632761) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2261196, prev total WAL file size 2261196, number of live WAL files 2. Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.633488) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. 
'7061786F73003132333030' seq:0, type:0; will stop at (end) Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(2145KB)], [45(16MB)] Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083633539, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 19079859, "oldest_snapshot_seqno": -1} Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 12594 keys, 16078732 bytes, temperature: kUnknown Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083723382, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 16078732, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16007010, "index_size": 39173, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 338630, "raw_average_key_size": 26, "raw_value_size": 15792307, "raw_average_value_size": 1253, "num_data_blocks": 1477, "num_entries": 12594, "num_filter_entries": 12594, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892083, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
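The flush (job 25) and manual compaction (job 26) events above carry enough numbers to re-derive the amplification figures RocksDB prints in the "compacted to:" summary that follows. A small sketch, using values copied from these EVENT_LOG_v1 lines:

import json

# Trimmed copies of the EVENT_LOG_v1 payloads printed by ceph-mon's embedded
# RocksDB in the lines above (job 25 flush table #47, job 26 compaction to table #48).
flush_event = json.loads('{"job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 2197040}')
compaction_started = json.loads('{"job": 26, "event": "compaction_started", "files_L0": [47], "files_L6": [45], "input_data_size": 19079859}')
compaction_output = json.loads('{"job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 16078732}')

l0_input = flush_event["file_size"]                 # bytes flushed from the memtable to L0
total_read = compaction_started["input_data_size"]  # L0 + L6 bytes read by the compaction
total_written = compaction_output["file_size"]      # bytes written to the new L6 table

print(f"write-amplify      ~= {total_written / l0_input:.1f}")                 # ~7.3
print(f"read-write-amplify ~= {(total_read + total_written) / l0_input:.1f}")  # ~16.0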
Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.723744) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 16078732 bytes Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.725785) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.1 rd, 178.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 16.1 +0.0 blob) out(15.3 +0.0 blob), read-write-amplify(16.0) write-amplify(7.3) OK, records in: 13127, records dropped: 533 output_compression: NoCompression Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.725816) EVENT_LOG_v1 {"time_micros": 1763892083725800, "job": 26, "event": "compaction_finished", "compaction_time_micros": 89936, "compaction_time_cpu_micros": 43657, "output_level": 6, "num_output_files": 1, "total_output_size": 16078732, "num_input_records": 13127, "num_output_records": 12594, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083726282, "job": 26, "event": "table_file_deletion", "file_number": 47} Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083728871, "job": 26, "event": "table_file_deletion", "file_number": 45} Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.633424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.728957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.728964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.728967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.728970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:23 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:23.728973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:24 localhost ovn_controller[153771]: 2025-11-23T10:01:24Z|00188|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 05:01:24 localhost ovn_controller[153771]: 2025-11-23T10:01:24Z|00189|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 05:01:24 localhost ovn_controller[153771]: 2025-11-23T10:01:24Z|00190|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 
05:01:24 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:24.135 2 INFO neutron.agent.securitygroups_rpc [None req-cef856d0-bc0f-421d-b6e3-a61c28d38f99 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:24 localhost nova_compute[280939]: 2025-11-23 10:01:24.137 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:24 localhost nova_compute[280939]: 2025-11-23 10:01:24.141 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:24 localhost nova_compute[280939]: 2025-11-23 10:01:24.167 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:24 localhost dnsmasq[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/addn_hosts - 0 addresses Nov 23 05:01:24 localhost dnsmasq-dhcp[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/host Nov 23 05:01:24 localhost dnsmasq-dhcp[315375]: read /var/lib/neutron/dhcp/a5dafb9f-79ee-48c9-a407-ff6081d49752/opts Nov 23 05:01:24 localhost podman[315735]: 2025-11-23 10:01:24.182125953 +0000 UTC m=+0.075892432 container kill 6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5dafb9f-79ee-48c9-a407-ff6081d49752, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0) Nov 23 05:01:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:01:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:01:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:01:24 localhost systemd[1]: tmp-crun.2LxXdA.mount: Deactivated successfully. 
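When the DHCP agent signals dnsmasq to reload, dnsmasq reports how many addresses it read from the per-network addn_hosts file (0 here, after the port removal above). A hedged sketch for inspecting that file directly, assuming the /var/lib/neutron/dhcp/<network_id>/ layout shown in the log; counting non-empty lines is only a rough proxy for dnsmasq's own count:

from pathlib import Path

# Path layout as seen in the log lines above; the UUID is the network from this log.
DHCP_DIR = Path("/var/lib/neutron/dhcp")


def count_addn_hosts(network_id):
    """Rough proxy for the "N addresses" figure dnsmasq logs on reload:
    the number of non-empty lines in the network's addn_hosts file."""
    addn_hosts = DHCP_DIR / network_id / "addn_hosts"
    if not addn_hosts.exists():
        return 0
    return sum(1 for line in addn_hosts.read_text().splitlines() if line.strip())


print(count_addn_hosts("a5dafb9f-79ee-48c9-a407-ff6081d49752"))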
Nov 23 05:01:24 localhost podman[315750]: 2025-11-23 10:01:24.308772353 +0000 UTC m=+0.097196797 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 05:01:24 localhost podman[315750]: 2025-11-23 10:01:24.348381409 +0000 UTC m=+0.136805843 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:01:24 localhost podman[315751]: 2025-11-23 10:01:24.360143781 +0000 UTC m=+0.142642753 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 05:01:24 localhost ovn_controller[153771]: 2025-11-23T10:01:24Z|00191|binding|INFO|Releasing lport 26b19460-c466-4094-bb15-ed2a3d1e848d from this chassis (sb_readonly=0) Nov 23 05:01:24 localhost ovn_controller[153771]: 2025-11-23T10:01:24Z|00192|binding|INFO|Setting lport 26b19460-c466-4094-bb15-ed2a3d1e848d down in Southbound Nov 23 05:01:24 localhost kernel: device tap26b19460-c4 left promiscuous mode Nov 23 05:01:24 localhost nova_compute[280939]: 2025-11-23 10:01:24.362 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:24 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:01:24 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:24.370 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-a5dafb9f-79ee-48c9-a407-ff6081d49752', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5dafb9f-79ee-48c9-a407-ff6081d49752', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f552ffbc49734cd69f687383dc092a2b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9579bb8c-2fff-4855-a0e2-47a12da6098f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=26b19460-c466-4094-bb15-ed2a3d1e848d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:24 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:24.372 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 26b19460-c466-4094-bb15-ed2a3d1e848d in datapath a5dafb9f-79ee-48c9-a407-ff6081d49752 unbound from our chassis#033[00m Nov 23 05:01:24 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:24.375 159415 
DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5dafb9f-79ee-48c9-a407-ff6081d49752, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:24 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:24.376 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[f75cd046-7aa3-4b01-bcce-7619d7414897]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:24 localhost nova_compute[280939]: 2025-11-23 10:01:24.382 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:24 localhost podman[315748]: 2025-11-23 10:01:24.398463132 +0000 UTC m=+0.189237088 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:01:24 localhost podman[315748]: 2025-11-23 10:01:24.409280726 +0000 UTC m=+0.200054692 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:01:24 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:01:24 localhost podman[315751]: 2025-11-23 10:01:24.450559659 +0000 UTC m=+0.233058561 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 05:01:24 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 05:01:25 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:25.109 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:25 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:25.111 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m Nov 23 05:01:25 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:25.114 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:25 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:25.115 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8628edab-edef-4a82-b03e-8ce74125f78c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:25 localhost systemd[1]: tmp-crun.i9nzvi.mount: Deactivated successfully. 
Nov 23 05:01:25 localhost podman[315843]: 2025-11-23 10:01:25.3904843 +0000 UTC m=+0.064827230 container kill e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0393bef-db47-4423-a9c1-5ac7043e4ec3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 05:01:25 localhost dnsmasq[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/addn_hosts - 0 addresses Nov 23 05:01:25 localhost dnsmasq-dhcp[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/host Nov 23 05:01:25 localhost dnsmasq-dhcp[315089]: read /var/lib/neutron/dhcp/e0393bef-db47-4423-a9c1-5ac7043e4ec3/opts Nov 23 05:01:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 20 op/s Nov 23 05:01:25 localhost nova_compute[280939]: 2025-11-23 10:01:25.480 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:25 localhost nova_compute[280939]: 2025-11-23 10:01:25.603 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:25 localhost ovn_controller[153771]: 2025-11-23T10:01:25Z|00193|binding|INFO|Releasing lport 5931a702-9606-4ddd-aa2b-e6c777bc15ed from this chassis (sb_readonly=0) Nov 23 05:01:25 localhost kernel: device tap5931a702-96 left promiscuous mode Nov 23 05:01:25 localhost ovn_controller[153771]: 2025-11-23T10:01:25Z|00194|binding|INFO|Setting lport 5931a702-9606-4ddd-aa2b-e6c777bc15ed down in Southbound Nov 23 05:01:25 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:25.614 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-e0393bef-db47-4423-a9c1-5ac7043e4ec3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0393bef-db47-4423-a9c1-5ac7043e4ec3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79509bc833494f3598e01347dc55dea9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c508bc5-7545-4483-b6f3-b3aaeedc94db, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5931a702-9606-4ddd-aa2b-e6c777bc15ed) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m 
Nov 23 05:01:25 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:25.616 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 5931a702-9606-4ddd-aa2b-e6c777bc15ed in datapath e0393bef-db47-4423-a9c1-5ac7043e4ec3 unbound from our chassis#033[00m Nov 23 05:01:25 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:25.619 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e0393bef-db47-4423-a9c1-5ac7043e4ec3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:25 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:25.620 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[aac51fb3-a649-418f-a0eb-c502e3890db3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:25 localhost nova_compute[280939]: 2025-11-23 10:01:25.628 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:25 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:25.876 2 INFO neutron.agent.securitygroups_rpc [None req-2cdcfc76-bcd0-4cc8-b710-4dffd4e83ae1 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:26 localhost podman[315883]: 2025-11-23 10:01:26.47984162 +0000 UTC m=+0.056625127 container kill e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0393bef-db47-4423-a9c1-5ac7043e4ec3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:01:26 localhost dnsmasq[315089]: exiting on receipt of SIGTERM Nov 23 05:01:26 localhost systemd[1]: libpod-e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718.scope: Deactivated successfully. 
Nov 23 05:01:26 localhost podman[315898]: 2025-11-23 10:01:26.549874679 +0000 UTC m=+0.054820391 container died e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0393bef-db47-4423-a9c1-5ac7043e4ec3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:01:26 localhost podman[315898]: 2025-11-23 10:01:26.585282012 +0000 UTC m=+0.090227684 container cleanup e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0393bef-db47-4423-a9c1-5ac7043e4ec3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:01:26 localhost systemd[1]: libpod-conmon-e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718.scope: Deactivated successfully. Nov 23 05:01:26 localhost podman[315899]: 2025-11-23 10:01:26.626888374 +0000 UTC m=+0.126714768 container remove e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0393bef-db47-4423-a9c1-5ac7043e4ec3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:01:26 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:26.649 262301 INFO neutron.agent.dhcp.agent [None req-dce30995-6446-4e46-8459-6b3977cf5214 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:26 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:26.719 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:26 localhost nova_compute[280939]: 2025-11-23 10:01:26.941 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:26 localhost nova_compute[280939]: 2025-11-23 10:01:26.945 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:27 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:27.301 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:25Z, description=, device_id=2d611b4e-6541-4913-986b-396d76162510, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=5ac6c23b-f306-4745-a305-a5cf5a58899d, ip_allocation=immediate, mac_address=fa:16:3e:ec:d2:b8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:01:10Z, description=, dns_domain=, id=b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-1535287174-network, port_security_enabled=True, project_id=c05c08b4d8794ff1b33e7233ec64d938, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58979, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1479, status=ACTIVE, subnets=['cdb49f60-f6ed-4acb-8e01-23df5c58f2e2'], tags=[], tenant_id=c05c08b4d8794ff1b33e7233ec64d938, updated_at=2025-11-23T10:01:12Z, vlan_transparent=None, network_id=b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, port_security_enabled=False, project_id=c05c08b4d8794ff1b33e7233ec64d938, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1552, status=DOWN, tags=[], tenant_id=c05c08b4d8794ff1b33e7233ec64d938, updated_at=2025-11-23T10:01:26Z on network b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e#033[00m Nov 23 05:01:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s Nov 23 05:01:27 localhost systemd[1]: var-lib-containers-storage-overlay-a317098a3a677ef0882aa803700716f18b24019cbed1cb214fce8aec66641774-merged.mount: Deactivated successfully. Nov 23 05:01:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e97b6ec56ed7cc589e428cf54e3db3b37bd6520613ce8feeded84700642d8718-userdata-shm.mount: Deactivated successfully. Nov 23 05:01:27 localhost systemd[1]: run-netns-qdhcp\x2de0393bef\x2ddb47\x2d4423\x2da9c1\x2d5ac7043e4ec3.mount: Deactivated successfully. Nov 23 05:01:27 localhost dnsmasq[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/addn_hosts - 1 addresses Nov 23 05:01:27 localhost dnsmasq-dhcp[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/host Nov 23 05:01:27 localhost dnsmasq-dhcp[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/opts Nov 23 05:01:27 localhost podman[315942]: 2025-11-23 10:01:27.536758689 +0000 UTC m=+0.080126861 container kill ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:01:27 localhost systemd[1]: tmp-crun.LVPdzp.mount: Deactivated successfully. 
Nov 23 05:01:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:27 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:27.750 262301 INFO neutron.agent.dhcp.agent [None req-b5ed1451-f131-420d-b94d-56875c565726 - - - - - -] DHCP configuration for ports {'5ac6c23b-f306-4745-a305-a5cf5a58899d'} is completed#033[00m Nov 23 05:01:28 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:28.338 2 INFO neutron.agent.securitygroups_rpc [None req-ac3982b0-5538-471d-bc5b-e6d4442ddedd fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 192 MiB data, 798 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s Nov 23 05:01:29 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:29.652 2 INFO neutron.agent.securitygroups_rpc [None req-efb808b5-6542-451b-803e-d2906345bde2 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:30 localhost nova_compute[280939]: 2025-11-23 10:01:30.525 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:30 localhost sshd[315964]: main: sshd: ssh-rsa algorithm is disabled Nov 23 05:01:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:01:31 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1011787999' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:01:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:01:31 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1011787999' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:01:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 192 MiB data, 798 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s Nov 23 05:01:31 localhost nova_compute[280939]: 2025-11-23 10:01:31.944 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:01:32 localhost systemd[1]: tmp-crun.aEPtFw.mount: Deactivated successfully. 
Nov 23 05:01:32 localhost podman[315965]: 2025-11-23 10:01:32.898830124 +0000 UTC m=+0.080444691 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 23 05:01:32 localhost podman[315965]: 2025-11-23 10:01:32.940284822 +0000 UTC m=+0.121899379 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 
'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, version=9.6) Nov 23 05:01:32 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 05:01:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 192 MiB data, 798 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s Nov 23 05:01:33 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:33.733 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:33 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:33.735 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m Nov 23 05:01:33 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:33.738 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:33 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:33.738 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[f5abaf97-0260-4e7a-84f2-d0b2bad9155f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:34 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:34.666 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:25Z, description=, device_id=2d611b4e-6541-4913-986b-396d76162510, device_owner=network:router_interface, 
dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5ac6c23b-f306-4745-a305-a5cf5a58899d, ip_allocation=immediate, mac_address=fa:16:3e:ec:d2:b8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:01:10Z, description=, dns_domain=, id=b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-1535287174-network, port_security_enabled=True, project_id=c05c08b4d8794ff1b33e7233ec64d938, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58979, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1479, status=ACTIVE, subnets=['cdb49f60-f6ed-4acb-8e01-23df5c58f2e2'], tags=[], tenant_id=c05c08b4d8794ff1b33e7233ec64d938, updated_at=2025-11-23T10:01:12Z, vlan_transparent=None, network_id=b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, port_security_enabled=False, project_id=c05c08b4d8794ff1b33e7233ec64d938, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1552, status=DOWN, tags=[], tenant_id=c05c08b4d8794ff1b33e7233ec64d938, updated_at=2025-11-23T10:01:26Z on network b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e#033[00m Nov 23 05:01:34 localhost dnsmasq[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/addn_hosts - 1 addresses Nov 23 05:01:34 localhost dnsmasq-dhcp[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/host Nov 23 05:01:34 localhost podman[316002]: 2025-11-23 10:01:34.858078446 +0000 UTC m=+0.059946780 container kill ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:01:34 localhost dnsmasq-dhcp[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/opts Nov 23 05:01:35 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:35.085 262301 INFO neutron.agent.dhcp.agent [None req-b62dbd86-6df6-41b6-bff0-1ebd2adab5db - - - - - -] DHCP configuration for ports {'5ac6c23b-f306-4745-a305-a5cf5a58899d'} is completed#033[00m Nov 23 05:01:35 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:35.207 2 INFO neutron.agent.securitygroups_rpc [None req-9e0f6898-e450-4b10-a9b6-2526799f670d fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 57 op/s Nov 23 05:01:35 localhost nova_compute[280939]: 2025-11-23 10:01:35.565 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:36 localhost dnsmasq[315375]: exiting on receipt of SIGTERM Nov 23 05:01:36 localhost systemd[1]: tmp-crun.gx2cJA.mount: Deactivated successfully. 
Nov 23 05:01:36 localhost podman[316038]: 2025-11-23 10:01:36.157769241 +0000 UTC m=+0.070676612 container kill 6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5dafb9f-79ee-48c9-a407-ff6081d49752, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2) Nov 23 05:01:36 localhost systemd[1]: libpod-6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb.scope: Deactivated successfully. Nov 23 05:01:36 localhost podman[316052]: 2025-11-23 10:01:36.234057073 +0000 UTC m=+0.058647580 container died 6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5dafb9f-79ee-48c9-a407-ff6081d49752, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118) Nov 23 05:01:36 localhost podman[316052]: 2025-11-23 10:01:36.278787572 +0000 UTC m=+0.103378019 container cleanup 6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5dafb9f-79ee-48c9-a407-ff6081d49752, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:01:36 localhost systemd[1]: libpod-conmon-6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb.scope: Deactivated successfully. 
Nov 23 05:01:36 localhost podman[316054]: 2025-11-23 10:01:36.316759442 +0000 UTC m=+0.132879038 container remove 6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5dafb9f-79ee-48c9-a407-ff6081d49752, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:01:36 localhost openstack_network_exporter[241732]: ERROR 10:01:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:01:36 localhost openstack_network_exporter[241732]: ERROR 10:01:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:01:36 localhost openstack_network_exporter[241732]: ERROR 10:01:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:01:36 localhost openstack_network_exporter[241732]: ERROR 10:01:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:01:36 localhost openstack_network_exporter[241732]: Nov 23 05:01:36 localhost openstack_network_exporter[241732]: ERROR 10:01:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:01:36 localhost openstack_network_exporter[241732]: Nov 23 05:01:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:36.885 262301 INFO neutron.agent.dhcp.agent [None req-9e987b00-c8f3-4738-9dd4-514dbbf7ce79 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:36 localhost nova_compute[280939]: 2025-11-23 10:01:36.948 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:01:37 localhost systemd[1]: tmp-crun.jWPIN6.mount: Deactivated successfully. Nov 23 05:01:37 localhost systemd[1]: var-lib-containers-storage-overlay-ff224528243f060a2343a1e09e777d34de65ff5333323ac25aed8e5dce1b4159-merged.mount: Deactivated successfully. Nov 23 05:01:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6af372410a6e69cbcc2f44ed005c2be0b99ce9fb6062b75a258aa737a8a8baeb-userdata-shm.mount: Deactivated successfully. Nov 23 05:01:37 localhost systemd[1]: run-netns-qdhcp\x2da5dafb9f\x2d79ee\x2d48c9\x2da407\x2dff6081d49752.mount: Deactivated successfully. Nov 23 05:01:37 localhost systemd[1]: tmp-crun.VHSEnx.mount: Deactivated successfully. 
Nov 23 05:01:37 localhost podman[316101]: 2025-11-23 10:01:37.174227362 +0000 UTC m=+0.099653304 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:01:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:37.181 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:37 localhost podman[316101]: 2025-11-23 10:01:37.20851903 +0000 UTC m=+0.133944982 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible) Nov 23 05:01:37 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:01:37 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:37.246 2 INFO neutron.agent.securitygroups_rpc [None req-0d1bae38-c3b0-4ed4-ba2a-9b0a25fae4ab 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:37 localhost podman[316100]: 2025-11-23 10:01:37.263564146 +0000 UTC m=+0.195240540 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 05:01:37 localhost podman[316100]: 2025-11-23 10:01:37.296307286 +0000 UTC m=+0.227983670 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 05:01:37 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 05:01:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 50 op/s Nov 23 05:01:37 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:37.495 2 INFO neutron.agent.securitygroups_rpc [None req-b60c8ac0-a755-4981-a84e-d95a726ca718 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:01:37 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:01:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:01:37 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:01:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:01:38 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:01:38 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev a2b51553-df0e-4e84-a935-badc5d0f6147 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:01:38 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev a2b51553-df0e-4e84-a935-badc5d0f6147 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:01:38 localhost ceph-mgr[286671]: [progress INFO root] Completed event a2b51553-df0e-4e84-a935-badc5d0f6147 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:01:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:01:38 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:01:38 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:01:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:01:38 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:01:38 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:38.569 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:38 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:01:38 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' 
entity='mgr.np0005532584.naxwxy' Nov 23 05:01:38 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:01:38 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:38.890 2 INFO neutron.agent.securitygroups_rpc [None req-e4114a4e-94f5-47df-bc1b-6dc35d238db6 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 50 op/s Nov 23 05:01:39 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:39.780 2 INFO neutron.agent.securitygroups_rpc [None req-3c7abe8e-704d-455a-8480-13cb405babe1 8d3ccb2bccdf4a12bb3b492992930601 6de614a4ddfd4f868264e9fc1dee856a - - default default] Security group member updated ['980b5aab-daf0-44c4-8e04-21e80ebf2d43']#033[00m Nov 23 05:01:39 localhost ovn_controller[153771]: 2025-11-23T10:01:39Z|00195|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 05:01:39 localhost ovn_controller[153771]: 2025-11-23T10:01:39Z|00196|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 05:01:39 localhost ovn_controller[153771]: 2025-11-23T10:01:39Z|00197|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0 Nov 23 05:01:39 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:39.824 2 INFO neutron.agent.securitygroups_rpc [None req-3839d769-9cff-43fb-b0be-05026e050e30 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:39 localhost nova_compute[280939]: 2025-11-23 10:01:39.837 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:39 localhost nova_compute[280939]: 2025-11-23 10:01:39.842 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:39 localhost nova_compute[280939]: 2025-11-23 10:01:39.871 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:39 localhost nova_compute[280939]: 2025-11-23 10:01:39.941 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:39 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:39.998 2 INFO neutron.agent.securitygroups_rpc [None req-3c7abe8e-704d-455a-8480-13cb405babe1 8d3ccb2bccdf4a12bb3b492992930601 6de614a4ddfd4f868264e9fc1dee856a - - default default] Security group member updated ['980b5aab-daf0-44c4-8e04-21e80ebf2d43']#033[00m Nov 23 05:01:40 localhost nova_compute[280939]: 2025-11-23 10:01:40.567 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:40 localhost nova_compute[280939]: 2025-11-23 10:01:40.773 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:41 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:41.011 2 INFO neutron.agent.securitygroups_rpc [None req-7b7ae10d-5331-41d7-96b9-7aafc7180edd 8d3ccb2bccdf4a12bb3b492992930601 
6de614a4ddfd4f868264e9fc1dee856a - - default default] Security group member updated ['980b5aab-daf0-44c4-8e04-21e80ebf2d43']#033[00m Nov 23 05:01:41 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:41.032 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:41 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:41.353 2 INFO neutron.agent.securitygroups_rpc [None req-9181f901-83e9-4198-8a73-d33dcd9ef0fc 8d3ccb2bccdf4a12bb3b492992930601 6de614a4ddfd4f868264e9fc1dee856a - - default default] Security group member updated ['980b5aab-daf0-44c4-8e04-21e80ebf2d43']#033[00m Nov 23 05:01:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 767 B/s wr, 19 op/s Nov 23 05:01:41 localhost nova_compute[280939]: 2025-11-23 10:01:41.567 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:41 localhost nova_compute[280939]: 2025-11-23 10:01:41.592 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:41 localhost nova_compute[280939]: 2025-11-23 10:01:41.650 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:41 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:41.902 2 INFO neutron.agent.securitygroups_rpc [None req-7527e00c-fa2d-4f7e-82d1-1aa51478130d fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:41 localhost nova_compute[280939]: 2025-11-23 10:01:41.949 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:42 localhost systemd[1]: virtsecretd.service: Deactivated successfully. 
Nov 23 05:01:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:42 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:42.735 2 INFO neutron.agent.securitygroups_rpc [None req-02caec08-4fe5-4eae-8f59-0f41df22086a fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:43 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:43.303 2 INFO neutron.agent.securitygroups_rpc [None req-7ad0c56d-3950-498c-8f5a-35533112ee18 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m Nov 23 05:01:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 767 B/s wr, 19 op/s Nov 23 05:01:43 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:43.657 2 INFO neutron.agent.securitygroups_rpc [None req-6a3f8f10-e09b-4d25-8141-9c205b1a054c fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m Nov 23 05:01:44 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:44.225 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:44 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:44.227 159415 
INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m Nov 23 05:01:44 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:44.230 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:44 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:44.231 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[dffa7d49-2218-4a8a-90cc-9645f7f995d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:01:44 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2622836700' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:01:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:01:44 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2622836700' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:01:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:45.182 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:45.183 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:01:45 localhost nova_compute[280939]: 2025-11-23 10:01:45.233 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:45 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:45.335 2 INFO neutron.agent.securitygroups_rpc [None req-13ffd066-3677-4907-b0b4-8f11a42c5f7c a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m Nov 23 05:01:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 1.3 KiB/s wr, 34 op/s Nov 23 05:01:45 localhost nova_compute[280939]: 2025-11-23 10:01:45.569 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:45 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:45.916 2 INFO neutron.agent.securitygroups_rpc [None 
req-b9b41fb6-e2e7-4179-8f08-7b97a3dfa0c7 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:46 localhost dnsmasq[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/addn_hosts - 0 addresses Nov 23 05:01:46 localhost dnsmasq-dhcp[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/host Nov 23 05:01:46 localhost podman[316226]: 2025-11-23 10:01:46.284581422 +0000 UTC m=+0.061714684 container kill ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:01:46 localhost dnsmasq-dhcp[315616]: read /var/lib/neutron/dhcp/b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e/opts Nov 23 05:01:46 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:46.289 2 INFO neutron.agent.securitygroups_rpc [None req-f7529e1b-3bf3-41ad-a49e-e39cf58ffefa ca36e3c530cd4996add76add048683eb 461e34582027490ebd34279a384a57b1 - - default default] Security group rule updated ['ce47e028-f950-480c-a113-98c15c008254']#033[00m Nov 23 05:01:46 localhost ovn_controller[153771]: 2025-11-23T10:01:46Z|00198|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 05:01:46 localhost ovn_controller[153771]: 2025-11-23T10:01:46Z|00199|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 05:01:46 localhost ovn_controller[153771]: 2025-11-23T10:01:46Z|00200|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 05:01:46 localhost nova_compute[280939]: 2025-11-23 10:01:46.408 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:46 localhost nova_compute[280939]: 2025-11-23 10:01:46.410 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:46 localhost nova_compute[280939]: 2025-11-23 10:01:46.427 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:46 localhost nova_compute[280939]: 2025-11-23 10:01:46.628 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:46 localhost ovn_controller[153771]: 2025-11-23T10:01:46Z|00201|binding|INFO|Releasing lport 19441b64-7478-4dca-a309-b5edfa93c3de from this chassis (sb_readonly=0) Nov 23 05:01:46 localhost kernel: device tap19441b64-74 left promiscuous mode Nov 23 05:01:46 localhost ovn_controller[153771]: 2025-11-23T10:01:46Z|00202|binding|INFO|Setting lport 19441b64-7478-4dca-a309-b5edfa93c3de down in Southbound Nov 23 05:01:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:46.637 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c05c08b4d8794ff1b33e7233ec64d938', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb7bb7f2-4791-41f8-bcdb-c6045c345937, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=19441b64-7478-4dca-a309-b5edfa93c3de) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:46.639 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 19441b64-7478-4dca-a309-b5edfa93c3de in datapath b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e unbound from our chassis#033[00m Nov 23 05:01:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:46.642 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:46.643 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[0501db33-f815-4aa7-84b3-955eb7de1c51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:46 localhost nova_compute[280939]: 2025-11-23 10:01:46.651 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:46 localhost nova_compute[280939]: 2025-11-23 10:01:46.652 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:46 localhost nova_compute[280939]: 2025-11-23 10:01:46.953 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:47 localhost podman[239764]: time="2025-11-23T10:01:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:01:47 localhost podman[239764]: @ - - [23/Nov/2025:10:01:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156323 "" "Go-http-client/1.1" Nov 23 05:01:47 localhost podman[239764]: @ - - [23/Nov/2025:10:01:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19214 "" "Go-http-client/1.1" Nov 23 05:01:47 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:47.251 2 INFO neutron.agent.securitygroups_rpc [None req-cfd83c6e-245c-4505-a8d2-3c8b7de44cbd 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 
05:01:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Nov 23 05:01:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0. Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.604050) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49 Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107604122, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 506, "num_deletes": 257, "total_data_size": 277228, "memory_usage": 287672, "flush_reason": "Manual Compaction"} Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107608406, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 271753, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27893, "largest_seqno": 28397, "table_properties": {"data_size": 269139, "index_size": 661, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6592, "raw_average_key_size": 18, "raw_value_size": 263608, "raw_average_value_size": 740, "num_data_blocks": 30, "num_entries": 356, "num_filter_entries": 356, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892084, "oldest_key_time": 1763892084, "file_creation_time": 1763892107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}} Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 4398 microseconds, and 1607 cpu microseconds. Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.608450) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 271753 bytes OK Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.608471) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.610664) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.610685) EVENT_LOG_v1 {"time_micros": 1763892107610679, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.610705) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 274281, prev total WAL file size 274605, number of live WAL files 2. Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.611582) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303234' seq:72057594037927935, type:22 .. '6C6F676D0034323737' seq:0, type:0; will stop at (end) Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(265KB)], [48(15MB)] Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107611624, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 16350485, "oldest_snapshot_seqno": -1} Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 12420 keys, 16030627 bytes, temperature: kUnknown Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107692261, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 16030627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15959863, "index_size": 38599, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 335984, "raw_average_key_size": 27, "raw_value_size": 15748095, "raw_average_value_size": 1267, "num_data_blocks": 1448, "num_entries": 12420, "num_filter_entries": 12420, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.692571) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 16030627 bytes Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.694295) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.5 rd, 198.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 15.3 +0.0 blob) out(15.3 +0.0 blob), read-write-amplify(119.2) write-amplify(59.0) OK, records in: 12950, records dropped: 530 output_compression: NoCompression Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.694324) EVENT_LOG_v1 {"time_micros": 1763892107694312, "job": 28, "event": "compaction_finished", "compaction_time_micros": 80753, "compaction_time_cpu_micros": 48098, "output_level": 6, "num_output_files": 1, "total_output_size": 16030627, "num_input_records": 12950, "num_output_records": 12420, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107694585, "job": 28, "event": "table_file_deletion", "file_number": 50} Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107696844, "job": 28, "event": "table_file_deletion", "file_number": 48} Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.611482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.696954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.696960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.696964) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.696967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:01:47.696972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:01:48 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:48.185 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:01:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:01:48 localhost podman[316250]: 2025-11-23 10:01:48.899375228 +0000 UTC m=+0.084232008 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:01:48 localhost podman[316250]: 2025-11-23 10:01:48.904432284 +0000 UTC m=+0.089289044 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Nov 23 05:01:48 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:01:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s Nov 23 05:01:50 localhost nova_compute[280939]: 2025-11-23 10:01:50.614 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:51 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:51.007 2 INFO neutron.agent.securitygroups_rpc [None req-fcaf6d85-3067-425b-90fc-65fb17c22c5c 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:51 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:51.159 262301 INFO neutron.agent.linux.ip_lib [None req-dae0ab02-daaa-4133-9f2c-c327dc66b3d6 - - - - - -] Device tapa6f5f4d7-ad cannot be used as it has no MAC address#033[00m Nov 23 05:01:51 localhost nova_compute[280939]: 2025-11-23 10:01:51.180 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:51 localhost kernel: device tapa6f5f4d7-ad entered promiscuous mode Nov 23 05:01:51 localhost nova_compute[280939]: 2025-11-23 10:01:51.189 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:51 localhost ovn_controller[153771]: 2025-11-23T10:01:51Z|00203|binding|INFO|Claiming lport a6f5f4d7-adfa-4712-9dae-1a1d5113dae2 for this chassis. Nov 23 05:01:51 localhost ovn_controller[153771]: 2025-11-23T10:01:51Z|00204|binding|INFO|a6f5f4d7-adfa-4712-9dae-1a1d5113dae2: Claiming unknown Nov 23 05:01:51 localhost NetworkManager[5966]: [1763892111.1912] manager: (tapa6f5f4d7-ad): new Generic device (/org/freedesktop/NetworkManager/Devices/34) Nov 23 05:01:51 localhost systemd-udevd[316278]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 05:01:51 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:51.214 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-7cde8cc3-d463-48ac-af00-6f186ef6ec86', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cde8cc3-d463-48ac-af00-6f186ef6ec86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6de614a4ddfd4f868264e9fc1dee856a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab606644-22b5-478f-b56a-7788efe515bb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a6f5f4d7-adfa-4712-9dae-1a1d5113dae2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:51 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:51.216 159415 INFO neutron.agent.ovn.metadata.agent [-] Port a6f5f4d7-adfa-4712-9dae-1a1d5113dae2 in datapath 7cde8cc3-d463-48ac-af00-6f186ef6ec86 bound to our chassis#033[00m Nov 23 05:01:51 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:51.218 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7cde8cc3-d463-48ac-af00-6f186ef6ec86 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:01:51 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:51.219 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[ab077354-117c-4ae2-98f2-b890b9982dec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:51 localhost journal[229336]: ethtool ioctl error on tapa6f5f4d7-ad: No such device Nov 23 05:01:51 localhost journal[229336]: ethtool ioctl error on tapa6f5f4d7-ad: No such device Nov 23 05:01:51 localhost ovn_controller[153771]: 2025-11-23T10:01:51Z|00205|binding|INFO|Setting lport a6f5f4d7-adfa-4712-9dae-1a1d5113dae2 ovn-installed in OVS Nov 23 05:01:51 localhost journal[229336]: ethtool ioctl error on tapa6f5f4d7-ad: No such device Nov 23 05:01:51 localhost ovn_controller[153771]: 2025-11-23T10:01:51Z|00206|binding|INFO|Setting lport a6f5f4d7-adfa-4712-9dae-1a1d5113dae2 up in Southbound Nov 23 05:01:51 localhost nova_compute[280939]: 2025-11-23 10:01:51.229 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:51 localhost nova_compute[280939]: 2025-11-23 10:01:51.232 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:51 localhost journal[229336]: ethtool ioctl error on tapa6f5f4d7-ad: No such device Nov 23 05:01:51 localhost journal[229336]: ethtool ioctl error on tapa6f5f4d7-ad: No such device 
Nov 23 05:01:51 localhost journal[229336]: ethtool ioctl error on tapa6f5f4d7-ad: No such device Nov 23 05:01:51 localhost journal[229336]: ethtool ioctl error on tapa6f5f4d7-ad: No such device Nov 23 05:01:51 localhost journal[229336]: ethtool ioctl error on tapa6f5f4d7-ad: No such device Nov 23 05:01:51 localhost nova_compute[280939]: 2025-11-23 10:01:51.268 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:51 localhost nova_compute[280939]: 2025-11-23 10:01:51.297 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s Nov 23 05:01:51 localhost nova_compute[280939]: 2025-11-23 10:01:51.991 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:52 localhost podman[316349]: Nov 23 05:01:52 localhost podman[316349]: 2025-11-23 10:01:52.122404996 +0000 UTC m=+0.089426588 container create 324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7cde8cc3-d463-48ac-af00-6f186ef6ec86, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:01:52 localhost systemd[1]: Started libpod-conmon-324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1.scope. Nov 23 05:01:52 localhost podman[316349]: 2025-11-23 10:01:52.076040657 +0000 UTC m=+0.043062279 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:01:52 localhost systemd[1]: Started libcrun container. 
Nov 23 05:01:52 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:52.181 2 INFO neutron.agent.securitygroups_rpc [None req-b422b6dc-3b22-4323-bd62-6ed72320b39a 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6ea51e18713b2aadf2caa4b1e2f4905402e42bcb4c83fb3e2e969e5f3c97d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:01:52 localhost podman[316349]: 2025-11-23 10:01:52.191803966 +0000 UTC m=+0.158825588 container init 324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7cde8cc3-d463-48ac-af00-6f186ef6ec86, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 05:01:52 localhost podman[316349]: 2025-11-23 10:01:52.201273249 +0000 UTC m=+0.168294881 container start 324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7cde8cc3-d463-48ac-af00-6f186ef6ec86, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 05:01:52 localhost dnsmasq[316367]: started, version 2.85 cachesize 150 Nov 23 05:01:52 localhost dnsmasq[316367]: DNS service limited to local subnets Nov 23 05:01:52 localhost dnsmasq[316367]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:01:52 localhost dnsmasq[316367]: warning: no upstream servers configured Nov 23 05:01:52 localhost dnsmasq-dhcp[316367]: DHCP, static leases only on 10.100.0.16, lease time 1d Nov 23 05:01:52 localhost dnsmasq[316367]: read /var/lib/neutron/dhcp/7cde8cc3-d463-48ac-af00-6f186ef6ec86/addn_hosts - 0 addresses Nov 23 05:01:52 localhost dnsmasq-dhcp[316367]: read /var/lib/neutron/dhcp/7cde8cc3-d463-48ac-af00-6f186ef6ec86/host Nov 23 05:01:52 localhost dnsmasq-dhcp[316367]: read /var/lib/neutron/dhcp/7cde8cc3-d463-48ac-af00-6f186ef6ec86/opts Nov 23 05:01:52 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:52.528 262301 INFO neutron.agent.dhcp.agent [None req-dab3e392-c7b1-41d7-bc37-b607370abd27 - - - - - -] DHCP configuration for ports {'fd23a354-63eb-4ed5-bad3-8d98cdb65ceb'} is completed#033[00m Nov 23 05:01:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:52 localhost dnsmasq[315616]: exiting on receipt of SIGTERM Nov 23 05:01:52 localhost podman[316386]: 2025-11-23 10:01:52.886186678 +0000 UTC m=+0.062769376 container kill ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:01:52 localhost systemd[1]: libpod-ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8.scope: Deactivated successfully. Nov 23 05:01:52 localhost podman[316401]: 2025-11-23 10:01:52.951402818 +0000 UTC m=+0.053971495 container died ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:01:52 localhost podman[316401]: 2025-11-23 10:01:52.978014099 +0000 UTC m=+0.080582736 container cleanup ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:01:52 localhost systemd[1]: libpod-conmon-ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8.scope: Deactivated successfully. Nov 23 05:01:53 localhost podman[316408]: 2025-11-23 10:01:53.031942492 +0000 UTC m=+0.120541368 container remove ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b90f5d43-bdfa-4d02-95fe-ef1d0237fe1e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true) Nov 23 05:01:53 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:53.060 262301 INFO neutron.agent.dhcp.agent [None req-fa5782e8-2848-4388-9447-566cc1f0a52c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:53 localhost systemd[1]: var-lib-containers-storage-overlay-31e42cbed5fb97c672d5a9c663ea68e2e3bb925e6a2acd1f1331d2d18766e9dc-merged.mount: Deactivated successfully. Nov 23 05:01:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ada970b224a0587689a6b9067fe2dab29d5c68433a827c2c89780dd607b700b8-userdata-shm.mount: Deactivated successfully. Nov 23 05:01:53 localhost systemd[1]: run-netns-qdhcp\x2db90f5d43\x2dbdfa\x2d4d02\x2d95fe\x2def1d0237fe1e.mount: Deactivated successfully. 
Nov 23 05:01:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:01:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:01:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:01:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:01:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:01:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:01:53 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:53.400 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s Nov 23 05:01:53 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:53.538 2 INFO neutron.agent.securitygroups_rpc [None req-76b8a8df-4b24-4290-be38-6011d037c5af a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m Nov 23 05:01:53 localhost nova_compute[280939]: 2025-11-23 10:01:53.766 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:53 localhost dnsmasq[316367]: exiting on receipt of SIGTERM Nov 23 05:01:53 localhost podman[316446]: 2025-11-23 10:01:53.894158698 +0000 UTC m=+0.064624494 container kill 324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7cde8cc3-d463-48ac-af00-6f186ef6ec86, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 05:01:53 localhost systemd[1]: libpod-324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1.scope: Deactivated successfully. 
Nov 23 05:01:53 localhost podman[316459]: 2025-11-23 10:01:53.961469193 +0000 UTC m=+0.057760682 container died 324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7cde8cc3-d463-48ac-af00-6f186ef6ec86, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:01:53 localhost podman[316459]: 2025-11-23 10:01:53.993560632 +0000 UTC m=+0.089852061 container cleanup 324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7cde8cc3-d463-48ac-af00-6f186ef6ec86, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 05:01:53 localhost systemd[1]: libpod-conmon-324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1.scope: Deactivated successfully. Nov 23 05:01:54 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:54.012 2 INFO neutron.agent.securitygroups_rpc [None req-988e37e0-0049-4c8c-be99-30016558f502 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m Nov 23 05:01:54 localhost podman[316466]: 2025-11-23 10:01:54.064226292 +0000 UTC m=+0.143567508 container remove 324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7cde8cc3-d463-48ac-af00-6f186ef6ec86, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 05:01:54 localhost nova_compute[280939]: 2025-11-23 10:01:54.076 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:54 localhost ovn_controller[153771]: 2025-11-23T10:01:54Z|00207|binding|INFO|Releasing lport a6f5f4d7-adfa-4712-9dae-1a1d5113dae2 from this chassis (sb_readonly=0) Nov 23 05:01:54 localhost ovn_controller[153771]: 2025-11-23T10:01:54Z|00208|binding|INFO|Setting lport a6f5f4d7-adfa-4712-9dae-1a1d5113dae2 down in Southbound Nov 23 05:01:54 localhost kernel: device tapa6f5f4d7-ad left promiscuous mode Nov 23 05:01:54 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:54.086 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], 
requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-7cde8cc3-d463-48ac-af00-6f186ef6ec86', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7cde8cc3-d463-48ac-af00-6f186ef6ec86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6de614a4ddfd4f868264e9fc1dee856a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab606644-22b5-478f-b56a-7788efe515bb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a6f5f4d7-adfa-4712-9dae-1a1d5113dae2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:01:54 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:54.087 159415 INFO neutron.agent.ovn.metadata.agent [-] Port a6f5f4d7-adfa-4712-9dae-1a1d5113dae2 in datapath 7cde8cc3-d463-48ac-af00-6f186ef6ec86 unbound from our chassis#033[00m Nov 23 05:01:54 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:54.090 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7cde8cc3-d463-48ac-af00-6f186ef6ec86, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:01:54 localhost ovn_metadata_agent[159410]: 2025-11-23 10:01:54.090 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[56ce3b49-e03a-4d04-9d54-d2dc3261d50e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:01:54 localhost nova_compute[280939]: 2025-11-23 10:01:54.104 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:54 localhost systemd[1]: var-lib-containers-storage-overlay-ea6ea51e18713b2aadf2caa4b1e2f4905402e42bcb4c83fb3e2e969e5f3c97d6-merged.mount: Deactivated successfully. Nov 23 05:01:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-324d21517edf6f3a6802f0d9d141c8cf052e23ffda1cf3c6376d1756034840a1-userdata-shm.mount: Deactivated successfully. Nov 23 05:01:54 localhost systemd[1]: run-netns-qdhcp\x2d7cde8cc3\x2dd463\x2d48ac\x2daf00\x2d6f186ef6ec86.mount: Deactivated successfully. Nov 23 05:01:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:54.276 262301 INFO neutron.agent.dhcp.agent [None req-d6bd48af-affd-4006-972d-dc91e6e7c44c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:54.607 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 05:01:54 localhost podman[316490]: 2025-11-23 10:01:54.884012609 +0000 UTC m=+0.071889277 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:01:54 localhost systemd[1]: tmp-crun.Uee9Wx.mount: Deactivated successfully. Nov 23 05:01:54 localhost podman[316488]: 2025-11-23 10:01:54.90185251 +0000 UTC m=+0.086726676 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 05:01:54 localhost podman[316488]: 2025-11-23 10:01:54.958446544 +0000 UTC m=+0.143320720 container exec_died 
7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 05:01:54 localhost podman[316489]: 2025-11-23 10:01:54.967098511 +0000 UTC m=+0.153054001 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:01:54 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 05:01:54 localhost podman[316489]: 2025-11-23 10:01:54.999478279 +0000 UTC m=+0.185433739 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 05:01:55 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:01:55 localhost podman[316490]: 2025-11-23 10:01:55.050544974 +0000 UTC m=+0.238421652 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:01:55 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 05:01:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s Nov 23 05:01:55 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:55.418 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:55 localhost nova_compute[280939]: 2025-11-23 10:01:55.616 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:55 localhost nova_compute[280939]: 2025-11-23 10:01:55.949 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:56 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:56.528 2 INFO neutron.agent.securitygroups_rpc [None req-dca3126a-73f1-4b2f-a026-193f4196c9b1 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:56 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:56.961 2 INFO neutron.agent.securitygroups_rpc [None req-7d98cf13-9412-4d6e-887c-62597fa6d091 2a1728c536894f859fec3b140f01d4cc fb9791d358174b77957d83c427c41282 - - default default] Security group member updated ['c43d5a51-ed13-44c5-81c1-fc0ae615d5d7']#033[00m Nov 23 05:01:57 localhost nova_compute[280939]: 2025-11-23 10:01:57.041 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:01:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Nov 23 05:01:57 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:57.417 2 INFO neutron.agent.securitygroups_rpc [None req-bcaf3ac0-9e11-402c-a86f-10a7b58b202d 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:01:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:01:57 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:57.851 2 INFO neutron.agent.securitygroups_rpc [None req-640caa93-9dd6-4d73-895a-6c48cb53e831 2a1728c536894f859fec3b140f01d4cc fb9791d358174b77957d83c427c41282 - - default default] Security group member updated ['c43d5a51-ed13-44c5-81c1-fc0ae615d5d7']#033[00m Nov 23 05:01:58 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:01:58.161 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:01:58 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:58.477 2 INFO neutron.agent.securitygroups_rpc [None req-00590570-c613-4100-977f-94a0fcdda7a2 2a1728c536894f859fec3b140f01d4cc fb9791d358174b77957d83c427c41282 - - default default] Security group member updated ['c43d5a51-ed13-44c5-81c1-fc0ae615d5d7']#033[00m Nov 23 05:01:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Nov 23 05:01:59 localhost neutron_sriov_agent[255165]: 2025-11-23 10:01:59.657 2 INFO 
neutron.agent.securitygroups_rpc [None req-a72c44cb-60bc-4c8a-a77d-8d03ce0529ac 2a1728c536894f859fec3b140f01d4cc fb9791d358174b77957d83c427c41282 - - default default] Security group member updated ['c43d5a51-ed13-44c5-81c1-fc0ae615d5d7']#033[00m Nov 23 05:02:00 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:00.282 2 INFO neutron.agent.securitygroups_rpc [None req-b6b7fea1-91f0-4eed-a7d1-b3a653a6eb31 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:00 localhost nova_compute[280939]: 2025-11-23 10:02:00.666 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:01 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:01.025 2 INFO neutron.agent.securitygroups_rpc [None req-632d2016-3c1b-4f91-9e80-c737b7a909f7 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail Nov 23 05:02:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 05:02:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 9282 writes, 37K keys, 9282 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 9282 writes, 2407 syncs, 3.86 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3789 writes, 13K keys, 3789 commit groups, 1.0 writes per commit group, ingest: 14.72 MB, 0.02 MB/s#012Interval WAL: 3789 writes, 1611 syncs, 2.35 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 05:02:02 localhost nova_compute[280939]: 2025-11-23 10:02:02.077 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:02 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:02.376 2 INFO neutron.agent.securitygroups_rpc [None req-daacf7a5-9b59-487b-876d-20ffffe4895d 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:02 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:02.808 2 INFO neutron.agent.securitygroups_rpc [None req-7b96091d-4d68-47d5-ad40-bc5e46f787b5 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail Nov 23 05:02:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:02:03 localhost systemd[1]: tmp-crun.hmZ6Qr.mount: Deactivated successfully. 
Nov 23 05:02:03 localhost podman[316550]: 2025-11-23 10:02:03.900627302 +0000 UTC m=+0.087770228 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter) Nov 23 05:02:03 localhost podman[316550]: 2025-11-23 10:02:03.91257233 +0000 UTC m=+0.099715206 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.) Nov 23 05:02:03 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 05:02:04 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:04.475 2 INFO neutron.agent.securitygroups_rpc [None req-69ba403f-ba50-44e1-b8d0-5abce2396fa5 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s Nov 23 05:02:05 localhost nova_compute[280939]: 2025-11-23 10:02:05.697 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 05:02:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 8401 writes, 34K keys, 8401 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 8401 writes, 2044 syncs, 4.11 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3043 writes, 10K keys, 3043 commit groups, 1.0 writes per commit group, ingest: 10.72 MB, 0.02 MB/s#012Interval WAL: 3043 writes, 1315 syncs, 2.31 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 05:02:06 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:06.459 2 INFO neutron.agent.securitygroups_rpc [None req-027c2ba7-b3e9-44a3-b67a-0b2bdc3ad43a 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:06 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:06.593 2 INFO neutron.agent.securitygroups_rpc [None req-027c2ba7-b3e9-44a3-b67a-0b2bdc3ad43a 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:06 localhost openstack_network_exporter[241732]: ERROR 10:02:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:02:06 localhost openstack_network_exporter[241732]: ERROR 10:02:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:02:06 localhost openstack_network_exporter[241732]: ERROR 10:02:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:02:06 localhost openstack_network_exporter[241732]: ERROR 10:02:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:02:06 localhost openstack_network_exporter[241732]: Nov 23 05:02:06 localhost openstack_network_exporter[241732]: ERROR 10:02:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:02:06 localhost openstack_network_exporter[241732]: Nov 23 05:02:07 localhost nova_compute[280939]: 2025-11-23 10:02:07.079 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:07 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:07.095 2 INFO neutron.agent.securitygroups_rpc [None req-ed1fae3b-c78a-4940-9647-1237b664bff1 
80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:07 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:07.195 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:07 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:07.250 2 INFO neutron.agent.securitygroups_rpc [None req-57d318cc-de08-4b5e-a1a2-97e0506818c3 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s Nov 23 05:02:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:02:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:02:07 localhost podman[316571]: 2025-11-23 10:02:07.904410865 +0000 UTC m=+0.081606207 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:02:07 localhost podman[316571]: 2025-11-23 10:02:07.914974101 +0000 UTC m=+0.092169493 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Nov 23 05:02:07 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:02:07 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:07.979 2 INFO neutron.agent.securitygroups_rpc [None req-fccb08e0-0b04-4e9e-9f46-388ae08a5315 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:08 localhost systemd[1]: tmp-crun.qlyil7.mount: Deactivated successfully. 
Nov 23 05:02:08 localhost podman[316572]: 2025-11-23 10:02:08.018656898 +0000 UTC m=+0.192353463 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:02:08 localhost podman[316572]: 2025-11-23 10:02:08.030369469 +0000 UTC m=+0.204066054 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:02:08 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 05:02:08 localhost nova_compute[280939]: 2025-11-23 10:02:08.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:02:08 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:08.223 2 INFO neutron.agent.securitygroups_rpc [None req-ccd00d64-5365-45b4-a98a-49c5095f8557 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e115 do_prune osdmap full prune enabled Nov 23 05:02:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e116 e116: 6 total, 6 up, 6 in Nov 23 05:02:08 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e116: 6 total, 6 up, 6 in Nov 23 05:02:09 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:09.132 2 INFO neutron.agent.securitygroups_rpc [None req-4a4cd2de-ce5b-469c-97bb-90c17373d140 268d02d4288b4ff3a9dab77419bcf96a 23ffb5a89d5d4d8a8900ea750309030f - - default default] Security group member updated ['8eb14703-b106-4f91-b864-8b16a806bee3']#033[00m Nov 23 05:02:09 localhost nova_compute[280939]: 2025-11-23 10:02:09.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:02:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 3.0 KiB/s wr, 49 op/s Nov 23 05:02:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:09.744 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:02:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:09.744 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:02:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:09.745 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:02:10 localhost nova_compute[280939]: 2025-11-23 10:02:10.730 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 3.0 KiB/s wr, 49 op/s Nov 23 05:02:11 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:11.775 2 INFO neutron.agent.securitygroups_rpc [None req-714530e0-ede7-43a6-b513-9acc4fe9127a 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated 
['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:12 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:12.353 2 INFO neutron.agent.securitygroups_rpc [None req-c5ba658c-cffc-41f4-af3e-933fb394a1b2 404dc86aa2d547d1b035a15b21cd31a6 3bcc515473444ea195be635c77c65d0f - - default default] Security group member updated ['742a2d87-87e7-4137-bfc5-9bbcf8237faa']#033[00m Nov 23 05:02:12 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:12.535 2 INFO neutron.agent.securitygroups_rpc [None req-c5ba658c-cffc-41f4-af3e-933fb394a1b2 404dc86aa2d547d1b035a15b21cd31a6 3bcc515473444ea195be635c77c65d0f - - default default] Security group member updated ['742a2d87-87e7-4137-bfc5-9bbcf8237faa']#033[00m Nov 23 05:02:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:12 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:12.691 2 INFO neutron.agent.securitygroups_rpc [None req-972382fb-338a-4217-8905-e67aa90103fe 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:12 localhost nova_compute[280939]: 2025-11-23 10:02:12.895 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:12 localhost nova_compute[280939]: 2025-11-23 10:02:12.897 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:02:12 localhost nova_compute[280939]: 2025-11-23 10:02:12.898 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:02:12 localhost nova_compute[280939]: 2025-11-23 10:02:12.898 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:02:12 localhost nova_compute[280939]: 2025-11-23 10:02:12.910 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:02:13 localhost nova_compute[280939]: 2025-11-23 10:02:13.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:02:13 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:13.201 2 INFO neutron.agent.securitygroups_rpc [None req-e36b687d-978f-4757-a141-a3de7329fae8 404dc86aa2d547d1b035a15b21cd31a6 3bcc515473444ea195be635c77c65d0f - - default default] Security group member updated ['742a2d87-87e7-4137-bfc5-9bbcf8237faa']#033[00m Nov 23 05:02:13 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:13.218 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 3.0 KiB/s wr, 49 op/s Nov 23 05:02:13 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:13.860 2 INFO neutron.agent.securitygroups_rpc [None req-072dc881-3667-4cc2-b265-d09f039c6880 404dc86aa2d547d1b035a15b21cd31a6 3bcc515473444ea195be635c77c65d0f - - default default] Security group member updated ['742a2d87-87e7-4137-bfc5-9bbcf8237faa']#033[00m Nov 23 05:02:13 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:13.887 2 INFO neutron.agent.securitygroups_rpc [None req-a4d2f133-97c5-4bde-ae86-9705c747c91a 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:14 localhost nova_compute[280939]: 2025-11-23 10:02:14.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:02:14 localhost nova_compute[280939]: 2025-11-23 10:02:14.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:02:14 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:14.639 262301 INFO neutron.agent.linux.ip_lib [None req-89084a84-5a50-440e-a77a-c2a0ccc6c97e - - - - - -] Device tapb4d3f72a-29 cannot be used as it has no MAC address#033[00m Nov 23 05:02:14 localhost nova_compute[280939]: 2025-11-23 10:02:14.700 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:14 localhost kernel: device tapb4d3f72a-29 entered promiscuous mode Nov 23 05:02:14 localhost NetworkManager[5966]: [1763892134.7091] manager: (tapb4d3f72a-29): new Generic device (/org/freedesktop/NetworkManager/Devices/35) Nov 23 05:02:14 localhost nova_compute[280939]: 2025-11-23 10:02:14.708 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:14 localhost ovn_controller[153771]: 2025-11-23T10:02:14Z|00209|binding|INFO|Claiming lport b4d3f72a-2961-4959-a1e1-c969c9990b01 for this chassis. 
Nov 23 05:02:14 localhost ovn_controller[153771]: 2025-11-23T10:02:14Z|00210|binding|INFO|b4d3f72a-2961-4959-a1e1-c969c9990b01: Claiming unknown Nov 23 05:02:14 localhost systemd-udevd[316622]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:02:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:14.723 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe02:70ba/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-125f400a-40a6-42ec-a07c-6307386e5464', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-125f400a-40a6-42ec-a07c-6307386e5464', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23ffb5a89d5d4d8a8900ea750309030f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3771cc7-0351-4ebc-b98f-a83e75774323, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b4d3f72a-2961-4959-a1e1-c969c9990b01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:14.725 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b4d3f72a-2961-4959-a1e1-c969c9990b01 in datapath 125f400a-40a6-42ec-a07c-6307386e5464 bound to our chassis#033[00m Nov 23 05:02:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:14.727 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port aa3e6068-2f89-485e-8565-5ef3ce85316f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:02:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:14.728 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 125f400a-40a6-42ec-a07c-6307386e5464, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:02:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:14.730 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[62cdd188-f4af-44e4-b017-78cb23d5a47a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:14 localhost journal[229336]: ethtool ioctl error on tapb4d3f72a-29: No such device Nov 23 05:02:14 localhost ovn_controller[153771]: 2025-11-23T10:02:14Z|00211|binding|INFO|Setting lport b4d3f72a-2961-4959-a1e1-c969c9990b01 ovn-installed in OVS Nov 23 05:02:14 localhost ovn_controller[153771]: 2025-11-23T10:02:14Z|00212|binding|INFO|Setting lport b4d3f72a-2961-4959-a1e1-c969c9990b01 up in Southbound Nov 23 05:02:14 localhost nova_compute[280939]: 2025-11-23 10:02:14.745 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:14 
localhost journal[229336]: ethtool ioctl error on tapb4d3f72a-29: No such device Nov 23 05:02:14 localhost journal[229336]: ethtool ioctl error on tapb4d3f72a-29: No such device Nov 23 05:02:14 localhost journal[229336]: ethtool ioctl error on tapb4d3f72a-29: No such device Nov 23 05:02:14 localhost journal[229336]: ethtool ioctl error on tapb4d3f72a-29: No such device Nov 23 05:02:14 localhost journal[229336]: ethtool ioctl error on tapb4d3f72a-29: No such device Nov 23 05:02:14 localhost journal[229336]: ethtool ioctl error on tapb4d3f72a-29: No such device Nov 23 05:02:14 localhost journal[229336]: ethtool ioctl error on tapb4d3f72a-29: No such device Nov 23 05:02:14 localhost nova_compute[280939]: 2025-11-23 10:02:14.784 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:14 localhost nova_compute[280939]: 2025-11-23 10:02:14.812 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:14 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:14.818 2 INFO neutron.agent.securitygroups_rpc [None req-5062901d-b55d-4422-a232-c0dc20b0538f 268d02d4288b4ff3a9dab77419bcf96a 23ffb5a89d5d4d8a8900ea750309030f - - default default] Security group member updated ['8eb14703-b106-4f91-b864-8b16a806bee3']#033[00m Nov 23 05:02:14 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:14.837 2 INFO neutron.agent.securitygroups_rpc [None req-e9c4a6cc-b7de-4f6d-9d17-231261e88eb0 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:15 localhost nova_compute[280939]: 2025-11-23 10:02:15.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:02:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 185 MiB data, 860 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 4.0 MiB/s wr, 47 op/s Nov 23 05:02:15 localhost podman[316694]: Nov 23 05:02:15 localhost podman[316694]: 2025-11-23 10:02:15.6391691 +0000 UTC m=+0.084939130 container create 1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-125f400a-40a6-42ec-a07c-6307386e5464, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 05:02:15 localhost systemd[1]: Started libpod-conmon-1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293.scope. Nov 23 05:02:15 localhost podman[316694]: 2025-11-23 10:02:15.598109584 +0000 UTC m=+0.043879634 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:02:15 localhost systemd[1]: Started libcrun container. 
Nov 23 05:02:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4f605b6b2771525fc3a505f2be6643f8b288a152a78127d97f2fd8c9c774b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:02:15 localhost podman[316694]: 2025-11-23 10:02:15.720117706 +0000 UTC m=+0.165887736 container init 1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-125f400a-40a6-42ec-a07c-6307386e5464, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:15 localhost podman[316694]: 2025-11-23 10:02:15.729377021 +0000 UTC m=+0.175147051 container start 1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-125f400a-40a6-42ec-a07c-6307386e5464, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:02:15 localhost dnsmasq[316713]: started, version 2.85 cachesize 150 Nov 23 05:02:15 localhost dnsmasq[316713]: DNS service limited to local subnets Nov 23 05:02:15 localhost dnsmasq[316713]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:02:15 localhost dnsmasq[316713]: warning: no upstream servers configured Nov 23 05:02:15 localhost dnsmasq-dhcp[316713]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:02:15 localhost nova_compute[280939]: 2025-11-23 10:02:15.777 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:15 localhost dnsmasq[316713]: read /var/lib/neutron/dhcp/125f400a-40a6-42ec-a07c-6307386e5464/addn_hosts - 0 addresses Nov 23 05:02:15 localhost dnsmasq-dhcp[316713]: read /var/lib/neutron/dhcp/125f400a-40a6-42ec-a07c-6307386e5464/host Nov 23 05:02:15 localhost dnsmasq-dhcp[316713]: read /var/lib/neutron/dhcp/125f400a-40a6-42ec-a07c-6307386e5464/opts Nov 23 05:02:15 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:15.835 262301 INFO neutron.agent.dhcp.agent [None req-89084a84-5a50-440e-a77a-c2a0ccc6c97e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:02:14Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=70bb5b5e-f45b-4d5d-9545-25e161cb0176, ip_allocation=immediate, mac_address=fa:16:3e:d8:6a:c9, name=tempest-NetworksIpV6TestAttrs-1378991826, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:02:09Z, description=, dns_domain=, id=125f400a-40a6-42ec-a07c-6307386e5464, ipv4_address_scope=None, 
ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksIpV6TestAttrs-test-network-893411618, port_security_enabled=True, project_id=23ffb5a89d5d4d8a8900ea750309030f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=49993, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1770, status=ACTIVE, subnets=['7cea6df3-3e0a-45e0-9cd0-48b2e1417ca7'], tags=[], tenant_id=23ffb5a89d5d4d8a8900ea750309030f, updated_at=2025-11-23T10:02:12Z, vlan_transparent=None, network_id=125f400a-40a6-42ec-a07c-6307386e5464, port_security_enabled=True, project_id=23ffb5a89d5d4d8a8900ea750309030f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['8eb14703-b106-4f91-b864-8b16a806bee3'], standard_attr_id=1814, status=DOWN, tags=[], tenant_id=23ffb5a89d5d4d8a8900ea750309030f, updated_at=2025-11-23T10:02:14Z on network 125f400a-40a6-42ec-a07c-6307386e5464#033[00m Nov 23 05:02:15 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:15.931 262301 INFO neutron.agent.dhcp.agent [None req-e7363a6b-8c57-40ad-bf5c-be60928e1331 - - - - - -] DHCP configuration for ports {'167ee09a-bd11-490f-b820-d60ef250e0de'} is completed#033[00m Nov 23 05:02:16 localhost podman[316732]: 2025-11-23 10:02:16.023420288 +0000 UTC m=+0.055332507 container kill 1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-125f400a-40a6-42ec-a07c-6307386e5464, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:02:16 localhost dnsmasq[316713]: read /var/lib/neutron/dhcp/125f400a-40a6-42ec-a07c-6307386e5464/addn_hosts - 1 addresses Nov 23 05:02:16 localhost dnsmasq-dhcp[316713]: read /var/lib/neutron/dhcp/125f400a-40a6-42ec-a07c-6307386e5464/host Nov 23 05:02:16 localhost dnsmasq-dhcp[316713]: read /var/lib/neutron/dhcp/125f400a-40a6-42ec-a07c-6307386e5464/opts Nov 23 05:02:16 localhost nova_compute[280939]: 2025-11-23 10:02:16.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:02:16 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:16.324 262301 INFO neutron.agent.dhcp.agent [None req-85a2c4f5-db36-44ed-a475-b127ea0e0755 - - - - - -] DHCP configuration for ports {'70bb5b5e-f45b-4d5d-9545-25e161cb0176'} is completed#033[00m Nov 23 05:02:16 localhost podman[316768]: 2025-11-23 10:02:16.436067062 +0000 UTC m=+0.061442465 container kill 1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-125f400a-40a6-42ec-a07c-6307386e5464, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 
23 05:02:16 localhost dnsmasq[316713]: exiting on receipt of SIGTERM Nov 23 05:02:16 localhost systemd[1]: libpod-1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293.scope: Deactivated successfully. Nov 23 05:02:16 localhost podman[316780]: 2025-11-23 10:02:16.515051458 +0000 UTC m=+0.063372716 container died 1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-125f400a-40a6-42ec-a07c-6307386e5464, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:16 localhost podman[316780]: 2025-11-23 10:02:16.54951815 +0000 UTC m=+0.097839368 container cleanup 1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-125f400a-40a6-42ec-a07c-6307386e5464, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:02:16 localhost systemd[1]: libpod-conmon-1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293.scope: Deactivated successfully. Nov 23 05:02:16 localhost podman[316782]: 2025-11-23 10:02:16.59266434 +0000 UTC m=+0.134558200 container remove 1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-125f400a-40a6-42ec-a07c-6307386e5464, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:02:16 localhost nova_compute[280939]: 2025-11-23 10:02:16.607 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:16 localhost kernel: device tapb4d3f72a-29 left promiscuous mode Nov 23 05:02:16 localhost ovn_controller[153771]: 2025-11-23T10:02:16Z|00213|binding|INFO|Releasing lport b4d3f72a-2961-4959-a1e1-c969c9990b01 from this chassis (sb_readonly=0) Nov 23 05:02:16 localhost ovn_controller[153771]: 2025-11-23T10:02:16Z|00214|binding|INFO|Setting lport b4d3f72a-2961-4959-a1e1-c969c9990b01 down in Southbound Nov 23 05:02:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:16.617 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe02:70ba/64', 
'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-125f400a-40a6-42ec-a07c-6307386e5464', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-125f400a-40a6-42ec-a07c-6307386e5464', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23ffb5a89d5d4d8a8900ea750309030f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3771cc7-0351-4ebc-b98f-a83e75774323, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b4d3f72a-2961-4959-a1e1-c969c9990b01) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:16.619 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b4d3f72a-2961-4959-a1e1-c969c9990b01 in datapath 125f400a-40a6-42ec-a07c-6307386e5464 unbound from our chassis#033[00m Nov 23 05:02:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:16.621 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 125f400a-40a6-42ec-a07c-6307386e5464, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:02:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:16.622 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[10b1f97d-093f-4463-b57e-3ba76fca0a75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:16 localhost nova_compute[280939]: 2025-11-23 10:02:16.636 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:16 localhost nova_compute[280939]: 2025-11-23 10:02:16.638 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:16 localhost systemd[1]: var-lib-containers-storage-overlay-1e4f605b6b2771525fc3a505f2be6643f8b288a152a78127d97f2fd8c9c774b6-merged.mount: Deactivated successfully. Nov 23 05:02:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c15c28c1f7e8c5b9eb146ed8b0c036434d74218f60744c0981e1235fc431293-userdata-shm.mount: Deactivated successfully. Nov 23 05:02:16 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:16.856 262301 INFO neutron.agent.dhcp.agent [None req-ae74bdc1-0ec0-47e5-a9cb-95e704604dc8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:16 localhost systemd[1]: run-netns-qdhcp\x2d125f400a\x2d40a6\x2d42ec\x2da07c\x2d6307386e5464.mount: Deactivated successfully. 
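The Port_Binding rows that ovn_metadata_agent matches above live in the OVN Southbound DB and can be inspected directly when reproducing this unbind sequence. A minimal sketch, assuming ovn-sbctl is on this host's PATH and using only its generic find/--format=json table commands (nothing below is taken from this log beyond the column and lport names):

    import json
    import subprocess

    def port_binding(logical_port: str) -> dict:
        """Return the raw headings/data table for one SB Port_Binding row."""
        out = subprocess.run(
            ["ovn-sbctl", "--format=json", "find", "Port_Binding",
             f"logical_port={logical_port}"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    # lport released in the entries above
    print(port_binding("b4d3f72a-2961-4959-a1e1-c969c9990b01"))

An empty data list would mean the row is already gone, which matches the PortBindingDeletedEvent handling seen later in this log.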
Nov 23 05:02:17 localhost podman[239764]: time="2025-11-23T10:02:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:02:17 localhost podman[239764]: @ - - [23/Nov/2025:10:02:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:02:17 localhost podman[239764]: @ - - [23/Nov/2025:10:02:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18741 "" "Go-http-client/1.1" Nov 23 05:02:17 localhost nova_compute[280939]: 2025-11-23 10:02:17.172 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e116 do_prune osdmap full prune enabled Nov 23 05:02:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e117 e117: 6 total, 6 up, 6 in Nov 23 05:02:17 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e117: 6 total, 6 up, 6 in Nov 23 05:02:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 185 MiB data, 860 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 4.6 MiB/s wr, 53 op/s Nov 23 05:02:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:17 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:17.928 262301 INFO neutron.agent.linux.ip_lib [None req-7720e503-7409-4499-8e79-92ccfefcce22 - - - - - -] Device tap0d79a49e-fa cannot be used as it has no MAC address#033[00m Nov 23 05:02:17 localhost nova_compute[280939]: 2025-11-23 10:02:17.948 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:17 localhost kernel: device tap0d79a49e-fa entered promiscuous mode Nov 23 05:02:17 localhost NetworkManager[5966]: [1763892137.9556] manager: (tap0d79a49e-fa): new Generic device (/org/freedesktop/NetworkManager/Devices/36) Nov 23 05:02:17 localhost nova_compute[280939]: 2025-11-23 10:02:17.955 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:17 localhost ovn_controller[153771]: 2025-11-23T10:02:17Z|00215|binding|INFO|Claiming lport 0d79a49e-fad1-41c8-97c6-7f202767d2cf for this chassis. Nov 23 05:02:17 localhost ovn_controller[153771]: 2025-11-23T10:02:17Z|00216|binding|INFO|0d79a49e-fad1-41c8-97c6-7f202767d2cf: Claiming unknown Nov 23 05:02:17 localhost systemd-udevd[316823]: Network interface NamePolicy= disabled on kernel command line. 
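The GET /v4.9.3/libpod/containers/json request logged above is the libpod REST API being polled over podman's unix socket. A minimal sketch of issuing the same query from Python, assuming the default rootful socket path /run/podman/podman.sock and the usual Id/Names/State fields of the reply (both are assumptions; neither appears in this log):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that speaks to a unix-domain socket instead of TCP."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed default path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c["Names"], c["State"])

The same endpoint with all=true is what produces the large 200 responses (154499 bytes here) that the collector is paging through.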
Nov 23 05:02:17 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:17.967 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-0159c9f1-c819-48f9-801d-3fe76bf69225', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0159c9f1-c819-48f9-801d-3fe76bf69225', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e3b93ef61044aafb71b30163a32d7ac', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b518cf7-b802-4f56-b20b-eb0b85ba8790, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=0d79a49e-fad1-41c8-97c6-7f202767d2cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:17 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:17.969 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 0d79a49e-fad1-41c8-97c6-7f202767d2cf in datapath 0159c9f1-c819-48f9-801d-3fe76bf69225 bound to our chassis#033[00m Nov 23 05:02:17 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:17.971 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0159c9f1-c819-48f9-801d-3fe76bf69225 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:17 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:17.975 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8908164a-c780-41b8-8f2c-699ec3d0cdda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:17 localhost nova_compute[280939]: 2025-11-23 10:02:17.992 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:17 localhost ovn_controller[153771]: 2025-11-23T10:02:17Z|00217|binding|INFO|Setting lport 0d79a49e-fad1-41c8-97c6-7f202767d2cf ovn-installed in OVS Nov 23 05:02:17 localhost ovn_controller[153771]: 2025-11-23T10:02:17Z|00218|binding|INFO|Setting lport 0d79a49e-fad1-41c8-97c6-7f202767d2cf up in Southbound Nov 23 05:02:17 localhost nova_compute[280939]: 2025-11-23 10:02:17.995 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.031 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.057 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 
10:02:18.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.152 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.153 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.154 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.154 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.155 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:02:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:02:18 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3211337825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.626 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.763 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.764 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11547MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.765 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.766 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:02:18 localhost podman[316901]: Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.872 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:02:18 localhost nova_compute[280939]: 
2025-11-23 10:02:18.872 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:02:18 localhost podman[316901]: 2025-11-23 10:02:18.873624312 +0000 UTC m=+0.087093776 container create 7eed8222e137a1f12293eb127399388c8570ba0e72d536e932c801cd57f4f6de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0159c9f1-c819-48f9-801d-3fe76bf69225, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 05:02:18 localhost nova_compute[280939]: 2025-11-23 10:02:18.901 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:02:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e117 do_prune osdmap full prune enabled Nov 23 05:02:18 localhost systemd[1]: Started libpod-conmon-7eed8222e137a1f12293eb127399388c8570ba0e72d536e932c801cd57f4f6de.scope. Nov 23 05:02:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:02:18 localhost podman[316901]: 2025-11-23 10:02:18.830340597 +0000 UTC m=+0.043810081 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:02:18 localhost systemd[1]: Started libcrun container. 
Nov 23 05:02:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab3c57236425e881640f42da00878a8a38d525bf6f14914ff0753c6ce8eb0bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:02:18 localhost podman[316901]: 2025-11-23 10:02:18.956361983 +0000 UTC m=+0.169831427 container init 7eed8222e137a1f12293eb127399388c8570ba0e72d536e932c801cd57f4f6de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0159c9f1-c819-48f9-801d-3fe76bf69225, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 23 05:02:18 localhost podman[316901]: 2025-11-23 10:02:18.969312573 +0000 UTC m=+0.182782017 container start 7eed8222e137a1f12293eb127399388c8570ba0e72d536e932c801cd57f4f6de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0159c9f1-c819-48f9-801d-3fe76bf69225, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Nov 23 05:02:18 localhost dnsmasq[316930]: started, version 2.85 cachesize 150 Nov 23 05:02:18 localhost dnsmasq[316930]: DNS service limited to local subnets Nov 23 05:02:18 localhost dnsmasq[316930]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:02:18 localhost dnsmasq[316930]: warning: no upstream servers configured Nov 23 05:02:18 localhost dnsmasq-dhcp[316930]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:02:18 localhost dnsmasq[316930]: read /var/lib/neutron/dhcp/0159c9f1-c819-48f9-801d-3fe76bf69225/addn_hosts - 0 addresses Nov 23 05:02:18 localhost dnsmasq-dhcp[316930]: read /var/lib/neutron/dhcp/0159c9f1-c819-48f9-801d-3fe76bf69225/host Nov 23 05:02:18 localhost dnsmasq-dhcp[316930]: read /var/lib/neutron/dhcp/0159c9f1-c819-48f9-801d-3fe76bf69225/opts Nov 23 05:02:19 localhost podman[316919]: 2025-11-23 10:02:19.033401419 +0000 UTC m=+0.092235745 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:02:19 localhost podman[316919]: 2025-11-23 10:02:19.06716873 +0000 UTC m=+0.126003066 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118) Nov 23 05:02:19 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. 
Nov 23 05:02:19 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:19.120 262301 INFO neutron.agent.dhcp.agent [None req-30cf01be-9174-4529-a66c-b6bc09ab1f10 - - - - - -] DHCP configuration for ports {'7e6d8432-0cf7-468b-b6b5-5ba326341d36'} is completed#033[00m Nov 23 05:02:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e118 e118: 6 total, 6 up, 6 in Nov 23 05:02:19 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e118: 6 total, 6 up, 6 in Nov 23 05:02:19 localhost ovn_controller[153771]: 2025-11-23T10:02:19Z|00219|binding|INFO|Removing iface tap0d79a49e-fa ovn-installed in OVS Nov 23 05:02:19 localhost ovn_controller[153771]: 2025-11-23T10:02:19Z|00220|binding|INFO|Removing lport 0d79a49e-fad1-41c8-97c6-7f202767d2cf ovn-installed in OVS Nov 23 05:02:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:19.284 159415 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b618da76-ccbf-420c-8d83-1d675de16cc4 with type ""#033[00m Nov 23 05:02:19 localhost nova_compute[280939]: 2025-11-23 10:02:19.285 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:19.285 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-0159c9f1-c819-48f9-801d-3fe76bf69225', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0159c9f1-c819-48f9-801d-3fe76bf69225', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e3b93ef61044aafb71b30163a32d7ac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b518cf7-b802-4f56-b20b-eb0b85ba8790, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=0d79a49e-fad1-41c8-97c6-7f202767d2cf) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:19.288 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 0d79a49e-fad1-41c8-97c6-7f202767d2cf in datapath 0159c9f1-c819-48f9-801d-3fe76bf69225 unbound from our chassis#033[00m Nov 23 05:02:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:19.289 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0159c9f1-c819-48f9-801d-3fe76bf69225 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:19 localhost nova_compute[280939]: 2025-11-23 10:02:19.290 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:19 localhost 
ovn_metadata_agent[159410]: 2025-11-23 10:02:19.290 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[b476f9d7-9b95-453c-90e3-eb12a426c7a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:19 localhost dnsmasq[316930]: exiting on receipt of SIGTERM Nov 23 05:02:19 localhost podman[316974]: 2025-11-23 10:02:19.302168816 +0000 UTC m=+0.077790810 container kill 7eed8222e137a1f12293eb127399388c8570ba0e72d536e932c801cd57f4f6de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0159c9f1-c819-48f9-801d-3fe76bf69225, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:02:19 localhost systemd[1]: libpod-7eed8222e137a1f12293eb127399388c8570ba0e72d536e932c801cd57f4f6de.scope: Deactivated successfully. Nov 23 05:02:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:02:19 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1242920347' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:02:19 localhost podman[316988]: 2025-11-23 10:02:19.389786058 +0000 UTC m=+0.064609904 container died 7eed8222e137a1f12293eb127399388c8570ba0e72d536e932c801cd57f4f6de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0159c9f1-c819-48f9-801d-3fe76bf69225, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:02:19 localhost nova_compute[280939]: 2025-11-23 10:02:19.403 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:02:19 localhost nova_compute[280939]: 2025-11-23 10:02:19.412 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:02:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 8 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 163 active+clean; 257 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 14 MiB/s wr, 58 op/s Nov 23 05:02:19 localhost nova_compute[280939]: 2025-11-23 10:02:19.432 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 
'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:02:19 localhost nova_compute[280939]: 2025-11-23 10:02:19.435 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:02:19 localhost nova_compute[280939]: 2025-11-23 10:02:19.435 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:02:19 localhost podman[316988]: 2025-11-23 10:02:19.485370635 +0000 UTC m=+0.160194431 container remove 7eed8222e137a1f12293eb127399388c8570ba0e72d536e932c801cd57f4f6de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0159c9f1-c819-48f9-801d-3fe76bf69225, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:02:19 localhost systemd[1]: libpod-conmon-7eed8222e137a1f12293eb127399388c8570ba0e72d536e932c801cd57f4f6de.scope: Deactivated successfully. Nov 23 05:02:19 localhost nova_compute[280939]: 2025-11-23 10:02:19.495 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:19 localhost kernel: device tap0d79a49e-fa left promiscuous mode Nov 23 05:02:19 localhost nova_compute[280939]: 2025-11-23 10:02:19.563 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:19 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:19.575 262301 INFO neutron.agent.dhcp.agent [None req-002a3f98-0f97-42ae-b940-f1b1e25aec09 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:19 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:19.576 262301 INFO neutron.agent.dhcp.agent [None req-002a3f98-0f97-42ae-b940-f1b1e25aec09 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:19 localhost systemd[1]: tmp-crun.gAcs5O.mount: Deactivated successfully. Nov 23 05:02:19 localhost systemd[1]: var-lib-containers-storage-overlay-7ab3c57236425e881640f42da00878a8a38d525bf6f14914ff0753c6ce8eb0bf-merged.mount: Deactivated successfully. Nov 23 05:02:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7eed8222e137a1f12293eb127399388c8570ba0e72d536e932c801cd57f4f6de-userdata-shm.mount: Deactivated successfully. Nov 23 05:02:19 localhost systemd[1]: run-netns-qdhcp\x2d0159c9f1\x2dc819\x2d48f9\x2d801d\x2d3fe76bf69225.mount: Deactivated successfully. 
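The resource audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" and derives the reported free_disk from the reply. A minimal sketch of running the same probe and reading the cluster totals, assuming the stats.total_bytes / stats.total_avail_bytes fields of typical ceph df JSON output (the reply body itself is not captured in this log):

    import json
    import subprocess

    # Same command line as the nova_compute entries above.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]  # field names are assumptions, not from this log
    gib = 1024 ** 3
    print("total  %.1f GiB" % (stats["total_bytes"] / gib))
    print("avail  %.1f GiB" % (stats["total_avail_bytes"] / gib))

On this node the audit lands on roughly 41 GiB usable, which is the phys_disk/DISK_GB total reported to placement in the inventory above.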
Nov 23 05:02:19 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:19.907 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e118 do_prune osdmap full prune enabled Nov 23 05:02:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e119 e119: 6 total, 6 up, 6 in Nov 23 05:02:20 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e119: 6 total, 6 up, 6 in Nov 23 05:02:20 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:20.424 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8:0:1:f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:20 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:20.426 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m Nov 23 05:02:20 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:20.429 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:02:20 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:20.430 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[b96cc969-c5fb-4a6c-bcfc-9cad0b65a55e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:20 localhost nova_compute[280939]: 2025-11-23 10:02:20.437 280943 DEBUG oslo_service.periodic_task 
[None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:02:20 localhost nova_compute[280939]: 2025-11-23 10:02:20.500 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:20 localhost nova_compute[280939]: 2025-11-23 10:02:20.815 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:20 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:20.962 2 INFO neutron.agent.securitygroups_rpc [None req-da0f81f0-068a-49d7-b6f1-50fa28f5c3fa 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 8 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 163 active+clean; 257 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 12 MiB/s wr, 50 op/s Nov 23 05:02:21 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:21.471 2 INFO neutron.agent.securitygroups_rpc [None req-2c014fe0-b7cd-43b0-aa6e-452db82dfd05 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:21 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:21.891 2 INFO neutron.agent.securitygroups_rpc [None req-7716ba44-8c0f-4db3-b436-de264dd9940d 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:22 localhost nova_compute[280939]: 2025-11-23 10:02:22.205 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:22 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:22.258 2 INFO neutron.agent.securitygroups_rpc [None req-2377c616-101f-418f-a8e9-4c617ed1b658 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:22 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:22.847 2 INFO neutron.agent.securitygroups_rpc [None req-5aab4fba-63d5-4959-8f5e-411b7878d60b 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:22 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:22.947 2 INFO neutron.agent.securitygroups_rpc [None req-7111b2ef-5dc8-48f5-9c8c-9b4c6e4004a1 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:02:23 Nov 23 05:02:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:02:23 localhost ceph-mgr[286671]: [balancer INFO root] 
do_upmap Nov 23 05:02:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['images', 'manila_metadata', '.mgr', 'volumes', 'manila_data', 'backups', 'vms'] Nov 23 05:02:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:02:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:02:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:02:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:02:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:02:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:02:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:02:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 8 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 163 active+clean; 257 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 12 MiB/s wr, 50 op/s Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32) Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0121178810720268 of space, bias 1.0, pg target 2.4195369207146844 quantized to 32 (current 32) Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:02:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019367671691792295 quantized to 16 (current 16) Nov 23 05:02:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:02:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= 
Nov 23 05:02:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:02:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:02:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:02:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:02:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:02:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:02:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:02:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:02:23 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:23.602 2 INFO neutron.agent.securitygroups_rpc [None req-192c090e-5c5a-4cbc-acb5-d1edd0c7e4bb 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e119 do_prune osdmap full prune enabled Nov 23 05:02:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e120 e120: 6 total, 6 up, 6 in Nov 23 05:02:25 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e120: 6 total, 6 up, 6 in Nov 23 05:02:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 12 MiB/s wr, 111 op/s Nov 23 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:02:25 localhost nova_compute[280939]: 2025-11-23 10:02:25.846 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:25 localhost systemd[1]: tmp-crun.Cu05i4.mount: Deactivated successfully. 
Nov 23 05:02:25 localhost podman[317016]: 2025-11-23 10:02:25.95984956 +0000 UTC m=+0.135556261 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:02:25 localhost podman[317015]: 2025-11-23 10:02:25.914290595 +0000 UTC m=+0.093637909 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:25 localhost podman[317017]: 2025-11-23 10:02:25.99718051 +0000 UTC m=+0.140640657 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:02:26 localhost podman[317016]: 2025-11-23 10:02:26.023879764 +0000 UTC m=+0.199586475 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:02:26 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 05:02:26 localhost podman[317017]: 2025-11-23 10:02:26.037471613 +0000 UTC m=+0.180931800 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:02:26 localhost podman[317015]: 2025-11-23 10:02:26.049020439 +0000 UTC m=+0.228367733 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 05:02:26 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:02:26 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
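The node_exporter container in the healthchecks above publishes on host port 9100 with the systemd collector enabled and filtered to the edpm/ovs/virt units. A minimal sketch of checking that endpoint, assuming the standard node_systemd_unit_state metric name (an assumption from node_exporter's systemd collector, not taken from this log):

    import urllib.request

    # Port 9100 comes from the 'ports': ['9100:9100'] entry in the container config above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("node_systemd_unit_state"):
                print(line)

Seeing the edpm_*, ovs*, and virt* units reported here is a quick cross-check that the --collector.systemd.unit-include pattern in the config is matching what edpm_ansible deployed.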
Nov 23 05:02:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e120 do_prune osdmap full prune enabled Nov 23 05:02:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e121 e121: 6 total, 6 up, 6 in Nov 23 05:02:26 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e121: 6 total, 6 up, 6 in Nov 23 05:02:26 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:26.339 2 INFO neutron.agent.securitygroups_rpc [None req-d166c481-1e29-4957-bae4-8d46f816d4e6 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:26 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:26.800 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:26 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:26.931 262301 INFO neutron.agent.linux.ip_lib [None req-0193c715-b2a5-4f7e-ac3c-2276931024ea - - - - - -] Device tapf5d9881e-44 cannot be used as it has no MAC address#033[00m Nov 23 05:02:26 localhost nova_compute[280939]: 2025-11-23 10:02:26.988 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:26 localhost kernel: device tapf5d9881e-44 entered promiscuous mode Nov 23 05:02:26 localhost NetworkManager[5966]: [1763892146.9953] manager: (tapf5d9881e-44): new Generic device (/org/freedesktop/NetworkManager/Devices/37) Nov 23 05:02:26 localhost ovn_controller[153771]: 2025-11-23T10:02:26Z|00221|binding|INFO|Claiming lport f5d9881e-4498-4681-ac41-f217b3074489 for this chassis. Nov 23 05:02:26 localhost ovn_controller[153771]: 2025-11-23T10:02:26Z|00222|binding|INFO|f5d9881e-4498-4681-ac41-f217b3074489: Claiming unknown Nov 23 05:02:26 localhost nova_compute[280939]: 2025-11-23 10:02:26.996 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:27 localhost systemd-udevd[317095]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 05:02:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:27.009 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-9882d455-c8d8-4604-a8f8-e824496e15cb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9882d455-c8d8-4604-a8f8-e824496e15cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23ffb5a89d5d4d8a8900ea750309030f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b5e1249-89d5-474b-962f-df9e04f9c4e7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f5d9881e-4498-4681-ac41-f217b3074489) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:27.010 159415 INFO neutron.agent.ovn.metadata.agent [-] Port f5d9881e-4498-4681-ac41-f217b3074489 in datapath 9882d455-c8d8-4604-a8f8-e824496e15cb bound to our chassis#033[00m Nov 23 05:02:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:27.014 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9882d455-c8d8-4604-a8f8-e824496e15cb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:27.015 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[c72feab0-e6fd-4e50-8e2f-6cacc5451003]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:27 localhost journal[229336]: ethtool ioctl error on tapf5d9881e-44: No such device Nov 23 05:02:27 localhost ovn_controller[153771]: 2025-11-23T10:02:27Z|00223|binding|INFO|Setting lport f5d9881e-4498-4681-ac41-f217b3074489 ovn-installed in OVS Nov 23 05:02:27 localhost ovn_controller[153771]: 2025-11-23T10:02:27Z|00224|binding|INFO|Setting lport f5d9881e-4498-4681-ac41-f217b3074489 up in Southbound Nov 23 05:02:27 localhost nova_compute[280939]: 2025-11-23 10:02:27.032 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:27 localhost journal[229336]: ethtool ioctl error on tapf5d9881e-44: No such device Nov 23 05:02:27 localhost journal[229336]: ethtool ioctl error on tapf5d9881e-44: No such device Nov 23 05:02:27 localhost journal[229336]: ethtool ioctl error on tapf5d9881e-44: No such device Nov 23 05:02:27 localhost journal[229336]: ethtool ioctl error on tapf5d9881e-44: No such device Nov 23 05:02:27 localhost journal[229336]: ethtool ioctl error on tapf5d9881e-44: No such device Nov 23 05:02:27 localhost journal[229336]: ethtool ioctl error on tapf5d9881e-44: No such device Nov 23 05:02:27 localhost 
journal[229336]: ethtool ioctl error on tapf5d9881e-44: No such device Nov 23 05:02:27 localhost nova_compute[280939]: 2025-11-23 10:02:27.066 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:27 localhost nova_compute[280939]: 2025-11-23 10:02:27.094 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e121 do_prune osdmap full prune enabled Nov 23 05:02:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e122 e122: 6 total, 6 up, 6 in Nov 23 05:02:27 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e122: 6 total, 6 up, 6 in Nov 23 05:02:27 localhost nova_compute[280939]: 2025-11-23 10:02:27.207 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 3.5 KiB/s wr, 63 op/s Nov 23 05:02:27 localhost ovn_controller[153771]: 2025-11-23T10:02:27Z|00225|binding|INFO|Removing iface tapf5d9881e-44 ovn-installed in OVS Nov 23 05:02:27 localhost ovn_controller[153771]: 2025-11-23T10:02:27Z|00226|binding|INFO|Removing lport f5d9881e-4498-4681-ac41-f217b3074489 ovn-installed in OVS Nov 23 05:02:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:27.616 159415 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 50d3a661-3f16-4b9f-bd7d-41d0a997e83c with type ""#033[00m Nov 23 05:02:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:27.617 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-9882d455-c8d8-4604-a8f8-e824496e15cb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9882d455-c8d8-4604-a8f8-e824496e15cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23ffb5a89d5d4d8a8900ea750309030f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b5e1249-89d5-474b-962f-df9e04f9c4e7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f5d9881e-4498-4681-ac41-f217b3074489) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:27 localhost nova_compute[280939]: 2025-11-23 10:02:27.617 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:27.620 159415 INFO neutron.agent.ovn.metadata.agent [-] Port f5d9881e-4498-4681-ac41-f217b3074489 in datapath 
9882d455-c8d8-4604-a8f8-e824496e15cb unbound from our chassis#033[00m Nov 23 05:02:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:27.621 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9882d455-c8d8-4604-a8f8-e824496e15cb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:27.622 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[22281966-adf9-4476-9f71-b22e3c5316b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:27 localhost nova_compute[280939]: 2025-11-23 10:02:27.624 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:27 localhost nova_compute[280939]: 2025-11-23 10:02:27.854 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:27 localhost podman[317166]: Nov 23 05:02:27 localhost podman[317166]: 2025-11-23 10:02:27.891809621 +0000 UTC m=+0.087636793 container create 54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9882d455-c8d8-4604-a8f8-e824496e15cb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118) Nov 23 05:02:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e122 do_prune osdmap full prune enabled Nov 23 05:02:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e123 e123: 6 total, 6 up, 6 in Nov 23 05:02:27 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e123: 6 total, 6 up, 6 in Nov 23 05:02:27 localhost systemd[1]: Started libpod-conmon-54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1.scope. Nov 23 05:02:27 localhost podman[317166]: 2025-11-23 10:02:27.850515857 +0000 UTC m=+0.046343099 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:02:27 localhost systemd[1]: Started libcrun container. 
Nov 23 05:02:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01faa6adf89f63c857a754a04162799f2f974601fef7667b327d33eba4889f3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:02:27 localhost podman[317166]: 2025-11-23 10:02:27.969473596 +0000 UTC m=+0.165300768 container init 54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9882d455-c8d8-4604-a8f8-e824496e15cb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:02:27 localhost podman[317166]: 2025-11-23 10:02:27.981238218 +0000 UTC m=+0.177065390 container start 54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9882d455-c8d8-4604-a8f8-e824496e15cb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 23 05:02:27 localhost dnsmasq[317185]: started, version 2.85 cachesize 150 Nov 23 05:02:27 localhost dnsmasq[317185]: DNS service limited to local subnets Nov 23 05:02:27 localhost dnsmasq[317185]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:02:27 localhost dnsmasq[317185]: warning: no upstream servers configured Nov 23 05:02:27 localhost dnsmasq-dhcp[317185]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:02:27 localhost dnsmasq[317185]: read /var/lib/neutron/dhcp/9882d455-c8d8-4604-a8f8-e824496e15cb/addn_hosts - 0 addresses Nov 23 05:02:27 localhost dnsmasq-dhcp[317185]: read /var/lib/neutron/dhcp/9882d455-c8d8-4604-a8f8-e824496e15cb/host Nov 23 05:02:27 localhost dnsmasq-dhcp[317185]: read /var/lib/neutron/dhcp/9882d455-c8d8-4604-a8f8-e824496e15cb/opts Nov 23 05:02:28 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:28.095 262301 INFO neutron.agent.dhcp.agent [None req-232339c4-9f68-4cea-a298-41b0ffcc8ce1 - - - - - -] DHCP configuration for ports {'b87dd21c-5213-407a-b9e9-c96002e8a009'} is completed#033[00m Nov 23 05:02:28 localhost dnsmasq[317185]: exiting on receipt of SIGTERM Nov 23 05:02:28 localhost podman[317202]: 2025-11-23 10:02:28.218257796 +0000 UTC m=+0.062364603 container kill 54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9882d455-c8d8-4604-a8f8-e824496e15cb, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 05:02:28 localhost systemd[1]: 
libpod-54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1.scope: Deactivated successfully. Nov 23 05:02:28 localhost podman[317216]: 2025-11-23 10:02:28.298126829 +0000 UTC m=+0.062428235 container died 54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9882d455-c8d8-4604-a8f8-e824496e15cb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Nov 23 05:02:28 localhost podman[317216]: 2025-11-23 10:02:28.328562678 +0000 UTC m=+0.092864014 container cleanup 54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9882d455-c8d8-4604-a8f8-e824496e15cb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118) Nov 23 05:02:28 localhost systemd[1]: libpod-conmon-54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1.scope: Deactivated successfully. Nov 23 05:02:28 localhost podman[317217]: 2025-11-23 10:02:28.366794276 +0000 UTC m=+0.126568923 container remove 54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9882d455-c8d8-4604-a8f8-e824496e15cb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 05:02:28 localhost nova_compute[280939]: 2025-11-23 10:02:28.408 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:28 localhost kernel: device tapf5d9881e-44 left promiscuous mode Nov 23 05:02:28 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:28.414 2 INFO neutron.agent.securitygroups_rpc [None req-accb9679-f798-499a-bac7-ef6b44f5ac25 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:28 localhost nova_compute[280939]: 2025-11-23 10:02:28.420 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:28 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:28.459 262301 INFO neutron.agent.dhcp.agent [None req-cd766df3-75f9-426d-80e5-953795867736 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:28 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:28.460 262301 INFO neutron.agent.dhcp.agent [None req-cd766df3-75f9-426d-80e5-953795867736 - - - - - -] Network not present, action: clean_devices, action_kwargs: 
{}#033[00m Nov 23 05:02:28 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:28.740 262301 INFO neutron.agent.linux.ip_lib [None req-0d9445aa-25e3-4660-9cb1-3533e417f9fb - - - - - -] Device tapa17b8c21-c1 cannot be used as it has no MAC address#033[00m Nov 23 05:02:28 localhost nova_compute[280939]: 2025-11-23 10:02:28.761 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:28 localhost kernel: device tapa17b8c21-c1 entered promiscuous mode Nov 23 05:02:28 localhost NetworkManager[5966]: [1763892148.7690] manager: (tapa17b8c21-c1): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Nov 23 05:02:28 localhost nova_compute[280939]: 2025-11-23 10:02:28.769 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:28 localhost ovn_controller[153771]: 2025-11-23T10:02:28Z|00227|binding|INFO|Claiming lport a17b8c21-c13b-4eef-a191-b4be750a21e3 for this chassis. Nov 23 05:02:28 localhost ovn_controller[153771]: 2025-11-23T10:02:28Z|00228|binding|INFO|a17b8c21-c13b-4eef-a191-b4be750a21e3: Claiming unknown Nov 23 05:02:28 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:28.789 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-6925877c-2cbf-490a-b8b9-1074ab06740e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6925877c-2cbf-490a-b8b9-1074ab06740e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e3b93ef61044aafb71b30163a32d7ac', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=016f4e10-cb0b-4ae6-aff5-9e5b35ebc1dd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a17b8c21-c13b-4eef-a191-b4be750a21e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:28 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:28.791 159415 INFO neutron.agent.ovn.metadata.agent [-] Port a17b8c21-c13b-4eef-a191-b4be750a21e3 in datapath 6925877c-2cbf-490a-b8b9-1074ab06740e bound to our chassis#033[00m Nov 23 05:02:28 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:28.793 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6925877c-2cbf-490a-b8b9-1074ab06740e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:28 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:28.794 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9ee552-0a9d-4052-b6a2-affd34e3b98d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 
23 05:02:28 localhost ovn_controller[153771]: 2025-11-23T10:02:28Z|00229|binding|INFO|Setting lport a17b8c21-c13b-4eef-a191-b4be750a21e3 ovn-installed in OVS Nov 23 05:02:28 localhost ovn_controller[153771]: 2025-11-23T10:02:28Z|00230|binding|INFO|Setting lport a17b8c21-c13b-4eef-a191-b4be750a21e3 up in Southbound Nov 23 05:02:28 localhost nova_compute[280939]: 2025-11-23 10:02:28.809 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:28 localhost nova_compute[280939]: 2025-11-23 10:02:28.844 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:28 localhost nova_compute[280939]: 2025-11-23 10:02:28.872 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:28 localhost systemd[1]: var-lib-containers-storage-overlay-01faa6adf89f63c857a754a04162799f2f974601fef7667b327d33eba4889f3a-merged.mount: Deactivated successfully. Nov 23 05:02:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54fb5daae23bd98e69eb8a6b2aa8d1e64d7d510b59ff0cee854e3f0282b7c1d1-userdata-shm.mount: Deactivated successfully. Nov 23 05:02:28 localhost systemd[1]: run-netns-qdhcp\x2d9882d455\x2dc8d8\x2d4604\x2da8f8\x2de824496e15cb.mount: Deactivated successfully. Nov 23 05:02:28 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:28.916 2 INFO neutron.agent.securitygroups_rpc [None req-cba9b202-7a98-4e4d-911c-f57572c47e81 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m Nov 23 05:02:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e123 do_prune osdmap full prune enabled Nov 23 05:02:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e124 e124: 6 total, 6 up, 6 in Nov 23 05:02:29 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e124: 6 total, 6 up, 6 in Nov 23 05:02:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 777 MiB used, 41 GiB / 42 GiB avail; 145 KiB/s rd, 25 KiB/s wr, 210 op/s Nov 23 05:02:29 localhost podman[317307]: Nov 23 05:02:29 localhost podman[317307]: 2025-11-23 10:02:29.732611931 +0000 UTC m=+0.083522457 container create 79bfe3b5060a31c485a7ada54febbe7549617df522812558e559c23ba53dbbc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 05:02:29 localhost systemd[1]: Started libpod-conmon-79bfe3b5060a31c485a7ada54febbe7549617df522812558e559c23ba53dbbc5.scope. Nov 23 05:02:29 localhost systemd[1]: Started libcrun container. 
Nov 23 05:02:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab3ae0434086f1295791881beb5be58752b2543985ac981cdd3fd0506ec7d86/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:02:29 localhost podman[317307]: 2025-11-23 10:02:29.69334996 +0000 UTC m=+0.044260496 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:02:29 localhost podman[317307]: 2025-11-23 10:02:29.799538675 +0000 UTC m=+0.150449201 container init 79bfe3b5060a31c485a7ada54febbe7549617df522812558e559c23ba53dbbc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0) Nov 23 05:02:29 localhost podman[317307]: 2025-11-23 10:02:29.807627784 +0000 UTC m=+0.158538310 container start 79bfe3b5060a31c485a7ada54febbe7549617df522812558e559c23ba53dbbc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:02:29 localhost dnsmasq[317326]: started, version 2.85 cachesize 150 Nov 23 05:02:29 localhost dnsmasq[317326]: DNS service limited to local subnets Nov 23 05:02:29 localhost dnsmasq[317326]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:02:29 localhost dnsmasq[317326]: warning: no upstream servers configured Nov 23 05:02:29 localhost dnsmasq-dhcp[317326]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:02:29 localhost dnsmasq[317326]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/addn_hosts - 0 addresses Nov 23 05:02:29 localhost dnsmasq-dhcp[317326]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/host Nov 23 05:02:29 localhost dnsmasq-dhcp[317326]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/opts Nov 23 05:02:29 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:29.887 2 INFO neutron.agent.securitygroups_rpc [None req-71451af8-2b84-4ee8-885e-297c3854d4d1 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:30 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:30.016 262301 INFO neutron.agent.dhcp.agent [None req-96436505-3cf4-4bea-9d7c-2ebf08575858 - - - - - -] DHCP configuration for ports {'630fe724-9fbc-471d-a157-4056b40f47d0'} is completed#033[00m Nov 23 05:02:30 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:30.190 262301 INFO neutron.agent.linux.ip_lib [None req-58cf90b9-f2a4-4bb1-b7b7-bfb29fb7426e - - - - - -] Device tap3aa46b01-4c cannot be used as it has no MAC address#033[00m Nov 23 05:02:30 localhost ceph-mon[293353]: 
mon.np0005532584@0(leader).osd e124 do_prune osdmap full prune enabled Nov 23 05:02:30 localhost nova_compute[280939]: 2025-11-23 10:02:30.265 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:30 localhost kernel: device tap3aa46b01-4c entered promiscuous mode Nov 23 05:02:30 localhost NetworkManager[5966]: [1763892150.2731] manager: (tap3aa46b01-4c): new Generic device (/org/freedesktop/NetworkManager/Devices/39) Nov 23 05:02:30 localhost ovn_controller[153771]: 2025-11-23T10:02:30Z|00231|binding|INFO|Claiming lport 3aa46b01-4cd5-4efd-8003-a35b39bdf99e for this chassis. Nov 23 05:02:30 localhost ovn_controller[153771]: 2025-11-23T10:02:30Z|00232|binding|INFO|3aa46b01-4cd5-4efd-8003-a35b39bdf99e: Claiming unknown Nov 23 05:02:30 localhost nova_compute[280939]: 2025-11-23 10:02:30.274 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e125 e125: 6 total, 6 up, 6 in Nov 23 05:02:30 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:30.285 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23ffb5a89d5d4d8a8900ea750309030f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3efde889-cb9c-4843-8fbd-a0bbd940654a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3aa46b01-4cd5-4efd-8003-a35b39bdf99e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:30 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:30.287 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 3aa46b01-4cd5-4efd-8003-a35b39bdf99e in datapath 11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25 bound to our chassis#033[00m Nov 23 05:02:30 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:30.288 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:30 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:30.289 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea16d38-18ed-4318-a0eb-3cd709e4b941]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:30 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e125: 6 total, 
6 up, 6 in Nov 23 05:02:30 localhost podman[317350]: 2025-11-23 10:02:30.305083622 +0000 UTC m=+0.119392261 container kill 79bfe3b5060a31c485a7ada54febbe7549617df522812558e559c23ba53dbbc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3) Nov 23 05:02:30 localhost dnsmasq[317326]: exiting on receipt of SIGTERM Nov 23 05:02:30 localhost systemd[1]: libpod-79bfe3b5060a31c485a7ada54febbe7549617df522812558e559c23ba53dbbc5.scope: Deactivated successfully. Nov 23 05:02:30 localhost ovn_controller[153771]: 2025-11-23T10:02:30Z|00233|binding|INFO|Setting lport 3aa46b01-4cd5-4efd-8003-a35b39bdf99e ovn-installed in OVS Nov 23 05:02:30 localhost ovn_controller[153771]: 2025-11-23T10:02:30Z|00234|binding|INFO|Setting lport 3aa46b01-4cd5-4efd-8003-a35b39bdf99e up in Southbound Nov 23 05:02:30 localhost nova_compute[280939]: 2025-11-23 10:02:30.322 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:30 localhost nova_compute[280939]: 2025-11-23 10:02:30.357 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:30 localhost podman[317368]: 2025-11-23 10:02:30.375108862 +0000 UTC m=+0.048745894 container died 79bfe3b5060a31c485a7ada54febbe7549617df522812558e559c23ba53dbbc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:02:30 localhost nova_compute[280939]: 2025-11-23 10:02:30.385 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:30 localhost podman[317368]: 2025-11-23 10:02:30.463804077 +0000 UTC m=+0.137441099 container remove 79bfe3b5060a31c485a7ada54febbe7549617df522812558e559c23ba53dbbc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:02:30 localhost systemd[1]: libpod-conmon-79bfe3b5060a31c485a7ada54febbe7549617df522812558e559c23ba53dbbc5.scope: Deactivated successfully. 
Nov 23 05:02:30 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:30.786 2 INFO neutron.agent.securitygroups_rpc [None req-1ba66ce5-b42f-4fad-9876-b27f14456f6a 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:30 localhost nova_compute[280939]: 2025-11-23 10:02:30.848 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:30 localhost systemd[1]: var-lib-containers-storage-overlay-eab3ae0434086f1295791881beb5be58752b2543985ac981cdd3fd0506ec7d86-merged.mount: Deactivated successfully. Nov 23 05:02:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79bfe3b5060a31c485a7ada54febbe7549617df522812558e559c23ba53dbbc5-userdata-shm.mount: Deactivated successfully. Nov 23 05:02:31 localhost ovn_controller[153771]: 2025-11-23T10:02:31Z|00235|binding|INFO|Removing iface tap3aa46b01-4c ovn-installed in OVS Nov 23 05:02:31 localhost ovn_controller[153771]: 2025-11-23T10:02:31Z|00236|binding|INFO|Removing lport 3aa46b01-4cd5-4efd-8003-a35b39bdf99e ovn-installed in OVS Nov 23 05:02:31 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:31.021 159415 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 80066ea9-fa44-4971-8710-2b41a577331a with type ""#033[00m Nov 23 05:02:31 localhost nova_compute[280939]: 2025-11-23 10:02:31.022 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:31 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:31.023 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23ffb5a89d5d4d8a8900ea750309030f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3efde889-cb9c-4843-8fbd-a0bbd940654a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3aa46b01-4cd5-4efd-8003-a35b39bdf99e) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:31 localhost nova_compute[280939]: 2025-11-23 10:02:31.025 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:31 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:31.026 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 3aa46b01-4cd5-4efd-8003-a35b39bdf99e in datapath 11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25 unbound from our chassis#033[00m Nov 23 05:02:31 localhost 
ovn_metadata_agent[159410]: 2025-11-23 10:02:31.028 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:31 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:31.029 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b982e2-7fb6-4dcc-b429-3164972de199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:31 localhost podman[317449]: Nov 23 05:02:31 localhost podman[317449]: 2025-11-23 10:02:31.211645736 +0000 UTC m=+0.090743000 container create 0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2) Nov 23 05:02:31 localhost nova_compute[280939]: 2025-11-23 10:02:31.225 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:31 localhost systemd[1]: Started libpod-conmon-0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f.scope. Nov 23 05:02:31 localhost podman[317449]: 2025-11-23 10:02:31.170250369 +0000 UTC m=+0.049347683 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:02:31 localhost systemd[1]: Started libcrun container. 
Nov 23 05:02:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840cca3e6ece2478bb097513a1c16f76dec1bc5f74ab382eec7098d7920c0231/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:02:31 localhost podman[317449]: 2025-11-23 10:02:31.286960558 +0000 UTC m=+0.166057822 container init 0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:31 localhost podman[317449]: 2025-11-23 10:02:31.296581325 +0000 UTC m=+0.175678589 container start 0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118) Nov 23 05:02:31 localhost dnsmasq[317476]: started, version 2.85 cachesize 150 Nov 23 05:02:31 localhost dnsmasq[317476]: DNS service limited to local subnets Nov 23 05:02:31 localhost dnsmasq[317476]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:02:31 localhost dnsmasq[317476]: warning: no upstream servers configured Nov 23 05:02:31 localhost dnsmasq-dhcp[317476]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:02:31 localhost dnsmasq[317476]: read /var/lib/neutron/dhcp/11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25/addn_hosts - 0 addresses Nov 23 05:02:31 localhost dnsmasq-dhcp[317476]: read /var/lib/neutron/dhcp/11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25/host Nov 23 05:02:31 localhost dnsmasq-dhcp[317476]: read /var/lib/neutron/dhcp/11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25/opts Nov 23 05:02:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 777 MiB used, 41 GiB / 42 GiB avail; 137 KiB/s rd, 24 KiB/s wr, 198 op/s Nov 23 05:02:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:31.447 262301 INFO neutron.agent.dhcp.agent [None req-0a551e43-32f2-4fe1-9192-ddc6bcc162fc - - - - - -] DHCP configuration for ports {'04d40266-028f-4b5e-81f0-fb65f4cfd472'} is completed#033[00m Nov 23 05:02:31 localhost dnsmasq[317476]: exiting on receipt of SIGTERM Nov 23 05:02:31 localhost podman[317504]: 2025-11-23 10:02:31.546852392 +0000 UTC m=+0.067664208 container kill 0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 23 05:02:31 localhost systemd[1]: libpod-0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f.scope: Deactivated successfully. Nov 23 05:02:31 localhost podman[317520]: 2025-11-23 10:02:31.623694791 +0000 UTC m=+0.063269722 container died 0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Nov 23 05:02:31 localhost podman[317520]: 2025-11-23 10:02:31.660783605 +0000 UTC m=+0.100358496 container cleanup 0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 05:02:31 localhost systemd[1]: libpod-conmon-0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f.scope: Deactivated successfully. Nov 23 05:02:31 localhost podman[317527]: 2025-11-23 10:02:31.713070327 +0000 UTC m=+0.138054017 container remove 0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-11f8dbed-b8b7-4fd5-bd1a-887cf2e37a25, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Nov 23 05:02:31 localhost nova_compute[280939]: 2025-11-23 10:02:31.766 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:31 localhost kernel: device tap3aa46b01-4c left promiscuous mode Nov 23 05:02:31 localhost nova_compute[280939]: 2025-11-23 10:02:31.778 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:31.806 262301 INFO neutron.agent.dhcp.agent [None req-6f4215a0-c0a0-433e-8b20-61e9ce8242c8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:31.806 262301 INFO neutron.agent.dhcp.agent [None req-6f4215a0-c0a0-433e-8b20-61e9ce8242c8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:31 localhost podman[317572]: Nov 23 05:02:31 localhost systemd[1]: 
var-lib-containers-storage-overlay-840cca3e6ece2478bb097513a1c16f76dec1bc5f74ab382eec7098d7920c0231-merged.mount: Deactivated successfully. Nov 23 05:02:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0aadf3915e45e10387f8c74ff1c08edaa21d4f8f7187f29b0411eb1e77170b8f-userdata-shm.mount: Deactivated successfully. Nov 23 05:02:31 localhost systemd[1]: run-netns-qdhcp\x2d11f8dbed\x2db8b7\x2d4fd5\x2dbd1a\x2d887cf2e37a25.mount: Deactivated successfully. Nov 23 05:02:31 localhost podman[317572]: 2025-11-23 10:02:31.898197615 +0000 UTC m=+0.073399804 container create d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:02:31 localhost systemd[1]: Started libpod-conmon-d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19.scope. Nov 23 05:02:31 localhost systemd[1]: Started libcrun container. Nov 23 05:02:31 localhost podman[317572]: 2025-11-23 10:02:31.86043263 +0000 UTC m=+0.035634809 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:02:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84a7c1c7f22200a8aa2c64dd992e71561d84417a84e05c3b568d0f907c4c1f14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:02:31 localhost podman[317572]: 2025-11-23 10:02:31.97291528 +0000 UTC m=+0.148117459 container init d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0) Nov 23 05:02:31 localhost podman[317572]: 2025-11-23 10:02:31.983154525 +0000 UTC m=+0.158356724 container start d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0) Nov 23 05:02:31 localhost dnsmasq[317590]: started, version 2.85 cachesize 150 Nov 23 05:02:31 localhost dnsmasq[317590]: DNS service limited to local subnets Nov 23 05:02:31 localhost dnsmasq[317590]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:02:31 localhost dnsmasq[317590]: warning: no upstream servers configured Nov 23 05:02:31 localhost 
dnsmasq-dhcp[317590]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Nov 23 05:02:31 localhost dnsmasq-dhcp[317590]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:02:31 localhost dnsmasq[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/addn_hosts - 2 addresses Nov 23 05:02:31 localhost dnsmasq-dhcp[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/host Nov 23 05:02:31 localhost dnsmasq-dhcp[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/opts Nov 23 05:02:32 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:32.003 2 INFO neutron.agent.securitygroups_rpc [None req-fe8c6fcc-6e02-46f0-a4be-4363defcaae3 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:32.058 262301 INFO neutron.agent.dhcp.agent [None req-93d7d046-fe15-4237-82f7-04ec9c9f294f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:02:29Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=7a92660c-1513-4f50-ac97-ff4e00c9490d, ip_allocation=immediate, mac_address=fa:16:3e:ec:b3:96, name=tempest-PortsIpV6TestJSON-795612500, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:02:26Z, description=, dns_domain=, id=6925877c-2cbf-490a-b8b9-1074ab06740e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-2099693082, port_security_enabled=True, project_id=1e3b93ef61044aafb71b30163a32d7ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55317, qos_policy_id=None, revision_number=3, router:external=False, shared=False, standard_attr_id=1891, status=ACTIVE, subnets=['192006bd-e849-4aa5-9909-ba9a6aff4050', '4489306e-03db-40ae-9be0-126fa822ff0f'], tags=[], tenant_id=1e3b93ef61044aafb71b30163a32d7ac, updated_at=2025-11-23T10:02:28Z, vlan_transparent=None, network_id=6925877c-2cbf-490a-b8b9-1074ab06740e, port_security_enabled=True, project_id=1e3b93ef61044aafb71b30163a32d7ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d7ead8f7-80d5-4103-ab91-19b87956485a'], standard_attr_id=1912, status=DOWN, tags=[], tenant_id=1e3b93ef61044aafb71b30163a32d7ac, updated_at=2025-11-23T10:02:29Z on network 6925877c-2cbf-490a-b8b9-1074ab06740e#033[00m Nov 23 05:02:32 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:32.109 2 INFO neutron.agent.securitygroups_rpc [None req-5d5ccae7-720c-44b0-bb02-9faf01da722a 268d02d4288b4ff3a9dab77419bcf96a 23ffb5a89d5d4d8a8900ea750309030f - - default default] Security group member updated ['8eb14703-b106-4f91-b864-8b16a806bee3']#033[00m Nov 23 05:02:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:32.133 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:32 localhost nova_compute[280939]: 2025-11-23 10:02:32.210 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:32 localhost dnsmasq[317590]: 
read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/addn_hosts - 2 addresses Nov 23 05:02:32 localhost dnsmasq-dhcp[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/host Nov 23 05:02:32 localhost dnsmasq-dhcp[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/opts Nov 23 05:02:32 localhost podman[317608]: 2025-11-23 10:02:32.272231908 +0000 UTC m=+0.072089913 container kill d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 05:02:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:32.281 262301 INFO neutron.agent.dhcp.agent [None req-8f2c8966-cf81-478d-b6ad-422be4460a92 - - - - - -] DHCP configuration for ports {'7a92660c-1513-4f50-ac97-ff4e00c9490d', 'a17b8c21-c13b-4eef-a191-b4be750a21e3', '630fe724-9fbc-471d-a157-4056b40f47d0'} is completed#033[00m Nov 23 05:02:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e125 do_prune osdmap full prune enabled Nov 23 05:02:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e126 e126: 6 total, 6 up, 6 in Nov 23 05:02:32 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e126: 6 total, 6 up, 6 in Nov 23 05:02:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:32.453 262301 INFO neutron.agent.dhcp.agent [None req-93d7d046-fe15-4237-82f7-04ec9c9f294f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:02:29Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7a92660c-1513-4f50-ac97-ff4e00c9490d, ip_allocation=immediate, mac_address=fa:16:3e:ec:b3:96, name=tempest-PortsIpV6TestJSON-795612500, network_id=6925877c-2cbf-490a-b8b9-1074ab06740e, port_security_enabled=True, project_id=1e3b93ef61044aafb71b30163a32d7ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['d7ead8f7-80d5-4103-ab91-19b87956485a'], standard_attr_id=1912, status=DOWN, tags=[], tenant_id=1e3b93ef61044aafb71b30163a32d7ac, updated_at=2025-11-23T10:02:30Z on network 6925877c-2cbf-490a-b8b9-1074ab06740e#033[00m Nov 23 05:02:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:32.538 262301 INFO neutron.agent.dhcp.agent [None req-eefc3fd5-5b54-4935-abd8-325539910756 - - - - - -] DHCP configuration for ports {'7a92660c-1513-4f50-ac97-ff4e00c9490d'} is completed#033[00m Nov 23 05:02:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:32.644 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:32 localhost dnsmasq[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/addn_hosts - 1 addresses Nov 23 05:02:32 localhost dnsmasq-dhcp[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/host Nov 23 05:02:32 localhost dnsmasq-dhcp[317590]: read 
/var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/opts Nov 23 05:02:32 localhost podman[317647]: 2025-11-23 10:02:32.650419989 +0000 UTC m=+0.065268193 container kill d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:32 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:32.692 2 INFO neutron.agent.securitygroups_rpc [None req-07562697-3af0-478b-ab22-4ac8b6836e01 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:32.801 262301 INFO neutron.agent.dhcp.agent [None req-93d7d046-fe15-4237-82f7-04ec9c9f294f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:02:29Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=7a92660c-1513-4f50-ac97-ff4e00c9490d, ip_allocation=immediate, mac_address=fa:16:3e:ec:b3:96, name=tempest-PortsIpV6TestJSON-795612500, network_id=6925877c-2cbf-490a-b8b9-1074ab06740e, port_security_enabled=True, project_id=1e3b93ef61044aafb71b30163a32d7ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['d7ead8f7-80d5-4103-ab91-19b87956485a'], standard_attr_id=1912, status=DOWN, tags=[], tenant_id=1e3b93ef61044aafb71b30163a32d7ac, updated_at=2025-11-23T10:02:31Z on network 6925877c-2cbf-490a-b8b9-1074ab06740e#033[00m Nov 23 05:02:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:32.907 262301 INFO neutron.agent.dhcp.agent [None req-3fd4e64d-1cc3-478b-ae52-04b774c3d310 - - - - - -] DHCP configuration for ports {'7a92660c-1513-4f50-ac97-ff4e00c9490d'} is completed#033[00m Nov 23 05:02:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:33 localhost dnsmasq[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/addn_hosts - 2 addresses Nov 23 05:02:33 localhost dnsmasq-dhcp[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/host Nov 23 05:02:33 localhost podman[317685]: 2025-11-23 10:02:33.006081866 +0000 UTC m=+0.069723130 container kill d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 05:02:33 localhost 
dnsmasq-dhcp[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/opts Nov 23 05:02:33 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:33.270 262301 INFO neutron.agent.dhcp.agent [None req-0666eb6f-47e1-42f7-b003-a09f2dd2a70f - - - - - -] DHCP configuration for ports {'7a92660c-1513-4f50-ac97-ff4e00c9490d'} is completed#033[00m Nov 23 05:02:33 localhost dnsmasq[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/addn_hosts - 0 addresses Nov 23 05:02:33 localhost dnsmasq-dhcp[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/host Nov 23 05:02:33 localhost dnsmasq-dhcp[317590]: read /var/lib/neutron/dhcp/6925877c-2cbf-490a-b8b9-1074ab06740e/opts Nov 23 05:02:33 localhost podman[317724]: 2025-11-23 10:02:33.305458657 +0000 UTC m=+0.061165437 container kill d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 23 05:02:33 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:33.359 2 INFO neutron.agent.securitygroups_rpc [None req-f8a04e39-0876-44f3-a37b-8d86b234eb13 268d02d4288b4ff3a9dab77419bcf96a 23ffb5a89d5d4d8a8900ea750309030f - - default default] Security group member updated ['8eb14703-b106-4f91-b864-8b16a806bee3']#033[00m Nov 23 05:02:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 145 MiB data, 777 MiB used, 41 GiB / 42 GiB avail; 106 KiB/s rd, 18 KiB/s wr, 152 op/s Nov 23 05:02:33 localhost dnsmasq[317590]: exiting on receipt of SIGTERM Nov 23 05:02:33 localhost podman[317762]: 2025-11-23 10:02:33.716597504 +0000 UTC m=+0.058692730 container kill d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:33 localhost systemd[1]: libpod-d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19.scope: Deactivated successfully. 
Nov 23 05:02:33 localhost podman[317774]: 2025-11-23 10:02:33.784750436 +0000 UTC m=+0.056656168 container died d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 05:02:33 localhost podman[317774]: 2025-11-23 10:02:33.81762904 +0000 UTC m=+0.089534722 container cleanup d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:33 localhost systemd[1]: libpod-conmon-d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19.scope: Deactivated successfully. Nov 23 05:02:33 localhost podman[317781]: 2025-11-23 10:02:33.865738163 +0000 UTC m=+0.125769979 container remove d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6925877c-2cbf-490a-b8b9-1074ab06740e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:02:33 localhost nova_compute[280939]: 2025-11-23 10:02:33.879 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:33 localhost ovn_controller[153771]: 2025-11-23T10:02:33Z|00237|binding|INFO|Releasing lport a17b8c21-c13b-4eef-a191-b4be750a21e3 from this chassis (sb_readonly=0) Nov 23 05:02:33 localhost kernel: device tapa17b8c21-c1 left promiscuous mode Nov 23 05:02:33 localhost ovn_controller[153771]: 2025-11-23T10:02:33Z|00238|binding|INFO|Setting lport a17b8c21-c13b-4eef-a191-b4be750a21e3 down in Southbound Nov 23 05:02:33 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:33.887 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-6925877c-2cbf-490a-b8b9-1074ab06740e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-6925877c-2cbf-490a-b8b9-1074ab06740e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e3b93ef61044aafb71b30163a32d7ac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=016f4e10-cb0b-4ae6-aff5-9e5b35ebc1dd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a17b8c21-c13b-4eef-a191-b4be750a21e3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:33 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:33.889 159415 INFO neutron.agent.ovn.metadata.agent [-] Port a17b8c21-c13b-4eef-a191-b4be750a21e3 in datapath 6925877c-2cbf-490a-b8b9-1074ab06740e unbound from our chassis#033[00m Nov 23 05:02:33 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:33.891 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6925877c-2cbf-490a-b8b9-1074ab06740e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:33 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:33.892 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[5593404e-50fb-458d-8b50-ae3291bc9428]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:33 localhost systemd[1]: var-lib-containers-storage-overlay-84a7c1c7f22200a8aa2c64dd992e71561d84417a84e05c3b568d0f907c4c1f14-merged.mount: Deactivated successfully. Nov 23 05:02:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3d6ace269d612b5c4224dfc9a03e9a5e6025d31b08a8b6c9980fc2c18b39f19-userdata-shm.mount: Deactivated successfully. Nov 23 05:02:33 localhost nova_compute[280939]: 2025-11-23 10:02:33.904 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e126 do_prune osdmap full prune enabled Nov 23 05:02:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e127 e127: 6 total, 6 up, 6 in Nov 23 05:02:33 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e127: 6 total, 6 up, 6 in Nov 23 05:02:34 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:34.097 262301 INFO neutron.agent.dhcp.agent [None req-f7a06701-9761-4c34-95a2-69e4d85cc3bd - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:34 localhost systemd[1]: run-netns-qdhcp\x2d6925877c\x2d2cbf\x2d490a\x2db8b9\x2d1074ab06740e.mount: Deactivated successfully. Nov 23 05:02:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
Nov 23 05:02:34 localhost podman[317806]: 2025-11-23 10:02:34.213693662 +0000 UTC m=+0.086337163 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible) Nov 23 05:02:34 localhost podman[317806]: 2025-11-23 10:02:34.22758923 +0000 UTC m=+0.100232711 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, maintainer=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, distribution-scope=public, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Nov 23 05:02:34 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 05:02:34 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:34.440 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:34 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:34.816 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e127 do_prune osdmap full prune enabled Nov 23 05:02:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e128 e128: 6 total, 6 up, 6 in Nov 23 05:02:34 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e128: 6 total, 6 up, 6 in Nov 23 05:02:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 781 MiB used, 41 GiB / 42 GiB avail; 130 KiB/s rd, 11 KiB/s wr, 176 op/s Nov 23 05:02:35 localhost nova_compute[280939]: 2025-11-23 10:02:35.461 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:35 localhost nova_compute[280939]: 2025-11-23 10:02:35.849 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:36 localhost openstack_network_exporter[241732]: ERROR 10:02:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:02:36 localhost openstack_network_exporter[241732]: ERROR 10:02:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:02:36 localhost openstack_network_exporter[241732]: ERROR 10:02:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:02:36 localhost openstack_network_exporter[241732]: ERROR 10:02:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:02:36 localhost openstack_network_exporter[241732]: Nov 23 05:02:36 localhost openstack_network_exporter[241732]: ERROR 10:02:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:02:36 localhost openstack_network_exporter[241732]: Nov 23 05:02:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e128 do_prune osdmap full prune enabled Nov 23 05:02:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e129 e129: 6 total, 6 up, 6 in Nov 23 05:02:37 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e129: 6 total, 6 up, 6 in Nov 23 05:02:37 localhost nova_compute[280939]: 2025-11-23 10:02:37.212 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 145 MiB data, 781 MiB used, 41 GiB / 42 GiB avail; 131 KiB/s rd, 11 KiB/s wr, 177 op/s Nov 23 05:02:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e129 do_prune osdmap full prune enabled Nov 23 05:02:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e130 e130: 6 total, 6 up, 6 in Nov 23 05:02:37 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e130: 6 
total, 6 up, 6 in Nov 23 05:02:38 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:38.090 2 INFO neutron.agent.securitygroups_rpc [None req-5e89ea53-7b2c-469d-be9d-51deeecc82b0 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:02:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:02:38 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:38.316 262301 INFO neutron.agent.linux.ip_lib [None req-8f8e00ae-5573-4eab-9254-a3b99c9e61ef - - - - - -] Device tap65652c56-ae cannot be used as it has no MAC address#033[00m Nov 23 05:02:38 localhost nova_compute[280939]: 2025-11-23 10:02:38.426 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:38 localhost kernel: device tap65652c56-ae entered promiscuous mode Nov 23 05:02:38 localhost ovn_controller[153771]: 2025-11-23T10:02:38Z|00239|binding|INFO|Claiming lport 65652c56-ae7d-4bd0-a05e-8c74f6b832bf for this chassis. Nov 23 05:02:38 localhost ovn_controller[153771]: 2025-11-23T10:02:38Z|00240|binding|INFO|65652c56-ae7d-4bd0-a05e-8c74f6b832bf: Claiming unknown Nov 23 05:02:38 localhost nova_compute[280939]: 2025-11-23 10:02:38.431 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:38 localhost nova_compute[280939]: 2025-11-23 10:02:38.435 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:38 localhost NetworkManager[5966]: [1763892158.4344] manager: (tap65652c56-ae): new Generic device (/org/freedesktop/NetworkManager/Devices/40) Nov 23 05:02:38 localhost systemd-udevd[317888]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 05:02:38 localhost podman[317829]: 2025-11-23 10:02:38.445154045 +0000 UTC m=+0.195327794 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 23 05:02:38 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:38.453 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-d085a34f-0a46-424d-a213-3ee963e0503c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d085a34f-0a46-424d-a213-3ee963e0503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e3b93ef61044aafb71b30163a32d7ac', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9a46072-a0ae-4011-8d0b-a6afd63ba28f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=65652c56-ae7d-4bd0-a05e-8c74f6b832bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:38 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:38.455 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 65652c56-ae7d-4bd0-a05e-8c74f6b832bf in datapath d085a34f-0a46-424d-a213-3ee963e0503c bound to our chassis#033[00m Nov 23 
05:02:38 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:38.457 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d085a34f-0a46-424d-a213-3ee963e0503c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:38 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:38.460 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[7e9b669e-1db0-4e0a-a9ff-4f38d862a732]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:38 localhost nova_compute[280939]: 2025-11-23 10:02:38.462 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:38 localhost ovn_controller[153771]: 2025-11-23T10:02:38Z|00241|binding|INFO|Setting lport 65652c56-ae7d-4bd0-a05e-8c74f6b832bf ovn-installed in OVS Nov 23 05:02:38 localhost ovn_controller[153771]: 2025-11-23T10:02:38Z|00242|binding|INFO|Setting lport 65652c56-ae7d-4bd0-a05e-8c74f6b832bf up in Southbound Nov 23 05:02:38 localhost nova_compute[280939]: 2025-11-23 10:02:38.464 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:38 localhost podman[317829]: 2025-11-23 10:02:38.478303688 +0000 UTC m=+0.228477427 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Nov 23 05:02:38 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
Nov 23 05:02:38 localhost podman[317830]: 2025-11-23 10:02:38.448074565 +0000 UTC m=+0.189454753 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 05:02:38 localhost nova_compute[280939]: 2025-11-23 10:02:38.503 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:38 localhost nova_compute[280939]: 2025-11-23 10:02:38.521 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:38 localhost podman[317830]: 2025-11-23 10:02:38.5302981 +0000 UTC m=+0.271678258 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 05:02:38 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 05:02:38 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:38.657 2 INFO neutron.agent.securitygroups_rpc [None req-64ec2b21-8187-4ac4-a9ab-6a7a717b7a77 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e130 do_prune osdmap full prune enabled Nov 23 05:02:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e131 e131: 6 total, 6 up, 6 in Nov 23 05:02:39 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e131: 6 total, 6 up, 6 in Nov 23 05:02:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:02:39 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:02:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:02:39 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:02:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:02:39 localhost podman[317999]: Nov 23 05:02:39 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:02:39 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 18bf8335-8125-4afd-a9f5-563456fd0d5b (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:02:39 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 18bf8335-8125-4afd-a9f5-563456fd0d5b (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:02:39 localhost ceph-mgr[286671]: [progress INFO root] Completed event 18bf8335-8125-4afd-a9f5-563456fd0d5b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:02:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:02:39 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:02:39 localhost podman[317999]: 2025-11-23 10:02:39.344190497 +0000 UTC m=+0.106379122 container create a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d085a34f-0a46-424d-a213-3ee963e0503c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 23 05:02:39 localhost systemd[1]: Started libpod-conmon-a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430.scope. 
Nov 23 05:02:39 localhost podman[317999]: 2025-11-23 10:02:39.293295087 +0000 UTC m=+0.055483732 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:02:39 localhost systemd[1]: Started libcrun container. Nov 23 05:02:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e7eeb8872da059f0d39e8932706b5b662bfc333212f2a1b68462a51815002e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:02:39 localhost podman[317999]: 2025-11-23 10:02:39.417233109 +0000 UTC m=+0.179421694 container init a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d085a34f-0a46-424d-a213-3ee963e0503c, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 05:02:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 441 MiB data, 1.5 GiB used, 40 GiB / 42 GiB avail; 216 KiB/s rd, 66 MiB/s wr, 333 op/s Nov 23 05:02:39 localhost podman[317999]: 2025-11-23 10:02:39.433960595 +0000 UTC m=+0.196149190 container start a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d085a34f-0a46-424d-a213-3ee963e0503c, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3) Nov 23 05:02:39 localhost dnsmasq[318034]: started, version 2.85 cachesize 150 Nov 23 05:02:39 localhost dnsmasq[318034]: DNS service limited to local subnets Nov 23 05:02:39 localhost dnsmasq[318034]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:02:39 localhost dnsmasq[318034]: warning: no upstream servers configured Nov 23 05:02:39 localhost dnsmasq-dhcp[318034]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:02:39 localhost dnsmasq[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/addn_hosts - 0 addresses Nov 23 05:02:39 localhost dnsmasq-dhcp[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/host Nov 23 05:02:39 localhost dnsmasq-dhcp[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/opts Nov 23 05:02:39 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:39.499 262301 INFO neutron.agent.dhcp.agent [None req-8f8e00ae-5573-4eab-9254-a3b99c9e61ef - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:02:37Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=93e1d812-b8c1-41c3-bf20-3b2835c30ae2, ip_allocation=immediate, mac_address=fa:16:3e:a5:86:26, 
name=tempest-PortsIpV6TestJSON-252546865, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:02:35Z, description=, dns_domain=, id=d085a34f-0a46-424d-a213-3ee963e0503c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-698169917, port_security_enabled=True, project_id=1e3b93ef61044aafb71b30163a32d7ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62317, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1937, status=ACTIVE, subnets=['b834af72-26eb-46e0-a451-4df2d1f57d57'], tags=[], tenant_id=1e3b93ef61044aafb71b30163a32d7ac, updated_at=2025-11-23T10:02:37Z, vlan_transparent=None, network_id=d085a34f-0a46-424d-a213-3ee963e0503c, port_security_enabled=True, project_id=1e3b93ef61044aafb71b30163a32d7ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d7ead8f7-80d5-4103-ab91-19b87956485a'], standard_attr_id=1944, status=DOWN, tags=[], tenant_id=1e3b93ef61044aafb71b30163a32d7ac, updated_at=2025-11-23T10:02:37Z on network d085a34f-0a46-424d-a213-3ee963e0503c#033[00m Nov 23 05:02:39 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:39.590 262301 INFO neutron.agent.dhcp.agent [None req-8cf01689-9521-4728-9480-8f805a7e597d - - - - - -] DHCP configuration for ports {'f6a734b3-b962-4fd2-843a-b791a9f2c633'} is completed#033[00m Nov 23 05:02:39 localhost podman[318054]: 2025-11-23 10:02:39.700293017 +0000 UTC m=+0.067661037 container kill a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d085a34f-0a46-424d-a213-3ee963e0503c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:39 localhost dnsmasq[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/addn_hosts - 1 addresses Nov 23 05:02:39 localhost dnsmasq-dhcp[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/host Nov 23 05:02:39 localhost dnsmasq-dhcp[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/opts Nov 23 05:02:39 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:39.845 262301 INFO neutron.agent.dhcp.agent [None req-8f8e00ae-5573-4eab-9254-a3b99c9e61ef - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:02:38Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b5dcf510-5242-4e9f-9eee-dc9881b585d3, ip_allocation=immediate, mac_address=fa:16:3e:32:dd:f8, name=tempest-PortsIpV6TestJSON-1849801911, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:02:35Z, description=, dns_domain=, id=d085a34f-0a46-424d-a213-3ee963e0503c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-698169917, port_security_enabled=True, 
project_id=1e3b93ef61044aafb71b30163a32d7ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62317, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1937, status=ACTIVE, subnets=['b834af72-26eb-46e0-a451-4df2d1f57d57'], tags=[], tenant_id=1e3b93ef61044aafb71b30163a32d7ac, updated_at=2025-11-23T10:02:37Z, vlan_transparent=None, network_id=d085a34f-0a46-424d-a213-3ee963e0503c, port_security_enabled=True, project_id=1e3b93ef61044aafb71b30163a32d7ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d7ead8f7-80d5-4103-ab91-19b87956485a'], standard_attr_id=1945, status=DOWN, tags=[], tenant_id=1e3b93ef61044aafb71b30163a32d7ac, updated_at=2025-11-23T10:02:38Z on network d085a34f-0a46-424d-a213-3ee963e0503c#033[00m Nov 23 05:02:39 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:39.970 262301 INFO neutron.agent.dhcp.agent [None req-58bf847e-9aa9-467a-a140-844ccaaa3e6e - - - - - -] DHCP configuration for ports {'93e1d812-b8c1-41c3-bf20-3b2835c30ae2'} is completed#033[00m Nov 23 05:02:40 localhost dnsmasq[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/addn_hosts - 2 addresses Nov 23 05:02:40 localhost dnsmasq-dhcp[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/host Nov 23 05:02:40 localhost podman[318093]: 2025-11-23 10:02:40.05034184 +0000 UTC m=+0.060446425 container kill a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d085a34f-0a46-424d-a213-3ee963e0503c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:40 localhost dnsmasq-dhcp[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/opts Nov 23 05:02:40 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:02:40 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:02:40 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:40.334 262301 INFO neutron.agent.dhcp.agent [None req-5f53e100-09a9-4add-afc6-189c16cd8213 - - - - - -] DHCP configuration for ports {'b5dcf510-5242-4e9f-9eee-dc9881b585d3'} is completed#033[00m Nov 23 05:02:40 localhost nova_compute[280939]: 2025-11-23 10:02:40.882 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:40 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:40.887 2 INFO neutron.agent.securitygroups_rpc [None req-abfc9448-e887-481e-8dd9-d14e7060c10b 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:41 localhost dnsmasq[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/addn_hosts - 1 addresses Nov 23 05:02:41 localhost dnsmasq-dhcp[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/host Nov 23 05:02:41 localhost 
dnsmasq-dhcp[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/opts Nov 23 05:02:41 localhost podman[318133]: 2025-11-23 10:02:41.131158367 +0000 UTC m=+0.066283495 container kill a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d085a34f-0a46-424d-a213-3ee963e0503c, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:02:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e131 do_prune osdmap full prune enabled Nov 23 05:02:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e132 e132: 6 total, 6 up, 6 in Nov 23 05:02:41 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e132: 6 total, 6 up, 6 in Nov 23 05:02:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 441 MiB data, 1.5 GiB used, 40 GiB / 42 GiB avail; 220 KiB/s rd, 67 MiB/s wr, 340 op/s Nov 23 05:02:41 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:41.822 2 INFO neutron.agent.securitygroups_rpc [None req-1e146831-4c05-4284-9c5f-ec90349e3dfc 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:42 localhost dnsmasq[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/addn_hosts - 0 addresses Nov 23 05:02:42 localhost dnsmasq-dhcp[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/host Nov 23 05:02:42 localhost dnsmasq-dhcp[318034]: read /var/lib/neutron/dhcp/d085a34f-0a46-424d-a213-3ee963e0503c/opts Nov 23 05:02:42 localhost podman[318171]: 2025-11-23 10:02:42.061108481 +0000 UTC m=+0.067785161 container kill a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d085a34f-0a46-424d-a213-3ee963e0503c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 05:02:42 localhost nova_compute[280939]: 2025-11-23 10:02:42.252 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:42 localhost systemd[1]: tmp-crun.ll1Pob.mount: Deactivated successfully. 
Nov 23 05:02:42 localhost dnsmasq[318034]: exiting on receipt of SIGTERM Nov 23 05:02:42 localhost podman[318208]: 2025-11-23 10:02:42.63380547 +0000 UTC m=+0.066402429 container kill a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d085a34f-0a46-424d-a213-3ee963e0503c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 05:02:42 localhost systemd[1]: libpod-a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430.scope: Deactivated successfully. Nov 23 05:02:42 localhost podman[318222]: 2025-11-23 10:02:42.713840108 +0000 UTC m=+0.056341308 container died a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d085a34f-0a46-424d-a213-3ee963e0503c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 05:02:42 localhost podman[318222]: 2025-11-23 10:02:42.778353927 +0000 UTC m=+0.120855177 container remove a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d085a34f-0a46-424d-a213-3ee963e0503c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:42 localhost systemd[1]: libpod-conmon-a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430.scope: Deactivated successfully. 
Nov 23 05:02:42 localhost ovn_controller[153771]: 2025-11-23T10:02:42Z|00243|binding|INFO|Releasing lport 65652c56-ae7d-4bd0-a05e-8c74f6b832bf from this chassis (sb_readonly=0) Nov 23 05:02:42 localhost kernel: device tap65652c56-ae left promiscuous mode Nov 23 05:02:42 localhost nova_compute[280939]: 2025-11-23 10:02:42.796 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:42 localhost ovn_controller[153771]: 2025-11-23T10:02:42Z|00244|binding|INFO|Setting lport 65652c56-ae7d-4bd0-a05e-8c74f6b832bf down in Southbound Nov 23 05:02:42 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:42.814 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-d085a34f-0a46-424d-a213-3ee963e0503c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d085a34f-0a46-424d-a213-3ee963e0503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e3b93ef61044aafb71b30163a32d7ac', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9a46072-a0ae-4011-8d0b-a6afd63ba28f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=65652c56-ae7d-4bd0-a05e-8c74f6b832bf) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:42 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:42.816 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 65652c56-ae7d-4bd0-a05e-8c74f6b832bf in datapath d085a34f-0a46-424d-a213-3ee963e0503c unbound from our chassis#033[00m Nov 23 05:02:42 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:42.818 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d085a34f-0a46-424d-a213-3ee963e0503c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:42 localhost nova_compute[280939]: 2025-11-23 10:02:42.818 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:42 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:42.819 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3cd07b-6f89-46ef-b0f4-eef9fb06b280]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e132 do_prune osdmap full prune enabled Nov 23 05:02:42 localhost 
neutron_sriov_agent[255165]: 2025-11-23 10:02:42.925 2 INFO neutron.agent.securitygroups_rpc [None req-1cdbc8e1-ca0b-46de-ad57-ff34c29d4e3c fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['7d97459a-8496-4621-8cc6-1521c3f526b4']#033[00m Nov 23 05:02:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e133 e133: 6 total, 6 up, 6 in Nov 23 05:02:42 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e133: 6 total, 6 up, 6 in Nov 23 05:02:43 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:43.041 262301 INFO neutron.agent.dhcp.agent [None req-8422c72f-b351-453e-8136-3a2781a47ce5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:43 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:43.043 262301 INFO neutron.agent.dhcp.agent [None req-8422c72f-b351-453e-8136-3a2781a47ce5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:43 localhost systemd[1]: var-lib-containers-storage-overlay-e0e7eeb8872da059f0d39e8932706b5b662bfc333212f2a1b68462a51815002e-merged.mount: Deactivated successfully. Nov 23 05:02:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a71989f2decc686a549952d9f2384c4e9f6c7db1427592adb90d07c0ac13b430-userdata-shm.mount: Deactivated successfully. Nov 23 05:02:43 localhost systemd[1]: run-netns-qdhcp\x2dd085a34f\x2d0a46\x2d424d\x2da213\x2d3ee963e0503c.mount: Deactivated successfully. Nov 23 05:02:43 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:43.340 2 INFO neutron.agent.securitygroups_rpc [None req-9fd2c3cf-c790-41eb-bd9f-c36dc00051cd fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['7d97459a-8496-4621-8cc6-1521c3f526b4']#033[00m Nov 23 05:02:43 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:43.380 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 441 MiB data, 1.5 GiB used, 40 GiB / 42 GiB avail; 176 KiB/s rd, 54 MiB/s wr, 272 op/s Nov 23 05:02:43 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:02:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:02:43 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:02:43 localhost nova_compute[280939]: 2025-11-23 10:02:43.660 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:43 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:02:44 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:44.911 2 INFO neutron.agent.securitygroups_rpc [None req-c88be53d-2b25-4bbf-93ce-69a72dd56cf0 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m Nov 23 05:02:45 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:45.223 2 INFO neutron.agent.securitygroups_rpc [None req-3637ef8e-b231-4ebc-b0ad-62e8972d9361 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - 
default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m Nov 23 05:02:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:45.383 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:45.384 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:02:45 localhost nova_compute[280939]: 2025-11-23 10:02:45.413 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 785 MiB data, 2.6 GiB used, 39 GiB / 42 GiB avail; 269 KiB/s rd, 101 MiB/s wr, 425 op/s Nov 23 05:02:45 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:45.649 262301 INFO neutron.agent.linux.ip_lib [None req-3e36df7b-27ce-4244-81da-76423d1683fa - - - - - -] Device tap3b53fd96-87 cannot be used as it has no MAC address#033[00m Nov 23 05:02:45 localhost nova_compute[280939]: 2025-11-23 10:02:45.670 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:45 localhost kernel: device tap3b53fd96-87 entered promiscuous mode Nov 23 05:02:45 localhost nova_compute[280939]: 2025-11-23 10:02:45.680 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:45 localhost ovn_controller[153771]: 2025-11-23T10:02:45Z|00245|binding|INFO|Claiming lport 3b53fd96-8722-4b83-a91d-d3918b7d0baa for this chassis. Nov 23 05:02:45 localhost ovn_controller[153771]: 2025-11-23T10:02:45Z|00246|binding|INFO|3b53fd96-8722-4b83-a91d-d3918b7d0baa: Claiming unknown Nov 23 05:02:45 localhost NetworkManager[5966]: [1763892165.6841] manager: (tap3b53fd96-87): new Generic device (/org/freedesktop/NetworkManager/Devices/41) Nov 23 05:02:45 localhost systemd-udevd[318259]: Network interface NamePolicy= disabled on kernel command line. 
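
The entries above carry three clocks: the syslog prefix is the host's local time (Nov 23 05:02:45), the Neutron and OVN agents log UTC (2025-11-23 10:02:45), and NetworkManager prints a raw epoch ([1763892165.6841]). A minimal sketch, assuming nothing beyond the values shown in the log, confirms all three describe the same instant and that the host runs at UTC-5:

```python
from datetime import datetime, timezone, timedelta

# Epoch timestamp taken from the NetworkManager entry above: [1763892165.6841]
epoch = 1763892165.6841

utc = datetime.fromtimestamp(epoch, tz=timezone.utc)
local = datetime.fromtimestamp(epoch, tz=timezone(timedelta(hours=-5)))  # assumed UTC-5 host timezone

print(utc.isoformat())    # 2025-11-23T10:02:45.684100+00:00 -> matches the agent timestamps
print(local.isoformat())  # 2025-11-23T05:02:45.684100-05:00 -> matches the syslog prefix "Nov 23 05:02:45"
```
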
Nov 23 05:02:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:45.690 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-46c3f560-7f49-47ef-94b9-78baf8efb062', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-46c3f560-7f49-47ef-94b9-78baf8efb062', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfdc050c-6921-47f3-9dfe-6bafcffaf199, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3b53fd96-8722-4b83-a91d-d3918b7d0baa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:45.692 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 3b53fd96-8722-4b83-a91d-d3918b7d0baa in datapath 46c3f560-7f49-47ef-94b9-78baf8efb062 bound to our chassis#033[00m Nov 23 05:02:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:45.693 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 46c3f560-7f49-47ef-94b9-78baf8efb062 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:45.694 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[34d3f76c-2bf7-40fb-a021-450dca9feabe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:45 localhost journal[229336]: ethtool ioctl error on tap3b53fd96-87: No such device Nov 23 05:02:45 localhost journal[229336]: ethtool ioctl error on tap3b53fd96-87: No such device Nov 23 05:02:45 localhost ovn_controller[153771]: 2025-11-23T10:02:45Z|00247|binding|INFO|Setting lport 3b53fd96-8722-4b83-a91d-d3918b7d0baa ovn-installed in OVS Nov 23 05:02:45 localhost ovn_controller[153771]: 2025-11-23T10:02:45Z|00248|binding|INFO|Setting lport 3b53fd96-8722-4b83-a91d-d3918b7d0baa up in Southbound Nov 23 05:02:45 localhost nova_compute[280939]: 2025-11-23 10:02:45.716 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:45 localhost journal[229336]: ethtool ioctl error on tap3b53fd96-87: No such device Nov 23 05:02:45 localhost journal[229336]: ethtool ioctl error on tap3b53fd96-87: No such device Nov 23 05:02:45 localhost journal[229336]: ethtool ioctl error on tap3b53fd96-87: No such device Nov 23 05:02:45 localhost journal[229336]: ethtool ioctl error on tap3b53fd96-87: No such device Nov 23 05:02:45 localhost journal[229336]: ethtool ioctl error on tap3b53fd96-87: No such device Nov 23 05:02:45 
localhost journal[229336]: ethtool ioctl error on tap3b53fd96-87: No such device Nov 23 05:02:45 localhost nova_compute[280939]: 2025-11-23 10:02:45.753 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:45 localhost nova_compute[280939]: 2025-11-23 10:02:45.779 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:45 localhost nova_compute[280939]: 2025-11-23 10:02:45.883 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:46 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:46.348 2 INFO neutron.agent.securitygroups_rpc [None req-c48e4a17-f990-4551-a604-7a1f5baad78f fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m Nov 23 05:02:46 localhost podman[318330]: Nov 23 05:02:46 localhost podman[318330]: 2025-11-23 10:02:46.673193321 +0000 UTC m=+0.108765425 container create 0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-46c3f560-7f49-47ef-94b9-78baf8efb062, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:46 localhost systemd[1]: Started libpod-conmon-0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3.scope. Nov 23 05:02:46 localhost podman[318330]: 2025-11-23 10:02:46.614026796 +0000 UTC m=+0.049598910 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:02:46 localhost systemd[1]: Started libcrun container. 
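
The Port_Binding update above shows why the metadata agent tears the namespace down for datapath 46c3f560-7f49-47ef-94b9-78baf8efb062: the only port bound to this chassis is the DHCP port (device_owner network:dhcp), so no metadata port is found. A sketch of that decision over the external_ids copied from the logged row; the network:distributed owner used for the check is an assumption, and the agent's real selection logic lives in _get_provision_params (path shown in the log):

```python
# external_ids copied (abridged) from the Port_Binding row logged above.
external_ids = {
    "neutron:cidrs": "2001:db8::2/64",
    "neutron:device_owner": "network:dhcp",
    "neutron:network_name": "neutron-46c3f560-7f49-47ef-94b9-78baf8efb062",
    "neutron:project_id": "6cc558ab5ea444ca89055d39fcd5b762",
}

def looks_like_metadata_port(ext_ids):
    # Illustrative check only: an OVN metadata port would normally carry
    # device_owner network:distributed rather than network:dhcp (assumption).
    return ext_ids.get("neutron:device_owner") == "network:distributed"

print(looks_like_metadata_port(external_ids))  # False -> namespace gets torn down
```
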
Nov 23 05:02:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa59b8bb6672052af1f8e26c62183df1ba38beb8a371994b2085065be321eb2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:02:46 localhost podman[318330]: 2025-11-23 10:02:46.750865386 +0000 UTC m=+0.186437520 container init 0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-46c3f560-7f49-47ef-94b9-78baf8efb062, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:46 localhost podman[318330]: 2025-11-23 10:02:46.759313506 +0000 UTC m=+0.194885620 container start 0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-46c3f560-7f49-47ef-94b9-78baf8efb062, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:02:46 localhost dnsmasq[318348]: started, version 2.85 cachesize 150 Nov 23 05:02:46 localhost dnsmasq[318348]: DNS service limited to local subnets Nov 23 05:02:46 localhost dnsmasq[318348]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:02:46 localhost dnsmasq[318348]: warning: no upstream servers configured Nov 23 05:02:46 localhost dnsmasq-dhcp[318348]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:02:46 localhost dnsmasq[318348]: read /var/lib/neutron/dhcp/46c3f560-7f49-47ef-94b9-78baf8efb062/addn_hosts - 0 addresses Nov 23 05:02:46 localhost dnsmasq-dhcp[318348]: read /var/lib/neutron/dhcp/46c3f560-7f49-47ef-94b9-78baf8efb062/host Nov 23 05:02:46 localhost dnsmasq-dhcp[318348]: read /var/lib/neutron/dhcp/46c3f560-7f49-47ef-94b9-78baf8efb062/opts Nov 23 05:02:46 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:46.894 2 INFO neutron.agent.securitygroups_rpc [None req-69aab5f7-414f-4919-8a4c-b4fcd82ff545 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m Nov 23 05:02:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:46.940 262301 INFO neutron.agent.dhcp.agent [None req-a56bac72-0ac2-405c-96b4-c454d6b0d08d - - - - - -] DHCP configuration for ports {'c685a04e-9f39-4cb4-8770-8082fccb41f9'} is completed#033[00m Nov 23 05:02:46 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:46.949 2 INFO neutron.agent.securitygroups_rpc [None req-19349ae1-ccba-4615-bdb3-c092a3aa234c 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:47 localhost podman[239764]: time="2025-11-23T10:02:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" 
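
dnsmasq inside the neutron-dnsmasq-qdhcp-46c3f560-... container reports reading its addn_hosts, host and opts files and finding 0 addresses. A minimal sketch for inspecting those files on the compute host, assuming only the paths the dnsmasq entries above print (reading them requires root, since they are agent-managed state under /var/lib/neutron):

```python
import pathlib

# Network UUID and file names taken from the dnsmasq entries above.
net = "46c3f560-7f49-47ef-94b9-78baf8efb062"
base = pathlib.Path("/var/lib/neutron/dhcp") / net

for name in ("addn_hosts", "host", "opts"):
    path = base / name
    try:
        entries = [line for line in path.read_text().splitlines() if line.strip()]
        print(f"{path}: {len(entries)} entries")
    except OSError as exc:  # needs root on the compute host
        print(f"{path}: unreadable ({exc})")
```
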
Nov 23 05:02:47 localhost podman[239764]: @ - - [23/Nov/2025:10:02:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156316 "" "Go-http-client/1.1" Nov 23 05:02:47 localhost podman[239764]: @ - - [23/Nov/2025:10:02:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19201 "" "Go-http-client/1.1" Nov 23 05:02:47 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:47.190 2 INFO neutron.agent.securitygroups_rpc [None req-a4966cd3-a78e-4723-9b2c-0105f97058c4 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m Nov 23 05:02:47 localhost nova_compute[280939]: 2025-11-23 10:02:47.294 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 785 MiB data, 2.6 GiB used, 39 GiB / 42 GiB avail; 92 KiB/s rd, 43 MiB/s wr, 149 op/s Nov 23 05:02:47 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:47.548 2 INFO neutron.agent.securitygroups_rpc [None req-e2113fc4-8297-4e83-8bfc-91b8d2f12d0b fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m Nov 23 05:02:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e133 do_prune osdmap full prune enabled Nov 23 05:02:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e134 e134: 6 total, 6 up, 6 in Nov 23 05:02:47 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e134: 6 total, 6 up, 6 in Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. 
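
The podman service entries above record REST calls such as GET /v4.9.3/libpod/containers/json?all=true returning 200. A small sketch issuing the same request over the libpod unix socket; the endpoint path is taken from the log, while the socket location /run/podman/podman.sock is an assumption about this host:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over an AF_UNIX socket, enough for the local libpod API."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")  # path as logged above
containers = json.loads(conn.getresponse().read())
for c in containers:
    print(c["Id"][:12], c.get("Names"), c.get("State"))
```
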
Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:47.976635) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52 Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892167976671, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1111, "num_deletes": 259, "total_data_size": 1044601, "memory_usage": 1069200, "flush_reason": "Manual Compaction"} Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892167982794, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 813419, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28398, "largest_seqno": 29508, "table_properties": {"data_size": 808882, "index_size": 2072, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11666, "raw_average_key_size": 21, "raw_value_size": 799296, "raw_average_value_size": 1482, "num_data_blocks": 90, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892107, "oldest_key_time": 1763892107, "file_creation_time": 1763892167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}} Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 6195 microseconds, and 2481 cpu microseconds. Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
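
The ceph-mon rocksdb entries embed machine-readable events after the EVENT_LOG_v1 marker; the payload is plain JSON and can be pulled out of a journal line directly. A minimal sketch using the flush_started event logged above:

```python
import json

# One EVENT_LOG_v1 line copied (abridged) from the ceph-mon/rocksdb entries above.
line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1763892167976671, "job": 29, '
        '"event": "flush_started", "num_memtables": 1, "num_entries": 1111, '
        '"num_deletes": 259, "total_data_size": 1044601, '
        '"memory_usage": 1069200, "flush_reason": "Manual Compaction"}')

payload = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
print(payload["event"], payload["job"], payload["num_entries"], payload["num_deletes"])
# flush_started 29 1111 259
```
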
Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:47.982829) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 813419 bytes OK Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:47.982846) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:47.985654) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:47.985669) EVENT_LOG_v1 {"time_micros": 1763892167985665, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:47.985682) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1039313, prev total WAL file size 1039313, number of live WAL files 2. Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:47.986585) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303037' seq:72057594037927935, type:22 .. '6D6772737461740034323539' seq:0, type:0; will stop at (end) Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(794KB)], [51(15MB)] Nov 23 05:02:47 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892167986647, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16844046, "oldest_snapshot_seqno": -1} Nov 23 05:02:48 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:48.004 2 INFO neutron.agent.securitygroups_rpc [None req-11137031-a253-41b4-b5b7-0c84953c1da3 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 12452 keys, 14927850 bytes, temperature: kUnknown Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892168066187, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14927850, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14860310, "index_size": 35311, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31173, "raw_key_size": 337102, "raw_average_key_size": 27, 
"raw_value_size": 14651319, "raw_average_value_size": 1176, "num_data_blocks": 1309, "num_entries": 12452, "num_filter_entries": 12452, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}} Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:48.066611) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14927850 bytes Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:48.069265) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.3 rd, 187.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 15.3 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(39.1) write-amplify(18.4) OK, records in: 12959, records dropped: 507 output_compression: NoCompression Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:48.069295) EVENT_LOG_v1 {"time_micros": 1763892168069283, "job": 30, "event": "compaction_finished", "compaction_time_micros": 79735, "compaction_time_cpu_micros": 47708, "output_level": 6, "num_output_files": 1, "total_output_size": 14927850, "num_input_records": 12959, "num_output_records": 12452, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892168069536, "job": 30, "event": "table_file_deletion", "file_number": 53} Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892168071885, "job": 30, "event": "table_file_deletion", "file_number": 51} Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:47.986459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:48.071952) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:48.071960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:48.071963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:48.071966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:02:48 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:02:48.071969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:02:48 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:48.638 2 INFO neutron.agent.securitygroups_rpc [None req-edba85f2-3d5d-4fd2-9504-24d71c7cf655 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m Nov 23 05:02:49 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:49.207 2 INFO neutron.agent.securitygroups_rpc [None req-d908f8b2-2302-4d45-805c-4d35b6de9b08 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m Nov 23 05:02:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 150 KiB/s rd, 91 MiB/s wr, 255 op/s Nov 23 05:02:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:02:49 localhost systemd[1]: tmp-crun.O716DR.mount: Deactivated successfully. 
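
The JOB 30 summary above reports read-write-amplify(39.1), write-amplify(18.4) and 211.3 rd / 187.2 wr MB/sec. Those figures follow directly from the byte counts in the surrounding EVENT_LOG entries (flush table #53 = 813419 B, input_data_size = 16844046 B, output table #54 = 14927850 B, compaction_time_micros = 79735). A quick check:

```python
# Figures taken from the rocksdb JOB 29 / JOB 30 entries above.
flush_l0_bytes    = 813_419                 # table #53, the Level-0 flush output
existing_l6_bytes = 16_844_046 - 813_419    # input_data_size minus the L0 file (file #51)
output_l6_bytes   = 14_927_850              # table #54
compaction_us     = 79_735

write_amp      = output_l6_bytes / flush_l0_bytes
read_write_amp = (flush_l0_bytes + existing_l6_bytes + output_l6_bytes) / flush_l0_bytes
read_mb_s      = (flush_l0_bytes + existing_l6_bytes) / compaction_us  # bytes/us, printed as MB/sec in the log
write_mb_s     = output_l6_bytes / compaction_us

print(f"write-amplify      {write_amp:.1f}")        # ~18.4, as logged
print(f"read-write-amplify {read_write_amp:.1f}")   # ~39.1, as logged
print(f"rd {read_mb_s:.1f} MB/s, wr {write_mb_s:.1f} MB/s")  # ~211.3 rd, ~187.2 wr
```
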
Nov 23 05:02:49 localhost podman[318349]: 2025-11-23 10:02:49.904102712 +0000 UTC m=+0.088469198 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:49 localhost podman[318349]: 2025-11-23 10:02:49.912364367 +0000 UTC m=+0.096730883 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2) Nov 23 05:02:49 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:02:49 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:49.951 2 INFO neutron.agent.securitygroups_rpc [None req-765ea533-9294-4729-95dc-8b0b8371a481 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:02:50 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:50.129 2 INFO neutron.agent.securitygroups_rpc [None req-838089d2-b106-4cd8-a2d9-7d5c71329675 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m Nov 23 05:02:50 localhost nova_compute[280939]: 2025-11-23 10:02:50.884 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 141 KiB/s rd, 86 MiB/s wr, 240 op/s Nov 23 05:02:51 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:51.855 2 INFO neutron.agent.securitygroups_rpc [None req-8ada9fea-3f3e-487c-9d7c-c67e94466d69 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['2de10e3b-e1e6-47ac-8eeb-13eb3642fef8']#033[00m Nov 23 05:02:52 localhost nova_compute[280939]: 2025-11-23 10:02:52.331 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:02:53 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1480543529' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:02:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:02:53 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1480543529' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:02:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:02:53 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1652389115' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:02:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:02:53 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1652389115' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:02:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:02:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:02:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:02:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:02:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:02:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:02:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 120 KiB/s rd, 73 MiB/s wr, 204 op/s Nov 23 05:02:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e134 do_prune osdmap full prune enabled Nov 23 05:02:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e135 e135: 6 total, 6 up, 6 in Nov 23 05:02:54 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e135: 6 total, 6 up, 6 in Nov 23 05:02:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:02:54 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1512625466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:02:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:02:54 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1512625466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:02:54 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:54.309 2 INFO neutron.agent.securitygroups_rpc [None req-70f0f8e2-9250-4661-ae6f-f58e5f669345 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['dc864c5c-de53-475b-960e-083ffe4e3e6b']#033[00m Nov 23 05:02:54 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:54.386 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:02:55 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:55.058 2 INFO neutron.agent.securitygroups_rpc [None req-ebacb72f-43a4-4e7f-beed-d94b86f532ab fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['dc864c5c-de53-475b-960e-083ffe4e3e6b']#033[00m Nov 23 05:02:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 833 MiB used, 41 GiB / 42 GiB avail; 151 KiB/s rd, 48 MiB/s wr, 263 op/s Nov 23 05:02:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:02:55 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1427395697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:02:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:02:55 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1427395697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:02:55 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:55.744 2 INFO neutron.agent.securitygroups_rpc [None req-bb083733-91fb-4356-a91b-0dce7c35cb96 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d9259f0b-7c30-4dce-b81e-e0f698e442c7']#033[00m Nov 23 05:02:55 localhost nova_compute[280939]: 2025-11-23 10:02:55.885 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e135 do_prune osdmap full prune enabled Nov 23 05:02:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e136 e136: 6 total, 6 up, 6 in Nov 23 05:02:56 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e136: 6 total, 6 up, 6 in Nov 23 05:02:56 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:56.617 2 INFO neutron.agent.securitygroups_rpc [None req-79d36424-581f-4db8-8c43-da9cef64debb fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['2b80d24e-7954-4669-8041-3d535b2f9be2']#033[00m Nov 23 05:02:56 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:56.757 2 INFO neutron.agent.securitygroups_rpc [None req-aea3005f-d734-4d7a-a8c4-fa6242ccbee5 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['2b80d24e-7954-4669-8041-3d535b2f9be2']#033[00m Nov 23 05:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
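
The audit entries above show client.openstack repeatedly dispatching mon_commands with prefixes "df" and "osd pool get-quota" against the volumes pool (Cinder's periodic capacity polling). The same calls can be reproduced from the CLI; a minimal sketch, assuming a ceph client and a usable keyring are available on the host issuing them:

```python
import json
import subprocess

def ceph_json(*args):
    # CLI equivalents of the audited mon_commands; a reachable cluster and a
    # keyring (the log shows entity='client.openstack') are assumed.
    out = subprocess.run(("ceph", *args, "--format", "json"),
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

df = ceph_json("df")                                      # {"prefix":"df","format":"json"}
quota = ceph_json("osd", "pool", "get-quota", "volumes")  # {"prefix":"osd pool get-quota", ...}
print(sorted(df), quota)
```
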
Nov 23 05:02:56 localhost podman[318368]: 2025-11-23 10:02:56.900833081 +0000 UTC m=+0.088880652 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 23 05:02:56 localhost podman[318368]: 2025-11-23 10:02:56.912290984 +0000 UTC m=+0.100338545 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:02:56 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:02:56 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:56.960 262301 INFO neutron.agent.linux.ip_lib [None req-02515bfb-7d11-48b2-85f8-e7771f5dcd1b - - - - - -] Device tap6d96723c-b0 cannot be used as it has no MAC address#033[00m Nov 23 05:02:57 localhost nova_compute[280939]: 2025-11-23 10:02:57.017 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:57 localhost kernel: device tap6d96723c-b0 entered promiscuous mode Nov 23 05:02:57 localhost NetworkManager[5966]: [1763892177.0284] manager: (tap6d96723c-b0): new Generic device (/org/freedesktop/NetworkManager/Devices/42) Nov 23 05:02:57 localhost systemd-udevd[318423]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:02:57 localhost ovn_controller[153771]: 2025-11-23T10:02:57Z|00249|binding|INFO|Claiming lport 6d96723c-b036-4397-a397-adefa76788a7 for this chassis. Nov 23 05:02:57 localhost ovn_controller[153771]: 2025-11-23T10:02:57Z|00250|binding|INFO|6d96723c-b036-4397-a397-adefa76788a7: Claiming unknown Nov 23 05:02:57 localhost nova_compute[280939]: 2025-11-23 10:02:57.036 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:57 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:57.058 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-872c7ce8-7241-43b6-8a25-cee4e338b409', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-872c7ce8-7241-43b6-8a25-cee4e338b409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d0aa997cf0e428b8c7e20c806754329', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cc2abbe-eba4-49a4-8e30-51ca63f880d2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6d96723c-b036-4397-a397-adefa76788a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:57 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:57.060 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 6d96723c-b036-4397-a397-adefa76788a7 in datapath 872c7ce8-7241-43b6-8a25-cee4e338b409 bound to our chassis#033[00m Nov 23 05:02:57 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:57.065 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 872c7ce8-7241-43b6-8a25-cee4e338b409 or it has no MAC or IP addresses configured, tearing the namespace down if needed 
_get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:02:57 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:57.066 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[24b05ccb-953a-40e1-b63c-6886988e7727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:57 localhost podman[318370]: 2025-11-23 10:02:57.068209132 +0000 UTC m=+0.247171132 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 23 05:02:57 localhost journal[229336]: ethtool ioctl error on tap6d96723c-b0: No such device Nov 23 05:02:57 localhost ovn_controller[153771]: 2025-11-23T10:02:57Z|00251|binding|INFO|Setting lport 6d96723c-b036-4397-a397-adefa76788a7 ovn-installed in OVS Nov 23 05:02:57 localhost ovn_controller[153771]: 2025-11-23T10:02:57Z|00252|binding|INFO|Setting lport 6d96723c-b036-4397-a397-adefa76788a7 up in Southbound Nov 23 05:02:57 localhost nova_compute[280939]: 2025-11-23 10:02:57.077 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:57 localhost journal[229336]: ethtool ioctl error on tap6d96723c-b0: No such device Nov 23 05:02:57 localhost podman[318369]: 2025-11-23 10:02:57.033581914 +0000 UTC m=+0.215352411 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', 
'--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 05:02:57 localhost journal[229336]: ethtool ioctl error on tap6d96723c-b0: No such device Nov 23 05:02:57 localhost journal[229336]: ethtool ioctl error on tap6d96723c-b0: No such device Nov 23 05:02:57 localhost journal[229336]: ethtool ioctl error on tap6d96723c-b0: No such device Nov 23 05:02:57 localhost journal[229336]: ethtool ioctl error on tap6d96723c-b0: No such device Nov 23 05:02:57 localhost journal[229336]: ethtool ioctl error on tap6d96723c-b0: No such device Nov 23 05:02:57 localhost nova_compute[280939]: 2025-11-23 10:02:57.114 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:57 localhost podman[318370]: 2025-11-23 10:02:57.115365656 +0000 UTC m=+0.294327696 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:02:57 localhost journal[229336]: ethtool ioctl error on tap6d96723c-b0: No such device Nov 23 05:02:57 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 05:02:57 localhost nova_compute[280939]: 2025-11-23 10:02:57.145 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:57 localhost podman[318369]: 2025-11-23 10:02:57.167663658 +0000 UTC m=+0.349434155 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:02:57 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
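
The node_exporter container above runs with --collector.systemd and a unit-include filter of (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service (the journal shows the backslash doubled inside the config_data dump). A quick check of which unit names such a pattern admits, treating it as an anchored match as the exporter's include filter is commonly applied; the unit names below are illustrative only:

```python
import re

# Pattern copied from the node_exporter command line logged above,
# with the journal's doubled backslash reduced to a single one.
include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ("openvswitch.service", "ovsdb-server.service", "virtqemud.service",
             "rsyslog.service", "edpm_nova_compute.service", "sshd.service"):
    print(f"{unit:30} {'kept' if include.fullmatch(unit) else 'dropped'}")
```
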
Nov 23 05:02:57 localhost nova_compute[280939]: 2025-11-23 10:02:57.334 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 833 MiB used, 41 GiB / 42 GiB avail; 93 KiB/s rd, 4.4 KiB/s wr, 157 op/s Nov 23 05:02:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:02:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e136 do_prune osdmap full prune enabled Nov 23 05:02:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e137 e137: 6 total, 6 up, 6 in Nov 23 05:02:57 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e137: 6 total, 6 up, 6 in Nov 23 05:02:57 localhost podman[318505]: Nov 23 05:02:57 localhost podman[318505]: 2025-11-23 10:02:57.97558376 +0000 UTC m=+0.101363477 container create 524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 05:02:58 localhost systemd[1]: Started libpod-conmon-524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db.scope. Nov 23 05:02:58 localhost podman[318505]: 2025-11-23 10:02:57.923746152 +0000 UTC m=+0.049525909 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:02:58 localhost systemd[1]: Started libcrun container. 
Nov 23 05:02:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65aed356b94c04e0ee5dc01534a551a7d7307456d1e7160f2b263b0b2cd2b49c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:02:58 localhost podman[318505]: 2025-11-23 10:02:58.046768795 +0000 UTC m=+0.172548512 container init 524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:02:58 localhost podman[318505]: 2025-11-23 10:02:58.055104342 +0000 UTC m=+0.180884059 container start 524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:02:58 localhost dnsmasq[318523]: started, version 2.85 cachesize 150 Nov 23 05:02:58 localhost dnsmasq[318523]: DNS service limited to local subnets Nov 23 05:02:58 localhost dnsmasq[318523]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:02:58 localhost dnsmasq[318523]: warning: no upstream servers configured Nov 23 05:02:58 localhost dnsmasq-dhcp[318523]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:02:58 localhost dnsmasq[318523]: read /var/lib/neutron/dhcp/872c7ce8-7241-43b6-8a25-cee4e338b409/addn_hosts - 0 addresses Nov 23 05:02:58 localhost dnsmasq-dhcp[318523]: read /var/lib/neutron/dhcp/872c7ce8-7241-43b6-8a25-cee4e338b409/host Nov 23 05:02:58 localhost dnsmasq-dhcp[318523]: read /var/lib/neutron/dhcp/872c7ce8-7241-43b6-8a25-cee4e338b409/opts Nov 23 05:02:58 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:58.251 262301 INFO neutron.agent.dhcp.agent [None req-6c2d3910-e85f-4146-9104-bdfd8d1cc5eb - - - - - -] DHCP configuration for ports {'6ecc657a-79ff-4ad6-bca4-c61a707c8bf4'} is completed#033[00m Nov 23 05:02:58 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:58.584 2 INFO neutron.agent.securitygroups_rpc [None req-f9017110-ec89-4835-91db-9aa9b814a12e 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d9259f0b-7c30-4dce-b81e-e0f698e442c7', '469976a2-fa36-45e6-842e-95bc93db1438']#033[00m Nov 23 05:02:58 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:58.763 2 INFO neutron.agent.securitygroups_rpc [None req-17703c58-f506-4a75-8387-af7c0c3c8d74 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m Nov 23 05:02:59 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:59.192 2 INFO 
neutron.agent.securitygroups_rpc [None req-133cea40-228c-4af6-8f6b-d4d2d5a2eb51 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m Nov 23 05:02:59 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:02:59.319 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:02:59 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:59.345 2 INFO neutron.agent.securitygroups_rpc [None req-75de0287-b544-4fab-ad82-848c9edeaf4f fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m Nov 23 05:02:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 145 MiB data, 817 MiB used, 41 GiB / 42 GiB avail; 168 KiB/s rd, 8.7 KiB/s wr, 270 op/s Nov 23 05:02:59 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:59.536 2 INFO neutron.agent.securitygroups_rpc [None req-94b4dea1-0a52-41d4-90a7-1f1aefba76c5 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['469976a2-fa36-45e6-842e-95bc93db1438']#033[00m Nov 23 05:02:59 localhost ovn_controller[153771]: 2025-11-23T10:02:59Z|00253|binding|INFO|Releasing lport 6d96723c-b036-4397-a397-adefa76788a7 from this chassis (sb_readonly=0) Nov 23 05:02:59 localhost kernel: device tap6d96723c-b0 left promiscuous mode Nov 23 05:02:59 localhost ovn_controller[153771]: 2025-11-23T10:02:59Z|00254|binding|INFO|Setting lport 6d96723c-b036-4397-a397-adefa76788a7 down in Southbound Nov 23 05:02:59 localhost nova_compute[280939]: 2025-11-23 10:02:59.550 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:59 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:59.563 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-872c7ce8-7241-43b6-8a25-cee4e338b409', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-872c7ce8-7241-43b6-8a25-cee4e338b409', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d0aa997cf0e428b8c7e20c806754329', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cc2abbe-eba4-49a4-8e30-51ca63f880d2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6d96723c-b036-4397-a397-adefa76788a7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:02:59 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:59.565 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 
6d96723c-b036-4397-a397-adefa76788a7 in datapath 872c7ce8-7241-43b6-8a25-cee4e338b409 unbound from our chassis#033[00m Nov 23 05:02:59 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:59.572 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 872c7ce8-7241-43b6-8a25-cee4e338b409, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:02:59 localhost ovn_metadata_agent[159410]: 2025-11-23 10:02:59.573 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[5206f506-3173-494b-a5a8-6455bd6cb56a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:02:59 localhost nova_compute[280939]: 2025-11-23 10:02:59.578 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:02:59 localhost neutron_sriov_agent[255165]: 2025-11-23 10:02:59.730 2 INFO neutron.agent.securitygroups_rpc [None req-d3ac2373-b722-49eb-839d-a87ede7d08ac fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.141 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:03:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1122561443' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:03:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:03:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1122561443' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:03:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e137 do_prune osdmap full prune enabled Nov 23 05:03:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e138 e138: 6 total, 6 up, 6 in Nov 23 05:03:00 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e138: 6 total, 6 up, 6 in Nov 23 05:03:00 localhost systemd[1]: tmp-crun.nYGFpj.mount: Deactivated successfully. 
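The "Matched UPDATE: PortBindingUpdatedEvent(...) ... matches .../ovsdbapp/backend/ovs_idl/event.py:43" entry above is ovsdbapp dispatching a row event on the Southbound Port_Binding table, which is how the metadata agent learns that the DHCP port was unbound from this chassis. A minimal sketch of that event pattern, using the real ovsdbapp base class but an illustrative handler body:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Subscribe to 'update' events on Port_Binding, with no extra conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' holds the previous column values (e.g. up=[True], chassis=[]),
            # so a handler can tell a bind from an unbind for this chassis.
            print('Port_Binding %s updated' % row.logical_port)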
Nov 23 05:03:00 localhost podman[318543]: 2025-11-23 10:03:00.541497769 +0000 UTC m=+0.068582176 container kill 524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:03:00 localhost dnsmasq[318523]: read /var/lib/neutron/dhcp/872c7ce8-7241-43b6-8a25-cee4e338b409/addn_hosts - 0 addresses Nov 23 05:03:00 localhost dnsmasq-dhcp[318523]: read /var/lib/neutron/dhcp/872c7ce8-7241-43b6-8a25-cee4e338b409/host Nov 23 05:03:00 localhost dnsmasq-dhcp[318523]: read /var/lib/neutron/dhcp/872c7ce8-7241-43b6-8a25-cee4e338b409/opts Nov 23 05:03:00 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:00.559 2 INFO neutron.agent.securitygroups_rpc [None req-6ed9d977-5486-448d-8f02-a426fdb94759 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent [None req-369e27a5-2c4b-4043-aa38-a6457040a9d0 - - - - - -] Unable to reload_allocations dhcp for 872c7ce8-7241-43b6-8a25-cee4e338b409.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap6d96723c-b0 not found in namespace qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409. Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR 
neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent return fut.result() Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent return self.__get_result() Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent raise self._exception Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent result = fn(*args, 
**kwargs) Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap6d96723c-b0 not found in namespace qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409. Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.569 262301 ERROR neutron.agent.dhcp.agent #033[00m Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.575 262301 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.792 262301 INFO neutron.agent.dhcp.agent [None req-3f0c2123-1237-4583-86c1-f160840f36b3 - - - - - -] All active networks have been fetched through RPC.#033[00m Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.793 262301 INFO neutron.agent.dhcp.agent [-] Starting network a023776d-3b12-49cd-b7a4-1760b46e8209 dhcp configuration#033[00m Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.794 262301 INFO neutron.agent.dhcp.agent [-] Finished network a023776d-3b12-49cd-b7a4-1760b46e8209 dhcp configuration#033[00m Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.794 262301 INFO neutron.agent.dhcp.agent [-] Starting network 872c7ce8-7241-43b6-8a25-cee4e338b409 dhcp configuration#033[00m Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.797 262301 INFO neutron.agent.dhcp.agent [-] Starting network ce02107f-0aea-4105-bc1d-513d9d7aa59c dhcp configuration#033[00m Nov 23 05:03:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:00.799 262301 INFO neutron.agent.dhcp.agent [-] Finished network ce02107f-0aea-4105-bc1d-513d9d7aa59c dhcp configuration#033[00m Nov 23 05:03:00 localhost nova_compute[280939]: 2025-11-23 10:03:00.886 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:00 localhost dnsmasq[318523]: exiting on receipt of SIGTERM Nov 23 05:03:00 localhost podman[318573]: 2025-11-23 10:03:00.973454258 +0000 UTC m=+0.075093806 container kill 524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:03:00 localhost systemd[1]: 
libpod-524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db.scope: Deactivated successfully. Nov 23 05:03:01 localhost podman[318585]: 2025-11-23 10:03:01.045427787 +0000 UTC m=+0.060731253 container died 524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 05:03:01 localhost podman[318585]: 2025-11-23 10:03:01.090402455 +0000 UTC m=+0.105705891 container cleanup 524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:03:01 localhost systemd[1]: libpod-conmon-524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db.scope: Deactivated successfully. Nov 23 05:03:01 localhost podman[318592]: 2025-11-23 10:03:01.170247596 +0000 UTC m=+0.168738784 container remove 524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2) Nov 23 05:03:01 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:01.265 262301 INFO neutron.agent.dhcp.agent [None req-6f3a6090-7417-4d89-8527-12ff985c0a12 - - - - - -] Finished network 872c7ce8-7241-43b6-8a25-cee4e338b409 dhcp configuration#033[00m Nov 23 05:03:01 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:01.265 262301 INFO neutron.agent.dhcp.agent [None req-3f0c2123-1237-4583-86c1-f160840f36b3 - - - - - -] Synchronizing state complete#033[00m Nov 23 05:03:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 145 MiB data, 817 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s Nov 23 05:03:01 localhost systemd[1]: tmp-crun.tqgFhR.mount: Deactivated successfully. Nov 23 05:03:01 localhost systemd[1]: var-lib-containers-storage-overlay-65aed356b94c04e0ee5dc01534a551a7d7307456d1e7160f2b263b0b2cd2b49c-merged.mount: Deactivated successfully. Nov 23 05:03:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-524830d268523e23e08a470eff1214a0fe4c24bc83e05c28fb9a1570f6b261db-userdata-shm.mount: Deactivated successfully. Nov 23 05:03:01 localhost systemd[1]: run-netns-qdhcp\x2d872c7ce8\x2d7241\x2d43b6\x2d8a25\x2dcee4e338b409.mount: Deactivated successfully. 
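The reload_allocations traceback above passes through tenacity and oslo.privsep frames: the privileged list_ip_routes call is retry-wrapped, and because NetworkInterfaceNotFound is not the exception type the retry predicate matches, it is re-raised to the DHCP agent, which then falls back to a full state resync. A simplified sketch of that propagation, with the exception class and retry policy as stand-ins:

    import tenacity

    class NetworkInterfaceNotFound(RuntimeError):
        """Stand-in for neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound."""

    @tenacity.retry(
        retry=tenacity.retry_if_exception_type(OSError),  # only OSError would be retried
        stop=tenacity.stop_after_attempt(3),
        wait=tenacity.wait_fixed(1),
    )
    def list_ip_routes(namespace):
        # The tap device backing the qdhcp namespace is already gone, so the
        # helper raises; the predicate does not match, so tenacity re-raises.
        raise NetworkInterfaceNotFound(namespace)

    # list_ip_routes('qdhcp-872c7ce8-7241-43b6-8a25-cee4e338b409')  # raises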
Nov 23 05:03:01 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:01.560 2 INFO neutron.agent.securitygroups_rpc [None req-d75c0dda-e48d-4259-9ad6-e58217c5f7b4 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m Nov 23 05:03:01 localhost nova_compute[280939]: 2025-11-23 10:03:01.621 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:02 localhost nova_compute[280939]: 2025-11-23 10:03:02.368 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:02 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:02.820 2 INFO neutron.agent.securitygroups_rpc [None req-1875a888-d52f-4402-8541-5b1512187260 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['c4aad9b2-b8cd-4803-b28f-3e773406a427']#033[00m Nov 23 05:03:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e138 do_prune osdmap full prune enabled Nov 23 05:03:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e139 e139: 6 total, 6 up, 6 in Nov 23 05:03:02 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e139: 6 total, 6 up, 6 in Nov 23 05:03:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:03:03 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3154268914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:03:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:03:03 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3154268914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:03:03 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:03.387 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 145 MiB data, 817 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s Nov 23 05:03:03 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:03.490 2 INFO neutron.agent.securitygroups_rpc [None req-d59d1e53-d1c4-451d-80ab-ba4648fa7a20 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['cb577a71-e41d-409b-b673-d883cbdab535']#033[00m Nov 23 05:03:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e139 do_prune osdmap full prune enabled Nov 23 05:03:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e140 e140: 6 total, 6 up, 6 in Nov 23 05:03:03 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e140: 6 total, 6 up, 6 in Nov 23 05:03:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
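The mon_command audit entries above ({"prefix":"df"} and {"prefix":"osd pool get-quota","pool":"volumes"} from entity='client.openstack') show an OpenStack client issuing monitor commands as JSON over librados. A sketch of sending the same commands with the rados Python binding; the conffile path and client name are assumptions for the example:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        # Same JSON command structure that mon.np0005532584 logs as "dispatch".
        ret, out, err = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b'')
        ret, out, err = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b'')
    finally:
        cluster.shutdown()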
Nov 23 05:03:04 localhost podman[318617]: 2025-11-23 10:03:04.895809793 +0000 UTC m=+0.081704210 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.) 
Nov 23 05:03:04 localhost podman[318617]: 2025-11-23 10:03:04.933392541 +0000 UTC m=+0.119286978 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64) Nov 23 05:03:04 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
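The podman health_status/exec_died records above carry the EDPM config_data label as a Python-literal dict (note the single quotes and True). A small sketch, assuming that label value has already been read out as a string, of extracting the healthcheck definition without eval():

    import ast

    def healthcheck_from_config_data(label_value: str) -> dict:
        # ast.literal_eval parses dict/str/bool literals only, so an untrusted
        # label cannot execute code the way eval() could.
        config = ast.literal_eval(label_value)
        # e.g. {'test': '/openstack/healthcheck ...', 'mount': '/var/lib/openstack/healthchecks/...'}
        return config.get('healthcheck', {})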
Nov 23 05:03:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 3.5 KiB/s wr, 101 op/s Nov 23 05:03:05 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:05.868 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:05 localhost nova_compute[280939]: 2025-11-23 10:03:05.920 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:06 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:06.636 2 INFO neutron.agent.securitygroups_rpc [None req-3cc8ae58-6898-4e12-983b-8f9645bd7e63 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['a9134bcb-5194-43f8-ac1f-875c59af23f5', '736c2f34-1be4-42fe-9283-c00aaa4f421b', 'cb577a71-e41d-409b-b673-d883cbdab535']#033[00m Nov 23 05:03:06 localhost openstack_network_exporter[241732]: ERROR 10:03:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:03:06 localhost openstack_network_exporter[241732]: ERROR 10:03:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:03:06 localhost openstack_network_exporter[241732]: ERROR 10:03:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:03:06 localhost openstack_network_exporter[241732]: ERROR 10:03:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:03:06 localhost openstack_network_exporter[241732]: Nov 23 05:03:06 localhost openstack_network_exporter[241732]: ERROR 10:03:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:03:06 localhost openstack_network_exporter[241732]: Nov 23 05:03:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e140 do_prune osdmap full prune enabled Nov 23 05:03:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e141 e141: 6 total, 6 up, 6 in Nov 23 05:03:07 localhost nova_compute[280939]: 2025-11-23 10:03:07.418 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:07 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e141: 6 total, 6 up, 6 in Nov 23 05:03:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 145 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 3.5 KiB/s wr, 101 op/s Nov 23 05:03:07 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:07.720 2 INFO neutron.agent.securitygroups_rpc [None req-63da6581-da02-4188-8e89-eab01a812a16 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['736c2f34-1be4-42fe-9283-c00aaa4f421b', 'a9134bcb-5194-43f8-ac1f-875c59af23f5']#033[00m Nov 23 05:03:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:08 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:08.110 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:08 localhost systemd[1]: Started /usr/bin/podman healthcheck 
run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:03:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:03:08 localhost podman[318635]: 2025-11-23 10:03:08.89454056 +0000 UTC m=+0.081128173 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Nov 23 05:03:08 localhost podman[318635]: 2025-11-23 10:03:08.910575905 +0000 UTC m=+0.097163528 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_id=edpm) Nov 23 05:03:08 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:03:08 localhost systemd[1]: tmp-crun.zfMi4w.mount: Deactivated successfully. Nov 23 05:03:09 localhost podman[318636]: 2025-11-23 10:03:09.003306264 +0000 UTC m=+0.186998757 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 05:03:09 localhost podman[318636]: 2025-11-23 10:03:09.036496087 +0000 UTC m=+0.220188600 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 05:03:09 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
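The openstack_network_exporter ERRORs above ("no control socket files found" for ovsdb-server and ovn-northd) mean the exporter found nothing to query under its /run/openvswitch and /run/ovn bind mounts. A rough Python equivalent of that check; the *.ctl glob patterns are an assumption about how the control sockets are named:

    import glob

    def find_control_sockets():
        patterns = ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl')
        sockets = [path for pattern in patterns for path in glob.glob(pattern)]
        if not sockets:
            print('no control socket files found')
        return sockets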
Nov 23 05:03:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 145 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 118 KiB/s rd, 5.9 KiB/s wr, 155 op/s Nov 23 05:03:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:09.745 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:03:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:09.745 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:03:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:09.745 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:03:10 localhost nova_compute[280939]: 2025-11-23 10:03:10.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:10 localhost nova_compute[280939]: 2025-11-23 10:03:10.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:10 localhost nova_compute[280939]: 2025-11-23 10:03:10.975 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:11 localhost nova_compute[280939]: 2025-11-23 10:03:11.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:11 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:11.243 2 INFO neutron.agent.securitygroups_rpc [None req-261eb8d2-9d50-4b0d-8acd-8f9cb046e671 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m Nov 23 05:03:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 145 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 95 KiB/s rd, 4.7 KiB/s wr, 125 op/s Nov 23 05:03:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e141 do_prune osdmap full prune enabled Nov 23 05:03:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e142 e142: 6 total, 6 up, 6 in Nov 23 05:03:11 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e142: 6 total, 6 up, 6 in Nov 23 05:03:12 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:12.057 262301 INFO neutron.agent.linux.ip_lib [None req-933a1eb1-f9fa-417d-aa57-979737e69f15 - - - - - -] Device tapbcfecdd0-d1 cannot be used as it has no MAC address#033[00m Nov 23 05:03:12 localhost 
nova_compute[280939]: 2025-11-23 10:03:12.126 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:12 localhost kernel: device tapbcfecdd0-d1 entered promiscuous mode Nov 23 05:03:12 localhost NetworkManager[5966]: [1763892192.1344] manager: (tapbcfecdd0-d1): new Generic device (/org/freedesktop/NetworkManager/Devices/43) Nov 23 05:03:12 localhost ovn_controller[153771]: 2025-11-23T10:03:12Z|00255|binding|INFO|Claiming lport bcfecdd0-d130-4bf3-b41f-d1e271870c7f for this chassis. Nov 23 05:03:12 localhost ovn_controller[153771]: 2025-11-23T10:03:12Z|00256|binding|INFO|bcfecdd0-d130-4bf3-b41f-d1e271870c7f: Claiming unknown Nov 23 05:03:12 localhost nova_compute[280939]: 2025-11-23 10:03:12.136 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:12 localhost systemd-udevd[318687]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:03:12 localhost nova_compute[280939]: 2025-11-23 10:03:12.144 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:12 localhost nova_compute[280939]: 2025-11-23 10:03:12.145 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 23 05:03:12 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:12.153 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-3791c415-ea14-4f6d-8194-9906e7c3c59b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3791c415-ea14-4f6d-8194-9906e7c3c59b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2de80a98-28a3-4797-b9f7-d34b6bb2bf95, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bcfecdd0-d130-4bf3-b41f-d1e271870c7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:03:12 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:12.154 159415 INFO neutron.agent.ovn.metadata.agent [-] Port bcfecdd0-d130-4bf3-b41f-d1e271870c7f in datapath 3791c415-ea14-4f6d-8194-9906e7c3c59b bound to our chassis#033[00m Nov 23 05:03:12 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:12.156 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3791c415-ea14-4f6d-8194-9906e7c3c59b or it 
has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:03:12 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:12.157 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[32ff840c-d7f2-41e5-85c0-c765d48cd06b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:03:12 localhost journal[229336]: ethtool ioctl error on tapbcfecdd0-d1: No such device Nov 23 05:03:12 localhost journal[229336]: ethtool ioctl error on tapbcfecdd0-d1: No such device Nov 23 05:03:12 localhost nova_compute[280939]: 2025-11-23 10:03:12.177 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 23 05:03:12 localhost ovn_controller[153771]: 2025-11-23T10:03:12Z|00257|binding|INFO|Setting lport bcfecdd0-d130-4bf3-b41f-d1e271870c7f ovn-installed in OVS Nov 23 05:03:12 localhost ovn_controller[153771]: 2025-11-23T10:03:12Z|00258|binding|INFO|Setting lport bcfecdd0-d130-4bf3-b41f-d1e271870c7f up in Southbound Nov 23 05:03:12 localhost journal[229336]: ethtool ioctl error on tapbcfecdd0-d1: No such device Nov 23 05:03:12 localhost nova_compute[280939]: 2025-11-23 10:03:12.178 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:12 localhost nova_compute[280939]: 2025-11-23 10:03:12.181 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:12 localhost journal[229336]: ethtool ioctl error on tapbcfecdd0-d1: No such device Nov 23 05:03:12 localhost journal[229336]: ethtool ioctl error on tapbcfecdd0-d1: No such device Nov 23 05:03:12 localhost journal[229336]: ethtool ioctl error on tapbcfecdd0-d1: No such device Nov 23 05:03:12 localhost journal[229336]: ethtool ioctl error on tapbcfecdd0-d1: No such device Nov 23 05:03:12 localhost journal[229336]: ethtool ioctl error on tapbcfecdd0-d1: No such device Nov 23 05:03:12 localhost nova_compute[280939]: 2025-11-23 10:03:12.216 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:12 localhost nova_compute[280939]: 2025-11-23 10:03:12.240 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:12 localhost nova_compute[280939]: 2025-11-23 10:03:12.420 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.580 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost 
ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:03:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:03:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e142 do_prune osdmap full prune enabled Nov 23 05:03:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e143 e143: 6 total, 6 up, 6 in Nov 23 05:03:12 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e143: 6 total, 6 up, 6 in Nov 23 05:03:13 localhost podman[318758]: Nov 23 05:03:13 localhost podman[318758]: 2025-11-23 10:03:13.046352808 +0000 UTC m=+0.098308472 container create e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 05:03:13 localhost systemd[1]: Started libpod-conmon-e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42.scope. Nov 23 05:03:13 localhost podman[318758]: 2025-11-23 10:03:13.002723683 +0000 UTC m=+0.054679397 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:03:13 localhost systemd[1]: tmp-crun.mecA9V.mount: Deactivated successfully. Nov 23 05:03:13 localhost systemd[1]: Started libcrun container. Nov 23 05:03:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4b2b0f5f69ebb9e82bd42bf53c04c34231af68f1c4b61ff64c3b88fc7c2bc1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:03:13 localhost podman[318758]: 2025-11-23 10:03:13.144216855 +0000 UTC m=+0.196172519 container init e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 05:03:13 localhost podman[318758]: 2025-11-23 10:03:13.15540698 +0000 UTC m=+0.207362644 container start e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118) Nov 23 05:03:13 localhost dnsmasq[318777]: started, version 2.85 cachesize 150 Nov 23 05:03:13 localhost dnsmasq[318777]: DNS service limited to local subnets Nov 23 05:03:13 localhost dnsmasq[318777]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:03:13 localhost dnsmasq[318777]: warning: no upstream servers configured Nov 23 05:03:13 localhost dnsmasq-dhcp[318777]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:03:13 localhost nova_compute[280939]: 2025-11-23 10:03:13.165 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:13 localhost nova_compute[280939]: 2025-11-23 10:03:13.165 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:03:13 localhost dnsmasq[318777]: read /var/lib/neutron/dhcp/3791c415-ea14-4f6d-8194-9906e7c3c59b/addn_hosts - 0 addresses Nov 23 05:03:13 localhost dnsmasq-dhcp[318777]: read /var/lib/neutron/dhcp/3791c415-ea14-4f6d-8194-9906e7c3c59b/host Nov 23 05:03:13 localhost dnsmasq-dhcp[318777]: read /var/lib/neutron/dhcp/3791c415-ea14-4f6d-8194-9906e7c3c59b/opts Nov 23 05:03:13 localhost nova_compute[280939]: 2025-11-23 10:03:13.166 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:03:13 localhost nova_compute[280939]: 2025-11-23 10:03:13.181 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:03:13 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:13.406 262301 INFO neutron.agent.dhcp.agent [None req-44599ec4-53d7-4e1e-9a5e-9caeec9d7f86 - - - - - -] DHCP configuration for ports {'5e6a7360-a279-4af1-ba70-a75b28e78072'} is completed#033[00m Nov 23 05:03:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 145 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 2.8 KiB/s wr, 66 op/s Nov 23 05:03:13 localhost dnsmasq[318777]: exiting on receipt of SIGTERM Nov 23 05:03:13 localhost podman[318796]: 2025-11-23 10:03:13.608307075 +0000 UTC m=+0.061855828 container kill e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 05:03:13 localhost systemd[1]: libpod-e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42.scope: Deactivated successfully. 
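
The ceilometer_agent_compute DEBUG entries at 10:03:12 above all belong to one polling cycle: poll_and_notify skips every disk.device.*, network.* and cpu pollster because this compute node has no instances to meter yet, so the messages are expected noise rather than a fault. A minimal sketch for quantifying that noise from a plain-text journal export; the journal.txt path and the message format are assumptions taken from the lines above:

import re
from collections import Counter

# Matches the DEBUG message format seen above, e.g.
# "Skip pollster disk.device.iops, no resources found this cycle".
SKIP_RE = re.compile(r"Skip pollster (?P<name>\S+), no resources found this cycle")

def count_skipped_pollsters(journal_path="journal.txt"):
    """Count 'Skip pollster ...' DEBUG lines per pollster name."""
    counts = Counter()
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = SKIP_RE.search(line)
            if m:
                counts[m.group("name")] += 1
    return counts

if __name__ == "__main__":
    for name, n in count_skipped_pollsters().most_common():
        print(f"{n:6d}  {name}")

Once instances are actually running on the node, persistent skips for the libvirt-backed meters would be worth investigating; here they are benign.
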
Nov 23 05:03:13 localhost podman[318810]: 2025-11-23 10:03:13.678956044 +0000 UTC m=+0.055295697 container died e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 05:03:13 localhost podman[318810]: 2025-11-23 10:03:13.709475055 +0000 UTC m=+0.085814678 container cleanup e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:03:13 localhost systemd[1]: libpod-conmon-e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42.scope: Deactivated successfully. Nov 23 05:03:13 localhost podman[318811]: 2025-11-23 10:03:13.765203673 +0000 UTC m=+0.135309813 container remove e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:03:14 localhost systemd[1]: var-lib-containers-storage-overlay-fb4b2b0f5f69ebb9e82bd42bf53c04c34231af68f1c4b61ff64c3b88fc7c2bc1-merged.mount: Deactivated successfully. Nov 23 05:03:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e351a4fce8a0cd0fb59f635940fda65625a108da937203f6b69de078e108fa42-userdata-shm.mount: Deactivated successfully. 
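
The podman entries between 10:03:13.04 and 10:03:13.76 above trace one complete lifecycle of the neutron-dnsmasq-qdhcp-3791c415-... container: image pull, create, init, start, dnsmasq re-reading addn_hosts/host/opts, then SIGTERM, died, cleanup and remove, followed by the overlay and shm mounts being deactivated. The DHCP agent is simply replacing the dnsmasq instance with a reconfigured one; its successor started at 10:03:15 adds the 10.100.0.0 DHCP range. A rough sketch, under the same journal-export assumption as above, for pairing create/remove events to see how long such short-lived containers exist:

import re
from datetime import datetime, timedelta

# Event format assumed from the podman lines above, e.g.
# "2025-11-23 10:03:13.046352808 +0000 UTC m=+0.098308472 container create <hex id> (...)".
EVENT_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.(?P<frac>\d+) \+0000 UTC m=\+\S+ "
    r"container (?P<event>create|remove) (?P<cid>[0-9a-f]{12,})"
)

def container_lifetimes(journal_path="journal.txt"):  # assumed export path
    """Pair podman 'container create'/'container remove' events by container ID."""
    created, lifetimes = {}, {}
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S")
            ts += timedelta(seconds=float("0." + m.group("frac")))
            if m.group("event") == "create":
                created[m.group("cid")] = ts
            elif m.group("cid") in created:
                start = created.pop(m.group("cid"))
                lifetimes[m.group("cid")] = (start, ts, (ts - start).total_seconds())
    return lifetimes

For the container above this yields a lifetime of roughly 0.7 seconds, consistent with an immediate reconfiguration restart rather than a crash loop.
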
Nov 23 05:03:15 localhost nova_compute[280939]: 2025-11-23 10:03:15.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:15 localhost nova_compute[280939]: 2025-11-23 10:03:15.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:15 localhost podman[318887]: Nov 23 05:03:15 localhost podman[318887]: 2025-11-23 10:03:15.287567633 +0000 UTC m=+0.085473106 container create 9283b210ce4f818b1807fe592a88973442fe5d620377c42e26781e5bdbe37a18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 05:03:15 localhost systemd[1]: Started libpod-conmon-9283b210ce4f818b1807fe592a88973442fe5d620377c42e26781e5bdbe37a18.scope. Nov 23 05:03:15 localhost systemd[1]: Started libcrun container. Nov 23 05:03:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f22bed24036c7e514196d80dfa96fc5807bffa371b61c03b28691c13d521fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:03:15 localhost podman[318887]: 2025-11-23 10:03:15.245739104 +0000 UTC m=+0.043644587 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:03:15 localhost podman[318887]: 2025-11-23 10:03:15.351190876 +0000 UTC m=+0.149096349 container init 9283b210ce4f818b1807fe592a88973442fe5d620377c42e26781e5bdbe37a18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:03:15 localhost podman[318887]: 2025-11-23 10:03:15.360191653 +0000 UTC m=+0.158097136 container start 9283b210ce4f818b1807fe592a88973442fe5d620377c42e26781e5bdbe37a18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:03:15 localhost dnsmasq[318905]: started, version 2.85 cachesize 150 Nov 23 05:03:15 localhost dnsmasq[318905]: DNS service limited to local subnets Nov 23 05:03:15 localhost dnsmasq[318905]: compile time options: IPv6 GNU-getopt DBus 
no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:03:15 localhost dnsmasq[318905]: warning: no upstream servers configured Nov 23 05:03:15 localhost dnsmasq-dhcp[318905]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:03:15 localhost dnsmasq-dhcp[318905]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:03:15 localhost dnsmasq[318905]: read /var/lib/neutron/dhcp/3791c415-ea14-4f6d-8194-9906e7c3c59b/addn_hosts - 0 addresses Nov 23 05:03:15 localhost dnsmasq-dhcp[318905]: read /var/lib/neutron/dhcp/3791c415-ea14-4f6d-8194-9906e7c3c59b/host Nov 23 05:03:15 localhost dnsmasq-dhcp[318905]: read /var/lib/neutron/dhcp/3791c415-ea14-4f6d-8194-9906e7c3c59b/opts Nov 23 05:03:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 145 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 5.4 KiB/s wr, 137 op/s Nov 23 05:03:15 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:15.788 262301 INFO neutron.agent.dhcp.agent [None req-f6c7dec8-b3d7-402c-a93e-85fc99fc0c6a - - - - - -] DHCP configuration for ports {'bcfecdd0-d130-4bf3-b41f-d1e271870c7f', '5e6a7360-a279-4af1-ba70-a75b28e78072'} is completed#033[00m Nov 23 05:03:16 localhost nova_compute[280939]: 2025-11-23 10:03:16.016 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:16 localhost nova_compute[280939]: 2025-11-23 10:03:16.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:16 localhost nova_compute[280939]: 2025-11-23 10:03:16.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:03:17 localhost podman[239764]: time="2025-11-23T10:03:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:03:17 localhost podman[239764]: @ - - [23/Nov/2025:10:03:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158237 "" "Go-http-client/1.1" Nov 23 05:03:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e143 do_prune osdmap full prune enabled Nov 23 05:03:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e144 e144: 6 total, 6 up, 6 in Nov 23 05:03:17 localhost podman[239764]: @ - - [23/Nov/2025:10:03:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19676 "" "Go-http-client/1.1" Nov 23 05:03:17 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e144: 6 total, 6 up, 6 in Nov 23 05:03:17 localhost nova_compute[280939]: 2025-11-23 10:03:17.422 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 145 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 4.3 KiB/s wr, 116 op/s Nov 23 05:03:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e144 do_prune osdmap full prune enabled Nov 23 05:03:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e145 e145: 6 total, 6 up, 6 in Nov 23 05:03:17 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e145: 6 total, 6 up, 6 in Nov 23 05:03:18 localhost nova_compute[280939]: 2025-11-23 10:03:18.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:18 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:18.901 262301 INFO neutron.agent.linux.ip_lib [None req-c043ca93-2726-42d1-a838-a6512bbd5abe - - - - - -] Device tap4051153e-87 cannot be used as it has no MAC address#033[00m Nov 23 05:03:18 localhost nova_compute[280939]: 2025-11-23 10:03:18.922 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:18 localhost kernel: device tap4051153e-87 entered promiscuous mode Nov 23 05:03:18 localhost NetworkManager[5966]: [1763892198.9316] manager: (tap4051153e-87): new Generic device (/org/freedesktop/NetworkManager/Devices/44) Nov 23 05:03:18 localhost ovn_controller[153771]: 2025-11-23T10:03:18Z|00259|binding|INFO|Claiming lport 4051153e-8797-456c-8536-d6505ef4a8da for this chassis. Nov 23 05:03:18 localhost ovn_controller[153771]: 2025-11-23T10:03:18Z|00260|binding|INFO|4051153e-8797-456c-8536-d6505ef4a8da: Claiming unknown Nov 23 05:03:18 localhost systemd-udevd[318916]: Network interface NamePolicy= disabled on kernel command line. 
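
The ovn_controller entries just above and immediately below show the normal binding handshake for lport 4051153e-8797-456c-8536-d6505ef4a8da: Claiming lport, Setting ovn-installed in OVS, then Setting the port up in Southbound, with ovn_metadata_agent matching the Port_Binding update on its side. The burst of "ethtool ioctl error on tap4051153e-87: No such device" messages interleaved with this is logged while the tap device is still being plugged and is transient here, since the port is reported up moments later. An illustrative parser for pulling those binding transitions per logical port out of the same journal export (message format assumed from the lines shown):

import re
from collections import defaultdict

# Format assumed from the ovn_controller lines above, e.g.
# "2025-11-23T10:03:18Z|00259|binding|INFO|Claiming lport <uuid> for this chassis."
BINDING_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)\|\d+\|binding\|INFO\|"
    r"(?P<msg>(?:Claiming|Setting) lport (?P<port>\S+) .*)"
)

def binding_transitions(journal_path="journal.txt"):  # assumed export path
    """Group ovn-controller binding INFO messages by logical port name."""
    transitions = defaultdict(list)
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = BINDING_RE.search(line)
            if m:
                transitions[m.group("port")].append((m.group("ts"), m.group("msg")))
    return transitions
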
Nov 23 05:03:18 localhost nova_compute[280939]: 2025-11-23 10:03:18.935 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:18 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:18.942 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-c74d0c76-0cc1-4147-a4c3-201b1d65e72d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c74d0c76-0cc1-4147-a4c3-201b1d65e72d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27ceae55c44d478998092e7554fd8a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4499f2df-9df0-43ca-bfd2-2ca358e901fa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4051153e-8797-456c-8536-d6505ef4a8da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:03:18 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:18.944 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 4051153e-8797-456c-8536-d6505ef4a8da in datapath c74d0c76-0cc1-4147-a4c3-201b1d65e72d bound to our chassis#033[00m Nov 23 05:03:18 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:18.946 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c74d0c76-0cc1-4147-a4c3-201b1d65e72d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:03:18 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:18.947 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[88fb3449-7ca2-4f86-95a3-bb98e042ae5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:03:18 localhost journal[229336]: ethtool ioctl error on tap4051153e-87: No such device Nov 23 05:03:18 localhost ovn_controller[153771]: 2025-11-23T10:03:18Z|00261|binding|INFO|Setting lport 4051153e-8797-456c-8536-d6505ef4a8da ovn-installed in OVS Nov 23 05:03:18 localhost ovn_controller[153771]: 2025-11-23T10:03:18Z|00262|binding|INFO|Setting lport 4051153e-8797-456c-8536-d6505ef4a8da up in Southbound Nov 23 05:03:18 localhost nova_compute[280939]: 2025-11-23 10:03:18.969 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:18 localhost journal[229336]: ethtool ioctl error on tap4051153e-87: No such device Nov 23 05:03:18 localhost journal[229336]: ethtool ioctl error on tap4051153e-87: No such device Nov 23 05:03:18 localhost journal[229336]: ethtool ioctl error on tap4051153e-87: No such device Nov 23 05:03:18 localhost journal[229336]: ethtool ioctl error on tap4051153e-87: No such device 
Nov 23 05:03:18 localhost journal[229336]: ethtool ioctl error on tap4051153e-87: No such device Nov 23 05:03:18 localhost journal[229336]: ethtool ioctl error on tap4051153e-87: No such device Nov 23 05:03:19 localhost journal[229336]: ethtool ioctl error on tap4051153e-87: No such device Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.007 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.034 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.156 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.157 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.157 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.157 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.158 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:03:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 145 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 150 KiB/s rd, 6.0 KiB/s wr, 197 op/s Nov 23 05:03:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", 
"format": "json"} v 0) Nov 23 05:03:19 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/573425647' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.628 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:03:19 localhost podman[319009]: Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.863 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.865 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11558MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.866 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:03:19 localhost nova_compute[280939]: 2025-11-23 10:03:19.866 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:03:19 localhost podman[319009]: 2025-11-23 10:03:19.881211345 +0000 UTC m=+0.105915807 container create 5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c74d0c76-0cc1-4147-a4c3-201b1d65e72d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 05:03:19 localhost podman[319009]: 2025-11-23 10:03:19.830078728 +0000 UTC m=+0.054783220 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:03:19 localhost systemd[1]: Started libpod-conmon-5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f.scope. Nov 23 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:03:19 localhost systemd[1]: tmp-crun.oYmuTs.mount: Deactivated successfully. Nov 23 05:03:19 localhost systemd[1]: Started libcrun container. Nov 23 05:03:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903eeac753a067b212dd32b902599e7be1d645a4ff3d4b3eb59b3f8d38a2cf11/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:03:19 localhost podman[319009]: 2025-11-23 10:03:19.975779041 +0000 UTC m=+0.200483493 container init 5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c74d0c76-0cc1-4147-a4c3-201b1d65e72d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:03:19 localhost podman[319009]: 2025-11-23 10:03:19.983073656 +0000 UTC m=+0.207778118 container start 5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c74d0c76-0cc1-4147-a4c3-201b1d65e72d, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:03:19 localhost dnsmasq[319036]: started, version 2.85 cachesize 150 Nov 23 05:03:19 localhost dnsmasq[319036]: DNS service limited to local subnets Nov 23 05:03:19 localhost dnsmasq[319036]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:03:19 localhost dnsmasq[319036]: warning: no upstream servers configured Nov 23 05:03:19 localhost dnsmasq-dhcp[319036]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:03:19 localhost dnsmasq[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/addn_hosts - 0 addresses Nov 23 05:03:19 localhost dnsmasq-dhcp[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/host Nov 23 05:03:19 localhost dnsmasq-dhcp[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/opts Nov 23 05:03:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:03:20 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1101695890' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:03:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:03:20 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1101695890' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:03:20 localhost podman[319025]: 2025-11-23 10:03:20.050018499 +0000 UTC m=+0.099590121 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Nov 23 05:03:20 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:20.045 262301 INFO neutron.agent.dhcp.agent [None req-c043ca93-2726-42d1-a838-a6512bbd5abe - - - - - -] Trigger reload_allocations for port admin_state_up=True, 
allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:19Z, description=, device_id=546a2558-70b9-4cc7-8d44-6c3be7bdf264, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d95419e4-8ba2-4edd-9162-3b7ce501dccf, ip_allocation=immediate, mac_address=fa:16:3e:c0:c7:74, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:15Z, description=, dns_domain=, id=c74d0c76-0cc1-4147-a4c3-201b1d65e72d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1621940815, port_security_enabled=True, project_id=cd27ceae55c44d478998092e7554fd8a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9531, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2204, status=ACTIVE, subnets=['bd7c0aac-2229-4c5f-884c-50f22e984bf4'], tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:03:17Z, vlan_transparent=None, network_id=c74d0c76-0cc1-4147-a4c3-201b1d65e72d, port_security_enabled=False, project_id=cd27ceae55c44d478998092e7554fd8a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2232, status=DOWN, tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:03:19Z on network c74d0c76-0cc1-4147-a4c3-201b1d65e72d#033[00m Nov 23 05:03:20 localhost podman[319025]: 2025-11-23 10:03:20.086626779 +0000 UTC m=+0.136198351 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Nov 23 05:03:20 localhost systemd[1]: 
219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:03:20 localhost nova_compute[280939]: 2025-11-23 10:03:20.118 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:03:20 localhost nova_compute[280939]: 2025-11-23 10:03:20.118 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:03:20 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:20.179 262301 INFO neutron.agent.dhcp.agent [None req-0c214933-f1af-406b-ba69-70188b400e5e - - - - - -] DHCP configuration for ports {'4034e5f8-f7eb-4211-a5f6-1e48d08d1239'} is completed#033[00m Nov 23 05:03:20 localhost nova_compute[280939]: 2025-11-23 10:03:20.236 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:03:20 localhost dnsmasq[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/addn_hosts - 1 addresses Nov 23 05:03:20 localhost dnsmasq-dhcp[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/host Nov 23 05:03:20 localhost dnsmasq-dhcp[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/opts Nov 23 05:03:20 localhost podman[319066]: 2025-11-23 10:03:20.244084974 +0000 UTC m=+0.062301853 container kill 5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c74d0c76-0cc1-4147-a4c3-201b1d65e72d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:03:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:03:20 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/534642139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:03:20 localhost nova_compute[280939]: 2025-11-23 10:03:20.688 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:03:20 localhost nova_compute[280939]: 2025-11-23 10:03:20.696 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:03:20 localhost nova_compute[280939]: 2025-11-23 10:03:20.708 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:03:20 localhost nova_compute[280939]: 2025-11-23 10:03:20.711 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:03:20 localhost nova_compute[280939]: 2025-11-23 10:03:20.711 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:03:20 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:20.720 262301 INFO neutron.agent.dhcp.agent [None req-4d8e04ca-7401-44fa-918d-fdcca70e1103 - - - - - -] DHCP configuration for ports {'d95419e4-8ba2-4edd-9162-3b7ce501dccf'} is completed#033[00m Nov 23 05:03:21 localhost nova_compute[280939]: 2025-11-23 10:03:21.054 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:21 localhost nova_compute[280939]: 2025-11-23 10:03:21.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:21 localhost nova_compute[280939]: 2025-11-23 10:03:21.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 23 05:03:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 145 MiB data, 818 
MiB used, 41 GiB / 42 GiB avail; 121 KiB/s rd, 4.9 KiB/s wr, 159 op/s Nov 23 05:03:22 localhost nova_compute[280939]: 2025-11-23 10:03:22.157 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:22 localhost nova_compute[280939]: 2025-11-23 10:03:22.467 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:22 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:22.549 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:19Z, description=, device_id=546a2558-70b9-4cc7-8d44-6c3be7bdf264, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d95419e4-8ba2-4edd-9162-3b7ce501dccf, ip_allocation=immediate, mac_address=fa:16:3e:c0:c7:74, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:15Z, description=, dns_domain=, id=c74d0c76-0cc1-4147-a4c3-201b1d65e72d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1621940815, port_security_enabled=True, project_id=cd27ceae55c44d478998092e7554fd8a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9531, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2204, status=ACTIVE, subnets=['bd7c0aac-2229-4c5f-884c-50f22e984bf4'], tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:03:17Z, vlan_transparent=None, network_id=c74d0c76-0cc1-4147-a4c3-201b1d65e72d, port_security_enabled=False, project_id=cd27ceae55c44d478998092e7554fd8a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2232, status=DOWN, tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:03:19Z on network c74d0c76-0cc1-4147-a4c3-201b1d65e72d#033[00m Nov 23 05:03:22 localhost dnsmasq[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/addn_hosts - 1 addresses Nov 23 05:03:22 localhost dnsmasq-dhcp[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/host Nov 23 05:03:22 localhost podman[319125]: 2025-11-23 10:03:22.740583171 +0000 UTC m=+0.066609555 container kill 5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c74d0c76-0cc1-4147-a4c3-201b1d65e72d, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 05:03:22 localhost dnsmasq-dhcp[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/opts Nov 23 05:03:22 localhost nova_compute[280939]: 2025-11-23 10:03:22.818 280943 DEBUG oslo_service.periodic_task [None 
req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:03:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e145 do_prune osdmap full prune enabled Nov 23 05:03:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e146 e146: 6 total, 6 up, 6 in Nov 23 05:03:22 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:22.971 262301 INFO neutron.agent.dhcp.agent [None req-c5a527b6-e1b9-4f4e-b138-cf0c3ba0324f - - - - - -] DHCP configuration for ports {'d95419e4-8ba2-4edd-9162-3b7ce501dccf'} is completed#033[00m Nov 23 05:03:22 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e146: 6 total, 6 up, 6 in Nov 23 05:03:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:03:23 Nov 23 05:03:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:03:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:03:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['manila_metadata', 'images', 'volumes', 'backups', '.mgr', 'vms', 'manila_data'] Nov 23 05:03:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:03:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:03:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:03:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:03:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:03:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
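
In the update_available_resource audit at 10:03:19-10:03:20 above, nova_compute shells out twice to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" (each call surfaces on the monitor as a mon_command df audit entry), then reports an unchanged placement inventory of MEMORY_MB 15738, VCPU 8 and DISK_GB 41 for provider c90c5769-42ab-40e9-92fc-3d82b4e96052. The same capacity check can be reproduced by hand; the sketch below runs the exact command from the log, while the stats.total_bytes / stats.total_avail_bytes field names follow the usual ceph df JSON layout and may vary slightly between Ceph releases:

import json
import subprocess

# Exact command string taken from the nova_compute DEBUG line above.
CMD = [
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
]

def cluster_capacity_gib():
    """Return (total_gib, avail_gib) from the same 'ceph df' call nova-compute runs."""
    out = subprocess.run(CMD, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]  # field names: usual 'ceph df' JSON layout
    gib = 1024 ** 3
    return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

if __name__ == "__main__":
    total, avail = cluster_capacity_gib()
    print(f"total={total:.1f} GiB  avail={avail:.1f} GiB")

The result should roughly line up with the free_disk value the resource tracker logs above and with the "41 GiB / 42 GiB avail" figure in the pgmap entries.
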
Nov 23 05:03:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:03:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 145 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 2.1 KiB/s wr, 91 op/s Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010887045179601712 quantized to 32 (current 32) Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:03:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Nov 23 05:03:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:03:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:03:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:03:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:03:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:03:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:03:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:03:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:03:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:03:23 localhost 
ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:03:25 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:25.017 2 INFO neutron.agent.securitygroups_rpc [None req-82f52ebc-5307-4b75-9da0-dca3e27d739d da47bb8e9ce044b7a6c60aeaa303445e 1ed74022d4944d5c8276b163cae1a73a - - default default] Security group member updated ['0c7393ad-63e1-4b57-bb16-ccf2466506e9']#033[00m Nov 23 05:03:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 163 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 403 KiB/s wr, 133 op/s Nov 23 05:03:26 localhost nova_compute[280939]: 2025-11-23 10:03:26.104 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 163 MiB data, 818 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 340 KiB/s wr, 112 op/s Nov 23 05:03:27 localhost nova_compute[280939]: 2025-11-23 10:03:27.518 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e146 do_prune osdmap full prune enabled Nov 23 05:03:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e147 e147: 6 total, 6 up, 6 in Nov 23 05:03:27 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e147: 6 total, 6 up, 6 in Nov 23 05:03:27 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:27.690 2 INFO neutron.agent.securitygroups_rpc [None req-746b6928-7309-4035-bfd7-9d9f95de5728 da47bb8e9ce044b7a6c60aeaa303445e 1ed74022d4944d5c8276b163cae1a73a - - default default] Security group member updated ['0c7393ad-63e1-4b57-bb16-ccf2466506e9']#033[00m Nov 23 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
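
The pg_autoscaler output at 10:03:23 above can be partially sanity-checked from its own numbers: for '.mgr' and 'vms' the printed pg target is exactly the printed usage ratio times 200, which would fit an effective budget of mon_target_pg_per_osd (default 100) x 6 OSDs / 3 replicas. The other pools deviate slightly from that factor, so treat the 200 as an inference from these two lines rather than the autoscaler's exact internal formula. A worked check:

# (usage_ratio, bias, pg_target) copied verbatim from the pg_autoscaler lines
# above; ASSUMED_PG_BUDGET is an inference (100 PGs/OSD * 6 OSDs / 3 replicas),
# not a value the log states.
ASSUMED_PG_BUDGET = 200.0

LOGGED = {
    ".mgr": (3.080724804578448e-05, 1.0, 0.006161449609156895),
    "vms": (0.0033244564838079286, 1.0, 0.6648912967615858),
}

for pool, (ratio, bias, target) in LOGGED.items():
    predicted = ratio * bias * ASSUMED_PG_BUDGET
    print(f"{pool:5s} predicted={predicted:.16g} logged={target:.16g} "
          f"match={abs(predicted - target) < 1e-9}")
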
Nov 23 05:03:27 localhost podman[319146]: 2025-11-23 10:03:27.914166965 +0000 UTC m=+0.097816547 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:27 localhost podman[319146]: 2025-11-23 10:03:27.95323092 +0000 UTC m=+0.136880572 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3) Nov 23 05:03:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:27 localhost podman[319147]: 2025-11-23 10:03:27.968718407 +0000 UTC m=+0.147235780 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:03:27 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 05:03:28 localhost podman[319153]: 2025-11-23 10:03:28.030131221 +0000 UTC m=+0.201587567 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 05:03:28 localhost podman[319147]: 2025-11-23 10:03:28.061053204 +0000 UTC m=+0.239570587 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 05:03:28 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
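The three transient units above each run "/usr/bin/podman healthcheck run <container-id>", and the subsequent podman events report health_status=healthy before the unit deactivates. A minimal sketch of the same check driven by hand, assuming the container names from the labels (multipathd, node_exporter, ovn_controller; podman accepts names or IDs) and podman's documented exit-code convention (0 = healthy, non-zero = unhealthy or error).

import subprocess

CONTAINERS = ["multipathd", "node_exporter", "ovn_controller"]

def healthy(name: str) -> bool:
    # Same command the transient systemd units invoke in the entries above.
    proc = subprocess.run(["podman", "healthcheck", "run", name],
                          capture_output=True, text=True)
    return proc.returncode == 0   # 0 = healthy; non-zero = unhealthy or error

for name in CONTAINERS:
    print(f"{name}: {'healthy' if healthy(name) else 'unhealthy'}")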
Nov 23 05:03:28 localhost podman[319153]: 2025-11-23 10:03:28.102493292 +0000 UTC m=+0.273949608 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 05:03:28 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:03:28 localhost sshd[319212]: main: sshd: ssh-rsa algorithm is disabled Nov 23 05:03:28 localhost dnsmasq[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/addn_hosts - 0 addresses Nov 23 05:03:28 localhost dnsmasq-dhcp[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/host Nov 23 05:03:28 localhost podman[319231]: 2025-11-23 10:03:28.696427356 +0000 UTC m=+0.062938553 container kill 5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c74d0c76-0cc1-4147-a4c3-201b1d65e72d, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:28 localhost dnsmasq-dhcp[319036]: read /var/lib/neutron/dhcp/c74d0c76-0cc1-4147-a4c3-201b1d65e72d/opts Nov 23 05:03:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e147 do_prune osdmap full prune enabled Nov 23 05:03:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e148 e148: 6 total, 6 up, 6 in Nov 23 05:03:28 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e148: 6 total, 6 up, 6 in Nov 23 05:03:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 238 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 6.5 MiB/s rd, 6.6 MiB/s wr, 142 op/s Nov 23 05:03:29 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:29.723 262301 INFO neutron.agent.linux.ip_lib [None req-3b598cfe-f7c8-4955-a33a-d436dd29b926 - - - - - -] Device tapb54f04b5-c0 cannot be used as it has no MAC address#033[00m Nov 23 05:03:29 localhost nova_compute[280939]: 2025-11-23 10:03:29.746 
280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:29 localhost kernel: device tapb54f04b5-c0 entered promiscuous mode Nov 23 05:03:29 localhost NetworkManager[5966]: [1763892209.7535] manager: (tapb54f04b5-c0): new Generic device (/org/freedesktop/NetworkManager/Devices/45) Nov 23 05:03:29 localhost systemd-udevd[319262]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:03:29 localhost nova_compute[280939]: 2025-11-23 10:03:29.756 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:29 localhost ovn_controller[153771]: 2025-11-23T10:03:29Z|00263|binding|INFO|Claiming lport b54f04b5-c09a-4bc6-b5cd-b239662f2c79 for this chassis. Nov 23 05:03:29 localhost ovn_controller[153771]: 2025-11-23T10:03:29Z|00264|binding|INFO|b54f04b5-c09a-4bc6-b5cd-b239662f2c79: Claiming unknown Nov 23 05:03:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:29.769 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-53373f39-43fa-4ec3-9d41-940ffd04ef81', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53373f39-43fa-4ec3-9d41-940ffd04ef81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fc3fa728d6f4403acd9944d81eaeb18', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee133ce0-4c65-44e2-b472-fe125a148b66, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b54f04b5-c09a-4bc6-b5cd-b239662f2c79) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:03:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:29.771 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b54f04b5-c09a-4bc6-b5cd-b239662f2c79 in datapath 53373f39-43fa-4ec3-9d41-940ffd04ef81 bound to our chassis#033[00m Nov 23 05:03:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:29.774 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port 153a17ef-9d26-4fd5-9dfb-7bc1de769f64 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:03:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:29.774 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 53373f39-43fa-4ec3-9d41-940ffd04ef81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:03:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:29.775 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[3b10dd5d-5937-4240-804b-4d4c85b4ccf5]: (4, False) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:03:29 localhost ovn_controller[153771]: 2025-11-23T10:03:29Z|00265|binding|INFO|Setting lport b54f04b5-c09a-4bc6-b5cd-b239662f2c79 ovn-installed in OVS Nov 23 05:03:29 localhost ovn_controller[153771]: 2025-11-23T10:03:29Z|00266|binding|INFO|Setting lport b54f04b5-c09a-4bc6-b5cd-b239662f2c79 up in Southbound Nov 23 05:03:29 localhost nova_compute[280939]: 2025-11-23 10:03:29.796 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:29 localhost nova_compute[280939]: 2025-11-23 10:03:29.830 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:29 localhost nova_compute[280939]: 2025-11-23 10:03:29.860 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e148 do_prune osdmap full prune enabled Nov 23 05:03:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e149 e149: 6 total, 6 up, 6 in Nov 23 05:03:30 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e149: 6 total, 6 up, 6 in Nov 23 05:03:30 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:30.180 2 INFO neutron.agent.securitygroups_rpc [None req-7bdd1f21-8588-4120-93d3-3c530b607701 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:30 localhost podman[319317]: Nov 23 05:03:30 localhost podman[319317]: 2025-11-23 10:03:30.699624972 +0000 UTC m=+0.088791559 container create afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:03:30 localhost systemd[1]: Started libpod-conmon-afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f.scope. Nov 23 05:03:30 localhost podman[319317]: 2025-11-23 10:03:30.655669877 +0000 UTC m=+0.044836484 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:03:30 localhost systemd[1]: Started libcrun container. 
Nov 23 05:03:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ff772cbccff60aea1f356dd884120f556223dbabdfe4f03b6976fd2d85ad64/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:03:30 localhost podman[319317]: 2025-11-23 10:03:30.772450318 +0000 UTC m=+0.161616885 container init afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:30 localhost podman[319317]: 2025-11-23 10:03:30.781657991 +0000 UTC m=+0.170824578 container start afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:03:30 localhost dnsmasq[319336]: started, version 2.85 cachesize 150 Nov 23 05:03:30 localhost dnsmasq[319336]: DNS service limited to local subnets Nov 23 05:03:30 localhost dnsmasq[319336]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:03:30 localhost dnsmasq[319336]: warning: no upstream servers configured Nov 23 05:03:30 localhost dnsmasq-dhcp[319336]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:03:30 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 0 addresses Nov 23 05:03:30 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:30 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:30 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:30.847 262301 INFO neutron.agent.dhcp.agent [None req-3f0c2123-1237-4583-86c1-f160840f36b3 - - - - - -] Synchronizing state#033[00m Nov 23 05:03:30 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:30.993 262301 INFO neutron.agent.dhcp.agent [None req-cd9fb465-81bd-4d07-84c9-9aeb54d26c97 - - - - - -] DHCP configuration for ports {'88e6bd17-4ceb-4058-a955-cf55df05fe9c'} is completed#033[00m Nov 23 05:03:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e149 do_prune osdmap full prune enabled Nov 23 05:03:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e150 e150: 6 total, 6 up, 6 in Nov 23 05:03:31 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e150: 6 total, 6 up, 6 in Nov 23 05:03:31 localhost nova_compute[280939]: 2025-11-23 10:03:31.107 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:31 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] 
: from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cafe6b35-cbcb-405c-8bdc-6e0c35312d23", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:03:31 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:31 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:31.115+0000 7f9cc7b08640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:31 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:31 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:31.115+0000 7f9cc7b08640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:31 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:31 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:31.115+0000 7f9cc7b08640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:31 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:31 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:31.115+0000 7f9cc7b08640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:31 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:31 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:31.115+0000 7f9cc7b08640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:31 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.129 262301 INFO neutron.agent.dhcp.agent [None req-b3735b65-9a25-4138-b9f7-4fb19bc1cdf4 - - - - - -] All active networks have been fetched through RPC.#033[00m Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.130 262301 INFO neutron.agent.dhcp.agent [-] Starting network 881c97fa-7871-450c-b8fd-eb49d33b3df5 dhcp configuration#033[00m Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.131 262301 INFO neutron.agent.dhcp.agent [-] Finished network 881c97fa-7871-450c-b8fd-eb49d33b3df5 dhcp configuration#033[00m Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.131 262301 INFO neutron.agent.dhcp.agent [None req-b3735b65-9a25-4138-b9f7-4fb19bc1cdf4 - - - - - -] Synchronizing state complete#033[00m Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.132 262301 INFO neutron.agent.dhcp.agent [None req-4aa92bda-61c2-4f1d-b724-887eed3766bd - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:29Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e9d0b7a0-c7b9-4609-9690-8966f250c6b7, 
ip_allocation=immediate, mac_address=fa:16:3e:32:0a:62, name=tempest-AllowedAddressPairTestJSON-988022197, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:25Z, description=, dns_domain=, id=53373f39-43fa-4ec3-9d41-940ffd04ef81, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1660971929, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1995, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2266, status=ACTIVE, subnets=['ccd68745-58fd-47f5-b61d-a4973f54b25f'], tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, updated_at=2025-11-23T10:03:27Z, vlan_transparent=None, network_id=53373f39-43fa-4ec3-9d41-940ffd04ef81, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2'], standard_attr_id=2282, status=DOWN, tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, updated_at=2025-11-23T10:03:29Z on network 53373f39-43fa-4ec3-9d41-940ffd04ef81#033[00m Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.140 262301 INFO neutron.agent.dhcp.agent [None req-4a130846-cfe4-4f54-94e8-a26559c198c1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.141 262301 INFO neutron.agent.dhcp.agent [None req-4a130846-cfe4-4f54-94e8-a26559c198c1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:31 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:31.144 2 INFO neutron.agent.securitygroups_rpc [None req-04f8e66d-7c6f-40a2-b025-3f4a41cc4961 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:31 localhost nova_compute[280939]: 2025-11-23 10:03:31.168 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:31 localhost ovn_controller[153771]: 2025-11-23T10:03:31Z|00267|binding|INFO|Releasing lport 4051153e-8797-456c-8536-d6505ef4a8da from this chassis (sb_readonly=0) Nov 23 05:03:31 localhost kernel: device tap4051153e-87 left promiscuous mode Nov 23 05:03:31 localhost ovn_controller[153771]: 2025-11-23T10:03:31Z|00268|binding|INFO|Setting lport 4051153e-8797-456c-8536-d6505ef4a8da down in Southbound Nov 23 05:03:31 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:31.178 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-c74d0c76-0cc1-4147-a4c3-201b1d65e72d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c74d0c76-0cc1-4147-a4c3-201b1d65e72d', 
'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27ceae55c44d478998092e7554fd8a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4499f2df-9df0-43ca-bfd2-2ca358e901fa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4051153e-8797-456c-8536-d6505ef4a8da) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:03:31 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:31.180 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 4051153e-8797-456c-8536-d6505ef4a8da in datapath c74d0c76-0cc1-4147-a4c3-201b1d65e72d unbound from our chassis#033[00m Nov 23 05:03:31 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:31.182 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c74d0c76-0cc1-4147-a4c3-201b1d65e72d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:03:31 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:31.184 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[f1aa79c9-f1b2-4ad9-9367-1efbe5ea921a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:03:31 localhost nova_compute[280939]: 2025-11-23 10:03:31.200 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:31 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cafe6b35-cbcb-405c-8bdc-6e0c35312d23/.meta.tmp' Nov 23 05:03:31 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cafe6b35-cbcb-405c-8bdc-6e0c35312d23/.meta.tmp' to config b'/volumes/_nogroup/cafe6b35-cbcb-405c-8bdc-6e0c35312d23/.meta' Nov 23 05:03:31 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:31 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cafe6b35-cbcb-405c-8bdc-6e0c35312d23", "format": "json"}]: dispatch Nov 23 05:03:31 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:31 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:03:31 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' 
cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:03:31 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 1 addresses Nov 23 05:03:31 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:31 localhost podman[319374]: 2025-11-23 10:03:31.367519556 +0000 UTC m=+0.059428984 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:31 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 238 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 9.9 MiB/s wr, 107 op/s Nov 23 05:03:31 localhost dnsmasq[319036]: exiting on receipt of SIGTERM Nov 23 05:03:31 localhost systemd[1]: libpod-5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f.scope: Deactivated successfully. Nov 23 05:03:31 localhost podman[319402]: 2025-11-23 10:03:31.489093725 +0000 UTC m=+0.056431811 container kill 5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c74d0c76-0cc1-4147-a4c3-201b1d65e72d, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.520 262301 INFO neutron.agent.dhcp.agent [None req-35740536-afd2-464d-8278-4a7b7c70cdd2 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:30Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f2996cd3-d4c2-4d5d-996c-321b06df900e, ip_allocation=immediate, mac_address=fa:16:3e:f2:52:63, name=tempest-AllowedAddressPairTestJSON-541109221, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:25Z, description=, dns_domain=, id=53373f39-43fa-4ec3-9d41-940ffd04ef81, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1660971929, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1995, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2266, status=ACTIVE, subnets=['ccd68745-58fd-47f5-b61d-a4973f54b25f'], tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, 
updated_at=2025-11-23T10:03:27Z, vlan_transparent=None, network_id=53373f39-43fa-4ec3-9d41-940ffd04ef81, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2'], standard_attr_id=2289, status=DOWN, tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, updated_at=2025-11-23T10:03:30Z on network 53373f39-43fa-4ec3-9d41-940ffd04ef81#033[00m Nov 23 05:03:31 localhost podman[319420]: 2025-11-23 10:03:31.559036232 +0000 UTC m=+0.053758969 container died 5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c74d0c76-0cc1-4147-a4c3-201b1d65e72d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 05:03:31 localhost podman[319420]: 2025-11-23 10:03:31.595170326 +0000 UTC m=+0.089893033 container cleanup 5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c74d0c76-0cc1-4147-a4c3-201b1d65e72d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:31 localhost systemd[1]: libpod-conmon-5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f.scope: Deactivated successfully. Nov 23 05:03:31 localhost podman[319424]: 2025-11-23 10:03:31.62225294 +0000 UTC m=+0.111289962 container remove 5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c74d0c76-0cc1-4147-a4c3-201b1d65e72d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:03:31 localhost systemd[1]: var-lib-containers-storage-overlay-903eeac753a067b212dd32b902599e7be1d645a4ff3d4b3eb59b3f8d38a2cf11-merged.mount: Deactivated successfully. Nov 23 05:03:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5770ee6f59985efa8a17e225125dd7154e38be7be5d522951adf4e0bac58ce2f-userdata-shm.mount: Deactivated successfully. Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.733 262301 INFO neutron.agent.dhcp.agent [None req-b21248c7-8d33-4e08-96a7-1a87ff8ff45b - - - - - -] DHCP configuration for ports {'e9d0b7a0-c7b9-4609-9690-8966f250c6b7'} is completed#033[00m Nov 23 05:03:31 localhost systemd[1]: tmp-crun.IlngPV.mount: Deactivated successfully. 
Nov 23 05:03:31 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 2 addresses Nov 23 05:03:31 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:31 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:31 localhost podman[319465]: 2025-11-23 10:03:31.738212916 +0000 UTC m=+0.069396141 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:03:31 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:31.854 2 INFO neutron.agent.securitygroups_rpc [None req-493a4cea-33ac-4b44-8c12-ef9399b27484 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.904 262301 INFO neutron.agent.dhcp.agent [None req-5b269414-3fdc-449b-a8b1-af244d3989bb - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:31 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:31.906 262301 INFO neutron.agent.dhcp.agent [None req-5b269414-3fdc-449b-a8b1-af244d3989bb - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:31 localhost systemd[1]: run-netns-qdhcp\x2dc74d0c76\x2d0cc1\x2d4147\x2da4c3\x2d201b1d65e72d.mount: Deactivated successfully. 
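The pattern above is the DHCP agent rewriting the per-network addn_hosts/host/opts files and then signalling dnsmasq (here via "podman kill" against the qdhcp container) so it re-reads them, which is why dnsmasq logs "read .../addn_hosts - N addresses" after each port change. A minimal sketch of that reload under simpler assumptions: the network ID and file paths are copied from the log, the pid-file location is an assumption for illustration, and SIGHUP is used directly since dnsmasq documents re-reading --addn-hosts and the DHCP hosts/opts files on SIGHUP.

import os
import signal
from pathlib import Path

NETWORK = "53373f39-43fa-4ec3-9d41-940ffd04ef81"        # network id from the log
DHCP_DIR = Path("/var/lib/neutron/dhcp") / NETWORK
PID_FILE = DHCP_DIR / "pid"                             # assumed pid-file location

def add_host_entry(ip: str, hostname: str) -> None:
    # addn_hosts is the file dnsmasq reports re-reading in the entries above.
    with open(DHCP_DIR / "addn_hosts", "a", encoding="utf-8") as f:
        f.write(f"{ip} {hostname}\n")

def reload_dnsmasq() -> None:
    # dnsmasq re-reads /etc/hosts, --addn-hosts and the DHCP hosts/opts files on SIGHUP.
    pid = int(PID_FILE.read_text().strip())
    os.kill(pid, signal.SIGHUP)

add_host_entry("10.100.0.5", "host-10-100-0-5")   # illustrative values only
reload_dnsmasq()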
Nov 23 05:03:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:32.058 262301 INFO neutron.agent.dhcp.agent [None req-60e995ed-c298-413f-a784-9ef0a5c4ea1b - - - - - -] DHCP configuration for ports {'f2996cd3-d4c2-4d5d-996c-321b06df900e'} is completed#033[00m Nov 23 05:03:32 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e45: np0005532584.naxwxy(active, since 8m), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 05:03:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:32.082 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:32 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 1 addresses Nov 23 05:03:32 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:32 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:32 localhost podman[319502]: 2025-11-23 10:03:32.183902659 +0000 UTC m=+0.061144227 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:03:32 localhost nova_compute[280939]: 2025-11-23 10:03:32.356 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:32 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:32.397 2 INFO neutron.agent.securitygroups_rpc [None req-88038149-5d7c-44ef-b630-13d9beb8c240 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:32.420 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:32Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f355f90c-ae16-4efb-a1c4-51a438f559f5, ip_allocation=immediate, mac_address=fa:16:3e:78:54:35, name=tempest-AllowedAddressPairTestJSON-1797543677, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:25Z, description=, dns_domain=, id=53373f39-43fa-4ec3-9d41-940ffd04ef81, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1660971929, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1995, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2266, status=ACTIVE, subnets=['ccd68745-58fd-47f5-b61d-a4973f54b25f'], tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, 
updated_at=2025-11-23T10:03:27Z, vlan_transparent=None, network_id=53373f39-43fa-4ec3-9d41-940ffd04ef81, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2'], standard_attr_id=2294, status=DOWN, tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, updated_at=2025-11-23T10:03:32Z on network 53373f39-43fa-4ec3-9d41-940ffd04ef81#033[00m Nov 23 05:03:32 localhost nova_compute[280939]: 2025-11-23 10:03:32.520 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:32 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cafe6b35-cbcb-405c-8bdc-6e0c35312d23", "snap_name": "019c4f4e-7af3-4cd6-b0e3-b6cacc762334", "format": "json"}]: dispatch Nov 23 05:03:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:019c4f4e-7af3-4cd6-b0e3-b6cacc762334, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:019c4f4e-7af3-4cd6-b0e3-b6cacc762334, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:32 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 2 addresses Nov 23 05:03:32 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:32 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:32 localhost podman[319539]: 2025-11-23 10:03:32.665099757 +0000 UTC m=+0.068385311 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:03:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:32.968 262301 INFO neutron.agent.dhcp.agent [None req-da08a37e-7382-499a-8392-8fd64210561a - - - - - -] DHCP configuration for ports {'f355f90c-ae16-4efb-a1c4-51a438f559f5'} is completed#033[00m Nov 23 05:03:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e150 do_prune osdmap full prune enabled Nov 23 05:03:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e151 e151: 6 total, 6 up, 6 in Nov 23 05:03:33 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e151: 6 total, 6 up, 6 in Nov 23 05:03:33 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:33.425 2 INFO neutron.agent.securitygroups_rpc [None 
req-6155fed5-7bb5-4f1e-ab16-bb23a0937b77 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 238 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 4.8 MiB/s rd, 8.8 MiB/s wr, 96 op/s Nov 23 05:03:33 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 1 addresses Nov 23 05:03:33 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:33 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:33 localhost systemd[1]: tmp-crun.uFu0Q0.mount: Deactivated successfully. Nov 23 05:03:33 localhost podman[319574]: 2025-11-23 10:03:33.659033393 +0000 UTC m=+0.058164824 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 05:03:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e151 do_prune osdmap full prune enabled Nov 23 05:03:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e152 e152: 6 total, 6 up, 6 in Nov 23 05:03:34 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e152: 6 total, 6 up, 6 in Nov 23 05:03:34 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:34.201 262301 INFO neutron.agent.linux.ip_lib [None req-c96251c1-e169-468a-b62f-79a6c865e792 - - - - - -] Device tap1fc0640d-ff cannot be used as it has no MAC address#033[00m Nov 23 05:03:34 localhost nova_compute[280939]: 2025-11-23 10:03:34.225 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:34 localhost kernel: device tap1fc0640d-ff entered promiscuous mode Nov 23 05:03:34 localhost NetworkManager[5966]: [1763892214.2334] manager: (tap1fc0640d-ff): new Generic device (/org/freedesktop/NetworkManager/Devices/46) Nov 23 05:03:34 localhost ovn_controller[153771]: 2025-11-23T10:03:34Z|00269|binding|INFO|Claiming lport 1fc0640d-ff36-40bb-ad44-d26feda4fb67 for this chassis. Nov 23 05:03:34 localhost ovn_controller[153771]: 2025-11-23T10:03:34Z|00270|binding|INFO|1fc0640d-ff36-40bb-ad44-d26feda4fb67: Claiming unknown Nov 23 05:03:34 localhost nova_compute[280939]: 2025-11-23 10:03:34.236 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:34 localhost systemd-udevd[319605]: Network interface NamePolicy= disabled on kernel command line. 
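The ceph-mgr audit entries above record a client driving the volumes module through "fs subvolume create", "fs subvolume getpath" and "fs subvolume snapshot create". A minimal sketch of the same sequence using the ceph CLI equivalents of those mgr commands; the volume, subvolume and snapshot names plus the size/namespace_isolated/mode arguments are taken verbatim from the audit lines, and suitable client credentials (comparable to client.openstack) are assumed to be configured.

import subprocess

VOL = "cephfs"
SUBVOL = "cafe6b35-cbcb-405c-8bdc-6e0c35312d23"
SNAP = "019c4f4e-7af3-4cd6-b0e3-b6cacc762334"

def ceph(*args: str) -> str:
    # Thin wrapper around the ceph CLI; raises if the command fails.
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# "fs subvolume create" with the size/namespace_isolated/mode from the audit entry
ceph("fs", "subvolume", "create", VOL, SUBVOL,
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755")

# "fs subvolume getpath" resolves the path inside the filesystem
path = ceph("fs", "subvolume", "getpath", VOL, SUBVOL).strip()
print("subvolume path:", path)

# "fs subvolume snapshot create", dispatched a second later in the log
ceph("fs", "subvolume", "snapshot", "create", VOL, SUBVOL, SNAP)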
Nov 23 05:03:34 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:34.244 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-dd3ae3d3-fc6f-47ba-be79-2e7755c86808', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd3ae3d3-fc6f-47ba-be79-2e7755c86808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=280d33f0-8edf-47ca-95fe-5bd97384aac3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1fc0640d-ff36-40bb-ad44-d26feda4fb67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:03:34 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:34.246 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 1fc0640d-ff36-40bb-ad44-d26feda4fb67 in datapath dd3ae3d3-fc6f-47ba-be79-2e7755c86808 bound to our chassis#033[00m Nov 23 05:03:34 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:34.249 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dd3ae3d3-fc6f-47ba-be79-2e7755c86808 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:03:34 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:34.250 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[124aef29-cff7-4417-a748-bcf6b4c677a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:03:34 localhost journal[229336]: ethtool ioctl error on tap1fc0640d-ff: No such device Nov 23 05:03:34 localhost journal[229336]: ethtool ioctl error on tap1fc0640d-ff: No such device Nov 23 05:03:34 localhost nova_compute[280939]: 2025-11-23 10:03:34.267 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:34 localhost ovn_controller[153771]: 2025-11-23T10:03:34Z|00271|binding|INFO|Setting lport 1fc0640d-ff36-40bb-ad44-d26feda4fb67 ovn-installed in OVS Nov 23 05:03:34 localhost ovn_controller[153771]: 2025-11-23T10:03:34Z|00272|binding|INFO|Setting lport 1fc0640d-ff36-40bb-ad44-d26feda4fb67 up in Southbound Nov 23 05:03:34 localhost journal[229336]: ethtool ioctl error on tap1fc0640d-ff: No such device Nov 23 05:03:34 localhost journal[229336]: ethtool ioctl error on tap1fc0640d-ff: No such device Nov 23 05:03:34 localhost journal[229336]: ethtool ioctl error on tap1fc0640d-ff: No such device Nov 23 05:03:34 localhost journal[229336]: ethtool ioctl error on tap1fc0640d-ff: No such device Nov 23 05:03:34 localhost journal[229336]: ethtool ioctl error on tap1fc0640d-ff: No such device Nov 23 05:03:34 
localhost journal[229336]: ethtool ioctl error on tap1fc0640d-ff: No such device Nov 23 05:03:34 localhost nova_compute[280939]: 2025-11-23 10:03:34.309 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:34 localhost nova_compute[280939]: 2025-11-23 10:03:34.338 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:34 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:34.751 2 INFO neutron.agent.securitygroups_rpc [None req-0eb2e2be-3d24-4ace-a008-a7d7e29c4328 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:34 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:34.904 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:33Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=eb7489ec-f063-485b-b192-465df25d4b96, ip_allocation=immediate, mac_address=fa:16:3e:70:14:d9, name=tempest-AllowedAddressPairTestJSON-1101929225, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:25Z, description=, dns_domain=, id=53373f39-43fa-4ec3-9d41-940ffd04ef81, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1660971929, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1995, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2266, status=ACTIVE, subnets=['ccd68745-58fd-47f5-b61d-a4973f54b25f'], tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, updated_at=2025-11-23T10:03:27Z, vlan_transparent=None, network_id=53373f39-43fa-4ec3-9d41-940ffd04ef81, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2'], standard_attr_id=2300, status=DOWN, tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, updated_at=2025-11-23T10:03:34Z on network 53373f39-43fa-4ec3-9d41-940ffd04ef81#033[00m Nov 23 05:03:35 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 2 addresses Nov 23 05:03:35 localhost podman[319688]: 2025-11-23 10:03:35.119362592 +0000 UTC m=+0.062071195 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Nov 23 05:03:35 localhost dnsmasq-dhcp[319336]: read 
/var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e152 do_prune osdmap full prune enabled Nov 23 05:03:35 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:03:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e153 e153: 6 total, 6 up, 6 in Nov 23 05:03:35 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e153: 6 total, 6 up, 6 in Nov 23 05:03:35 localhost podman[319705]: Nov 23 05:03:35 localhost podman[319705]: 2025-11-23 10:03:35.229807487 +0000 UTC m=+0.111355855 container create 41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dd3ae3d3-fc6f-47ba-be79-2e7755c86808, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:03:35 localhost podman[319712]: 2025-11-23 10:03:35.250833976 +0000 UTC m=+0.108380423 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, release=1755695350, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6) Nov 23 05:03:35 localhost podman[319712]: 2025-11-23 10:03:35.260169953 +0000 UTC m=+0.117716390 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container) Nov 23 05:03:35 localhost systemd[1]: Started libpod-conmon-41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300.scope. Nov 23 05:03:35 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 05:03:35 localhost podman[319705]: 2025-11-23 10:03:35.186818232 +0000 UTC m=+0.068366640 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:03:35 localhost systemd[1]: Started libcrun container. Nov 23 05:03:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199619c0eeb5ef9eed2a293ac45c30ecfbe973c491ff6e0939e7798b7ab3b346/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:03:35 localhost podman[319705]: 2025-11-23 10:03:35.307530394 +0000 UTC m=+0.189078762 container init 41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dd3ae3d3-fc6f-47ba-be79-2e7755c86808, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:35 localhost podman[319705]: 2025-11-23 10:03:35.314351604 +0000 UTC m=+0.195899962 container start 41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dd3ae3d3-fc6f-47ba-be79-2e7755c86808, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:35 localhost dnsmasq[319752]: started, version 2.85 cachesize 150 Nov 23 05:03:35 localhost dnsmasq[319752]: DNS service limited to local subnets Nov 23 05:03:35 localhost dnsmasq[319752]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:03:35 localhost dnsmasq[319752]: warning: no upstream servers configured Nov 23 05:03:35 localhost dnsmasq-dhcp[319752]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:03:35 localhost dnsmasq[319752]: read /var/lib/neutron/dhcp/dd3ae3d3-fc6f-47ba-be79-2e7755c86808/addn_hosts - 0 addresses Nov 23 05:03:35 localhost dnsmasq-dhcp[319752]: read /var/lib/neutron/dhcp/dd3ae3d3-fc6f-47ba-be79-2e7755c86808/host Nov 23 05:03:35 localhost dnsmasq-dhcp[319752]: read /var/lib/neutron/dhcp/dd3ae3d3-fc6f-47ba-be79-2e7755c86808/opts Nov 23 05:03:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 284 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 188 op/s Nov 23 05:03:35 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:35.506 262301 INFO neutron.agent.dhcp.agent [None req-657f9846-dd51-45db-98aa-0f5391093315 - - - - - -] DHCP configuration for ports {'eb7489ec-f063-485b-b192-465df25d4b96', '5eb02044-56ed-4432-bc1f-9e25ca66b426'} is completed#033[00m Nov 23 05:03:35 localhost ovn_controller[153771]: 2025-11-23T10:03:35Z|00273|binding|INFO|Removing iface tap1fc0640d-ff ovn-installed in OVS Nov 23 05:03:35 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:35.626 159415 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 
70d4d7ae-7cfb-4f5c-b861-f733eb441d14 with type ""#033[00m Nov 23 05:03:35 localhost ovn_controller[153771]: 2025-11-23T10:03:35Z|00274|binding|INFO|Removing lport 1fc0640d-ff36-40bb-ad44-d26feda4fb67 ovn-installed in OVS Nov 23 05:03:35 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:35.629 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-dd3ae3d3-fc6f-47ba-be79-2e7755c86808', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd3ae3d3-fc6f-47ba-be79-2e7755c86808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=280d33f0-8edf-47ca-95fe-5bd97384aac3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1fc0640d-ff36-40bb-ad44-d26feda4fb67) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:03:35 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:35.631 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 1fc0640d-ff36-40bb-ad44-d26feda4fb67 in datapath dd3ae3d3-fc6f-47ba-be79-2e7755c86808 unbound from our chassis#033[00m Nov 23 05:03:35 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:35.634 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dd3ae3d3-fc6f-47ba-be79-2e7755c86808, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:03:35 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:35.635 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[4f17fe9d-cebf-47e4-9822-a03a52fab543]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:03:35 localhost dnsmasq[319752]: read /var/lib/neutron/dhcp/dd3ae3d3-fc6f-47ba-be79-2e7755c86808/addn_hosts - 0 addresses Nov 23 05:03:35 localhost dnsmasq-dhcp[319752]: read /var/lib/neutron/dhcp/dd3ae3d3-fc6f-47ba-be79-2e7755c86808/host Nov 23 05:03:35 localhost dnsmasq-dhcp[319752]: read /var/lib/neutron/dhcp/dd3ae3d3-fc6f-47ba-be79-2e7755c86808/opts Nov 23 05:03:35 localhost nova_compute[280939]: 2025-11-23 10:03:35.671 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:35 localhost podman[319770]: 2025-11-23 10:03:35.672179867 +0000 UTC m=+0.060922500 container kill 41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dd3ae3d3-fc6f-47ba-be79-2e7755c86808, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:03:35 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:35.875 262301 INFO neutron.agent.dhcp.agent [None req-82935c96-0098-424b-b405-f63d5c1fa748 - - - - - -] DHCP configuration for ports {'1fc0640d-ff36-40bb-ad44-d26feda4fb67', '5eb02044-56ed-4432-bc1f-9e25ca66b426'} is completed#033[00m Nov 23 05:03:35 localhost dnsmasq[319752]: exiting on receipt of SIGTERM Nov 23 05:03:35 localhost podman[319807]: 2025-11-23 10:03:35.977725799 +0000 UTC m=+0.060941961 container kill 41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dd3ae3d3-fc6f-47ba-be79-2e7755c86808, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:03:35 localhost systemd[1]: libpod-41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300.scope: Deactivated successfully. Nov 23 05:03:36 localhost podman[319820]: 2025-11-23 10:03:36.049326917 +0000 UTC m=+0.055676878 container died 41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dd3ae3d3-fc6f-47ba-be79-2e7755c86808, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 05:03:36 localhost podman[319820]: 2025-11-23 10:03:36.078540427 +0000 UTC m=+0.084890328 container cleanup 41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dd3ae3d3-fc6f-47ba-be79-2e7755c86808, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:03:36 localhost systemd[1]: libpod-conmon-41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300.scope: Deactivated successfully. Nov 23 05:03:36 localhost nova_compute[280939]: 2025-11-23 10:03:36.108 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:36 localhost systemd[1]: var-lib-containers-storage-overlay-199619c0eeb5ef9eed2a293ac45c30ecfbe973c491ff6e0939e7798b7ab3b346-merged.mount: Deactivated successfully. Nov 23 05:03:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300-userdata-shm.mount: Deactivated successfully. 
Nov 23 05:03:36 localhost podman[319822]: 2025-11-23 10:03:36.127236288 +0000 UTC m=+0.128382789 container remove 41a3ef1e862e9fe2eac446d0dc5cbb5d67c6552b9ce9f4ae6fc14dafd88ed300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dd3ae3d3-fc6f-47ba-be79-2e7755c86808, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:36 localhost nova_compute[280939]: 2025-11-23 10:03:36.141 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:36 localhost kernel: device tap1fc0640d-ff left promiscuous mode Nov 23 05:03:36 localhost nova_compute[280939]: 2025-11-23 10:03:36.153 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:36 localhost systemd[1]: run-netns-qdhcp\x2ddd3ae3d3\x2dfc6f\x2d47ba\x2dbe79\x2d2e7755c86808.mount: Deactivated successfully. Nov 23 05:03:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:36.169 262301 INFO neutron.agent.dhcp.agent [None req-b3735b65-9a25-4138-b9f7-4fb19bc1cdf4 - - - - - -] Synchronizing state#033[00m Nov 23 05:03:36 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:36.190 2 INFO neutron.agent.securitygroups_rpc [None req-a71f4ada-2ee4-4e29-a05b-0c137a49cc85 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:36.426 262301 INFO neutron.agent.dhcp.agent [None req-d2b7e70a-5b7d-4ba2-9dc3-a5516ee5610c - - - - - -] All active networks have been fetched through RPC.#033[00m Nov 23 05:03:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:36.428 262301 INFO neutron.agent.dhcp.agent [-] Starting network 881c97fa-7871-450c-b8fd-eb49d33b3df5 dhcp configuration#033[00m Nov 23 05:03:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:36.429 262301 INFO neutron.agent.dhcp.agent [-] Finished network 881c97fa-7871-450c-b8fd-eb49d33b3df5 dhcp configuration#033[00m Nov 23 05:03:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:36.429 262301 INFO neutron.agent.dhcp.agent [-] Starting network dd3ae3d3-fc6f-47ba-be79-2e7755c86808 dhcp configuration#033[00m Nov 23 05:03:36 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:36.441 2 INFO neutron.agent.securitygroups_rpc [None req-606c84ff-d0b3-4da5-8d08-f4db462a1bb4 a8a12d646f734219a5736bd9a89106d3 cd27ceae55c44d478998092e7554fd8a - - default default] Security group member updated ['57d92d06-0a9a-469b-b69f-4fb9e6e560cf']#033[00m Nov 23 05:03:36 localhost openstack_network_exporter[241732]: ERROR 10:03:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:03:36 localhost openstack_network_exporter[241732]: ERROR 10:03:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:03:36 localhost openstack_network_exporter[241732]: ERROR 10:03:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:03:36 localhost 
openstack_network_exporter[241732]: ERROR 10:03:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:03:36 localhost openstack_network_exporter[241732]: Nov 23 05:03:36 localhost openstack_network_exporter[241732]: ERROR 10:03:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:03:36 localhost openstack_network_exporter[241732]: Nov 23 05:03:36 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:36.977 2 INFO neutron.agent.securitygroups_rpc [None req-f95b4467-0d9c-4e77-aa60-8f1596442e50 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.005 262301 INFO neutron.agent.dhcp.agent [None req-51a3540c-7588-42b5-a5d2-00e3694e076f - - - - - -] Finished network dd3ae3d3-fc6f-47ba-be79-2e7755c86808 dhcp configuration#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.006 262301 INFO neutron.agent.dhcp.agent [None req-d2b7e70a-5b7d-4ba2-9dc3-a5516ee5610c - - - - - -] Synchronizing state complete#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.006 262301 INFO neutron.agent.dhcp.agent [None req-d2b7e70a-5b7d-4ba2-9dc3-a5516ee5610c - - - - - -] Synchronizing state#033[00m Nov 23 05:03:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cafe6b35-cbcb-405c-8bdc-6e0c35312d23", "snap_name": "019c4f4e-7af3-4cd6-b0e3-b6cacc762334_c6b7c25b-ee07-417d-94a7-231549eff0ad", "force": true, "format": "json"}]: dispatch Nov 23 05:03:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:019c4f4e-7af3-4cd6-b0e3-b6cacc762334_c6b7c25b-ee07-417d-94a7-231549eff0ad, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cafe6b35-cbcb-405c-8bdc-6e0c35312d23/.meta.tmp' Nov 23 05:03:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cafe6b35-cbcb-405c-8bdc-6e0c35312d23/.meta.tmp' to config b'/volumes/_nogroup/cafe6b35-cbcb-405c-8bdc-6e0c35312d23/.meta' Nov 23 05:03:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:019c4f4e-7af3-4cd6-b0e3-b6cacc762334_c6b7c25b-ee07-417d-94a7-231549eff0ad, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cafe6b35-cbcb-405c-8bdc-6e0c35312d23", "snap_name": "019c4f4e-7af3-4cd6-b0e3-b6cacc762334", "force": true, "format": "json"}]: dispatch Nov 23 05:03:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:019c4f4e-7af3-4cd6-b0e3-b6cacc762334, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:37 
localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cafe6b35-cbcb-405c-8bdc-6e0c35312d23/.meta.tmp' Nov 23 05:03:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cafe6b35-cbcb-405c-8bdc-6e0c35312d23/.meta.tmp' to config b'/volumes/_nogroup/cafe6b35-cbcb-405c-8bdc-6e0c35312d23/.meta' Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.111 262301 INFO neutron.agent.dhcp.agent [None req-5d88389d-5d8a-422d-81ef-8a2439e79d87 - - - - - -] DHCP configuration for ports {'5eb02044-56ed-4432-bc1f-9e25ca66b426'} is completed#033[00m Nov 23 05:03:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:019c4f4e-7af3-4cd6-b0e3-b6cacc762334, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e153 do_prune osdmap full prune enabled Nov 23 05:03:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e154 e154: 6 total, 6 up, 6 in Nov 23 05:03:37 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e154: 6 total, 6 up, 6 in Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.265 262301 INFO neutron.agent.dhcp.agent [None req-1dfd3e53-f419-4f69-8164-c0ffc1916169 - - - - - -] All active networks have been fetched through RPC.#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.267 262301 INFO neutron.agent.dhcp.agent [-] Starting network 881c97fa-7871-450c-b8fd-eb49d33b3df5 dhcp configuration#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.267 262301 INFO neutron.agent.dhcp.agent [-] Finished network 881c97fa-7871-450c-b8fd-eb49d33b3df5 dhcp configuration#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.268 262301 INFO neutron.agent.dhcp.agent [-] Starting network dd3ae3d3-fc6f-47ba-be79-2e7755c86808 dhcp configuration#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.268 262301 INFO neutron.agent.dhcp.agent [-] Finished network dd3ae3d3-fc6f-47ba-be79-2e7755c86808 dhcp configuration#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.269 262301 INFO neutron.agent.dhcp.agent [None req-1dfd3e53-f419-4f69-8164-c0ffc1916169 - - - - - -] Synchronizing state complete#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.339 262301 INFO neutron.agent.dhcp.agent [None req-d7c6e052-d434-412e-8d8a-495943870c3a - - - - - -] DHCP configuration for ports {'5eb02044-56ed-4432-bc1f-9e25ca66b426'} is completed#033[00m Nov 23 05:03:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 284 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 5.0 MiB/s rd, 4.9 MiB/s wr, 190 op/s Nov 23 05:03:37 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 1 addresses Nov 23 05:03:37 localhost podman[319863]: 2025-11-23 10:03:37.490548126 +0000 UTC m=+0.062047744 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, tcib_managed=true, 
org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 05:03:37 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:37 localhost systemd[1]: tmp-crun.X9zOep.mount: Deactivated successfully. Nov 23 05:03:37 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:37 localhost nova_compute[280939]: 2025-11-23 10:03:37.523 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.597 262301 INFO neutron.agent.dhcp.agent [None req-fdcb8ede-c82d-40cb-89f4-38a9fec57204 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:37 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:37.611 2 INFO neutron.agent.securitygroups_rpc [None req-dffb97f7-0929-42aa-ac92-4890704581f2 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.652 262301 INFO neutron.agent.dhcp.agent [None req-7d50b9b9-f113-4a91-9fe0-7b6c06a5bd54 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:36Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=290db817-0452-4601-a27e-1a4ee338738d, ip_allocation=immediate, mac_address=fa:16:3e:50:86:14, name=tempest-AllowedAddressPairTestJSON-1697650660, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:25Z, description=, dns_domain=, id=53373f39-43fa-4ec3-9d41-940ffd04ef81, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1660971929, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1995, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2266, status=ACTIVE, subnets=['ccd68745-58fd-47f5-b61d-a4973f54b25f'], tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, updated_at=2025-11-23T10:03:27Z, vlan_transparent=None, network_id=53373f39-43fa-4ec3-9d41-940ffd04ef81, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2'], standard_attr_id=2313, status=DOWN, tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, updated_at=2025-11-23T10:03:36Z on network 53373f39-43fa-4ec3-9d41-940ffd04ef81#033[00m Nov 23 05:03:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:37.690 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:37 localhost dnsmasq[319336]: read 
/var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 2 addresses Nov 23 05:03:37 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:37 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:37 localhost podman[319901]: 2025-11-23 10:03:37.866542849 +0000 UTC m=+0.064243022 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 05:03:37 localhost nova_compute[280939]: 2025-11-23 10:03:37.920 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e154 do_prune osdmap full prune enabled Nov 23 05:03:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e155 e155: 6 total, 6 up, 6 in Nov 23 05:03:37 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e155: 6 total, 6 up, 6 in Nov 23 05:03:38 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:38.045 262301 INFO neutron.agent.dhcp.agent [None req-de59c630-a404-4c5b-86e9-10cf85b07ce4 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:37Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a8e6195e-4bba-4efa-b87d-1e13ff67da63, ip_allocation=immediate, mac_address=fa:16:3e:d4:b4:0d, name=tempest-AllowedAddressPairTestJSON-577604383, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:25Z, description=, dns_domain=, id=53373f39-43fa-4ec3-9d41-940ffd04ef81, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-1660971929, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1995, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2266, status=ACTIVE, subnets=['ccd68745-58fd-47f5-b61d-a4973f54b25f'], tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, updated_at=2025-11-23T10:03:27Z, vlan_transparent=None, network_id=53373f39-43fa-4ec3-9d41-940ffd04ef81, port_security_enabled=True, project_id=6fc3fa728d6f4403acd9944d81eaeb18, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2'], standard_attr_id=2315, status=DOWN, tags=[], tenant_id=6fc3fa728d6f4403acd9944d81eaeb18, updated_at=2025-11-23T10:03:37Z on network 
53373f39-43fa-4ec3-9d41-940ffd04ef81#033[00m Nov 23 05:03:38 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:38.118 262301 INFO neutron.agent.dhcp.agent [None req-7a62a72c-a6ea-4082-81e0-54854e4fcdca - - - - - -] DHCP configuration for ports {'290db817-0452-4601-a27e-1a4ee338738d'} is completed#033[00m Nov 23 05:03:38 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 3 addresses Nov 23 05:03:38 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:38 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:38 localhost podman[319940]: 2025-11-23 10:03:38.288074367 +0000 UTC m=+0.062900771 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 05:03:38 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:38.335 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:38 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:38.960 262301 INFO neutron.agent.dhcp.agent [None req-cc76f1bd-dc46-4829-853d-0998d92c8bb7 - - - - - -] DHCP configuration for ports {'a8e6195e-4bba-4efa-b87d-1e13ff67da63'} is completed#033[00m Nov 23 05:03:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e155 do_prune osdmap full prune enabled Nov 23 05:03:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e156 e156: 6 total, 6 up, 6 in Nov 23 05:03:39 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e156: 6 total, 6 up, 6 in Nov 23 05:03:39 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:39.216 2 INFO neutron.agent.securitygroups_rpc [None req-9d87b2e2-86da-485b-bbea-cfc6692748e1 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:39 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 2 addresses Nov 23 05:03:39 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:39 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:39 localhost podman[319977]: 2025-11-23 10:03:39.451544881 +0000 UTC m=+0.062359093 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 05:03:39 localhost ceph-mgr[286671]: log_channel(cluster) log 
[DBG] : pgmap v320: 177 pgs: 177 active+clean; 192 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 175 KiB/s rd, 24 KiB/s wr, 243 op/s Nov 23 05:03:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:03:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:03:39 localhost podman[319992]: 2025-11-23 10:03:39.563934827 +0000 UTC m=+0.080164662 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:03:39 localhost podman[319992]: 2025-11-23 10:03:39.576428402 +0000 UTC m=+0.092658288 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:03:39 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 05:03:39 localhost podman[319991]: 2025-11-23 10:03:39.670963457 +0000 UTC m=+0.190594668 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 23 05:03:39 localhost podman[319991]: 2025-11-23 10:03:39.685367152 +0000 UTC m=+0.204998303 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 23 05:03:39 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:03:39 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:39.944 2 INFO neutron.agent.securitygroups_rpc [None req-0cc7d2c8-7386-4953-9ec2-42e302a377e6 a8a12d646f734219a5736bd9a89106d3 cd27ceae55c44d478998092e7554fd8a - - default default] Security group member updated ['57d92d06-0a9a-469b-b69f-4fb9e6e560cf']#033[00m Nov 23 05:03:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e156 do_prune osdmap full prune enabled Nov 23 05:03:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e157 e157: 6 total, 6 up, 6 in Nov 23 05:03:40 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e157: 6 total, 6 up, 6 in Nov 23 05:03:40 localhost dnsmasq[318905]: exiting on receipt of SIGTERM Nov 23 05:03:40 localhost podman[320111]: 2025-11-23 10:03:40.290127248 +0000 UTC m=+0.052501309 container kill 9283b210ce4f818b1807fe592a88973442fe5d620377c42e26781e5bdbe37a18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 23 05:03:40 localhost systemd[1]: libpod-9283b210ce4f818b1807fe592a88973442fe5d620377c42e26781e5bdbe37a18.scope: Deactivated successfully. 
Nov 23 05:03:40 localhost podman[320130]: 2025-11-23 10:03:40.348727755 +0000 UTC m=+0.041576692 container died 9283b210ce4f818b1807fe592a88973442fe5d620377c42e26781e5bdbe37a18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 05:03:40 localhost podman[320130]: 2025-11-23 10:03:40.391619408 +0000 UTC m=+0.084468315 container remove 9283b210ce4f818b1807fe592a88973442fe5d620377c42e26781e5bdbe37a18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:03:40 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cafe6b35-cbcb-405c-8bdc-6e0c35312d23", "format": "json"}]: dispatch Nov 23 05:03:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:03:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:03:40 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cafe6b35-cbcb-405c-8bdc-6e0c35312d23' of type subvolume Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:40.395+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cafe6b35-cbcb-405c-8bdc-6e0c35312d23' of type subvolume Nov 23 05:03:40 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cafe6b35-cbcb-405c-8bdc-6e0c35312d23", "force": true, "format": "json"}]: dispatch Nov 23 05:03:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:40 localhost systemd[1]: libpod-conmon-9283b210ce4f818b1807fe592a88973442fe5d620377c42e26781e5bdbe37a18.scope: Deactivated successfully. Nov 23 05:03:40 localhost systemd[1]: var-lib-containers-storage-overlay-e4f22bed24036c7e514196d80dfa96fc5807bffa371b61c03b28691c13d521fa-merged.mount: Deactivated successfully. Nov 23 05:03:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9283b210ce4f818b1807fe592a88973442fe5d620377c42e26781e5bdbe37a18-userdata-shm.mount: Deactivated successfully. 
Nov 23 05:03:40 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cafe6b35-cbcb-405c-8bdc-6e0c35312d23'' moved to trashcan Nov 23 05:03:40 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:03:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cafe6b35-cbcb-405c-8bdc-6e0c35312d23, vol_name:cephfs) < "" Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:40.501+0000 7f9ccab0e640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:40.501+0000 7f9ccab0e640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:40.501+0000 7f9ccab0e640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:40.501+0000 7f9ccab0e640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:40.501+0000 7f9ccab0e640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:40.586 2 INFO neutron.agent.securitygroups_rpc [None req-bd23e853-4daf-4921-98db-bb97f636a505 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:40.602+0000 7f9cc9b0c640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:40.602+0000 7f9cc9b0c640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:40.602+0000 7f9cc9b0c640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 
2025-11-23T10:03:40.602+0000 7f9cc9b0c640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:03:40.602+0000 7f9cc9b0c640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:03:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:03:40 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:03:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:03:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:03:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:03:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:03:40 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 326d19f9-72f4-40b5-90fe-53d87e1c1cad (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:03:40 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 326d19f9-72f4-40b5-90fe-53d87e1c1cad (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:03:40 localhost ceph-mgr[286671]: [progress INFO root] Completed event 326d19f9-72f4-40b5-90fe-53d87e1c1cad (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:03:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:03:40 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:03:40 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 1 addresses Nov 23 05:03:40 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:40 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:40 localhost podman[320249]: 2025-11-23 10:03:40.864594792 +0000 UTC m=+0.058638129 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true) Nov 23 05:03:41 localhost 
ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:03:41 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:03:41 localhost nova_compute[280939]: 2025-11-23 10:03:41.153 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:41 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:41.213 2 INFO neutron.agent.securitygroups_rpc [None req-8d10cea8-bd4a-4441-b879-9913f6e3c03c 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:03:41 localhost podman[320294]: Nov 23 05:03:41 localhost podman[320294]: 2025-11-23 10:03:41.31565227 +0000 UTC m=+0.089822311 container create 3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 23 05:03:41 localhost systemd[1]: Started libpod-conmon-3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c.scope. Nov 23 05:03:41 localhost systemd[1]: Started libcrun container. Nov 23 05:03:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbced11d7f92698ab8efa82db4215cc5ab2bbd270474d46ce75dcbbc29a8a29/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:03:41 localhost podman[320294]: 2025-11-23 10:03:41.273965645 +0000 UTC m=+0.048135736 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:03:41 localhost podman[320294]: 2025-11-23 10:03:41.382942395 +0000 UTC m=+0.157112436 container init 3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:41 localhost podman[320294]: 2025-11-23 10:03:41.391707425 +0000 UTC m=+0.165877466 container start 3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:03:41 localhost dnsmasq[320312]: started, version 2.85 cachesize 150 Nov 23 
05:03:41 localhost dnsmasq[320312]: DNS service limited to local subnets Nov 23 05:03:41 localhost dnsmasq[320312]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:03:41 localhost dnsmasq[320312]: warning: no upstream servers configured Nov 23 05:03:41 localhost dnsmasq-dhcp[320312]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:03:41 localhost dnsmasq[320312]: read /var/lib/neutron/dhcp/3791c415-ea14-4f6d-8194-9906e7c3c59b/addn_hosts - 0 addresses Nov 23 05:03:41 localhost dnsmasq-dhcp[320312]: read /var/lib/neutron/dhcp/3791c415-ea14-4f6d-8194-9906e7c3c59b/host Nov 23 05:03:41 localhost dnsmasq-dhcp[320312]: read /var/lib/neutron/dhcp/3791c415-ea14-4f6d-8194-9906e7c3c59b/opts Nov 23 05:03:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 192 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 177 KiB/s rd, 25 KiB/s wr, 245 op/s Nov 23 05:03:41 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:41.569 262301 INFO neutron.agent.dhcp.agent [None req-70c3fb32-d28e-471c-9004-a3780af35401 - - - - - -] DHCP configuration for ports {'bcfecdd0-d130-4bf3-b41f-d1e271870c7f', '5e6a7360-a279-4af1-ba70-a75b28e78072'} is completed#033[00m Nov 23 05:03:41 localhost ovn_controller[153771]: 2025-11-23T10:03:41Z|00275|binding|INFO|Removing iface tapbcfecdd0-d1 ovn-installed in OVS Nov 23 05:03:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:41.619 159415 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b232dbe7-325e-4796-a408-3cc77ecba756 with type ""#033[00m Nov 23 05:03:41 localhost ovn_controller[153771]: 2025-11-23T10:03:41Z|00276|binding|INFO|Removing lport bcfecdd0-d130-4bf3-b41f-d1e271870c7f ovn-installed in OVS Nov 23 05:03:41 localhost nova_compute[280939]: 2025-11-23 10:03:41.621 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:41.622 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-3791c415-ea14-4f6d-8194-9906e7c3c59b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3791c415-ea14-4f6d-8194-9906e7c3c59b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2de80a98-28a3-4797-b9f7-d34b6bb2bf95, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bcfecdd0-d130-4bf3-b41f-d1e271870c7f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 
05:03:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:41.624 159415 INFO neutron.agent.ovn.metadata.agent [-] Port bcfecdd0-d130-4bf3-b41f-d1e271870c7f in datapath 3791c415-ea14-4f6d-8194-9906e7c3c59b unbound from our chassis#033[00m Nov 23 05:03:41 localhost nova_compute[280939]: 2025-11-23 10:03:41.626 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:41.629 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3791c415-ea14-4f6d-8194-9906e7c3c59b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:03:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:41.630 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c6c50f-2f22-40bd-83e2-776ffaf2ab60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:03:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e157 do_prune osdmap full prune enabled Nov 23 05:03:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e158 e158: 6 total, 6 up, 6 in Nov 23 05:03:41 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e158: 6 total, 6 up, 6 in Nov 23 05:03:41 localhost podman[320329]: 2025-11-23 10:03:41.693630464 +0000 UTC m=+0.046655529 container kill 3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 05:03:41 localhost dnsmasq[320312]: exiting on receipt of SIGTERM Nov 23 05:03:41 localhost systemd[1]: libpod-3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c.scope: Deactivated successfully. 
Nov 23 05:03:41 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:41.708 2 INFO neutron.agent.securitygroups_rpc [None req-99b4e496-02de-4948-b003-eb8832f49bd1 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m Nov 23 05:03:41 localhost podman[320343]: 2025-11-23 10:03:41.756916266 +0000 UTC m=+0.046286858 container died 3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 05:03:41 localhost podman[320343]: 2025-11-23 10:03:41.785863648 +0000 UTC m=+0.075234200 container cleanup 3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:41 localhost systemd[1]: libpod-conmon-3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c.scope: Deactivated successfully. 
Nov 23 05:03:41 localhost podman[320344]: 2025-11-23 10:03:41.832530168 +0000 UTC m=+0.117826935 container remove 3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:03:41 localhost nova_compute[280939]: 2025-11-23 10:03:41.845 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:41 localhost kernel: device tapbcfecdd0-d1 left promiscuous mode Nov 23 05:03:41 localhost nova_compute[280939]: 2025-11-23 10:03:41.865 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:41 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:41.889 262301 INFO neutron.agent.dhcp.agent [None req-94cb1c5c-bc84-4586-bd36-7e65f580ba28 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:41 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:41.910 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:41 localhost dnsmasq[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/addn_hosts - 0 addresses Nov 23 05:03:41 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/host Nov 23 05:03:41 localhost dnsmasq-dhcp[319336]: read /var/lib/neutron/dhcp/53373f39-43fa-4ec3-9d41-940ffd04ef81/opts Nov 23 05:03:41 localhost podman[320389]: 2025-11-23 10:03:41.945235373 +0000 UTC m=+0.060597019 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 05:03:42 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:42.180 2 INFO neutron.agent.securitygroups_rpc [None req-dbf580d8-52e8-4ecb-9364-932e14668854 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Security group rule updated ['d77fc436-3ab1-42e0-a52b-861d18fcc237']#033[00m Nov 23 05:03:42 localhost nova_compute[280939]: 2025-11-23 10:03:42.331 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:42 localhost systemd[1]: tmp-crun.Q7mNdr.mount: Deactivated successfully. Nov 23 05:03:42 localhost systemd[1]: var-lib-containers-storage-overlay-cdbced11d7f92698ab8efa82db4215cc5ab2bbd270474d46ce75dcbbc29a8a29-merged.mount: Deactivated successfully. 
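The dnsmasq lines above show neutron's per-network DHCP servers re-reading their addn_hosts, host and opts files from /var/lib/neutron/dhcp/<network-id>/ ("read ... - 0 addresses"). Below is a minimal sketch, not neutron code, that counts non-empty entries in those files on a compute node; only the directory layout and the network id are taken from the log, and the helper name dnsmasq_file_counts is made up here.

from pathlib import Path

def dnsmasq_file_counts(network_id, base="/var/lib/neutron/dhcp"):
    # Directory layout as seen in the log: <base>/<network-id>/{addn_hosts,host,opts}
    net_dir = Path(base) / network_id
    counts = {}
    for name in ("addn_hosts", "host", "opts"):
        try:
            entries = [line for line in (net_dir / name).read_text().splitlines()
                       if line.strip()]
        except FileNotFoundError:
            entries = []
        counts[name] = len(entries)
    return counts

if __name__ == "__main__":
    # network id copied from the dnsmasq[319336] lines above
    print(dnsmasq_file_counts("53373f39-43fa-4ec3-9d41-940ffd04ef81"))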
Nov 23 05:03:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3634921c8fba17aebe65a7e8426022e19f0c75ff35cfd78ec8b0dbeeae448f2c-userdata-shm.mount: Deactivated successfully. Nov 23 05:03:42 localhost systemd[1]: run-netns-qdhcp\x2d3791c415\x2dea14\x2d4f6d\x2d8194\x2d9906e7c3c59b.mount: Deactivated successfully. Nov 23 05:03:42 localhost nova_compute[280939]: 2025-11-23 10:03:42.526 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:42 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e46: np0005532584.naxwxy(active, since 8m), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 05:03:42 localhost dnsmasq[319336]: exiting on receipt of SIGTERM Nov 23 05:03:42 localhost podman[320428]: 2025-11-23 10:03:42.744083654 +0000 UTC m=+0.059268178 container kill afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:03:42 localhost systemd[1]: libpod-afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f.scope: Deactivated successfully. Nov 23 05:03:42 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:42.746 2 INFO neutron.agent.securitygroups_rpc [None req-d8c6f897-4254-46a6-9c9c-683b3a672b23 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Security group rule updated ['d77fc436-3ab1-42e0-a52b-861d18fcc237']#033[00m Nov 23 05:03:42 localhost podman[320442]: 2025-11-23 10:03:42.806480649 +0000 UTC m=+0.048898500 container died afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:03:42 localhost podman[320442]: 2025-11-23 10:03:42.841076625 +0000 UTC m=+0.083494456 container cleanup afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 05:03:42 localhost systemd[1]: libpod-conmon-afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f.scope: Deactivated successfully. 
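The run-netns-qdhcp\x2d... and overlay\x2dcontainers-... mount units above carry systemd's escaped path names. The sketch below is a rough approximation of that escaping ("/" becomes "-", anything outside a small safe set becomes \xNN); it only loosely mirrors systemd's rules, and systemd-escape --path remains the authoritative tool.

SAFE = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")

def escape_path_for_unit(path: str) -> str:
    # approximate systemd path escaping: join path components with '-',
    # escape unsafe characters (including literal '-') as \xNN
    parts = path.strip("/").split("/")
    escaped = ["".join(c if c in SAFE else "\\x%02x" % ord(c) for c in part)
               for part in parts]
    return "-".join(escaped)

if __name__ == "__main__":
    # reproduces the netns mount unit name seen in the log
    print(escape_path_for_unit(
        "/run/netns/qdhcp-3791c415-ea14-4f6d-8194-9906e7c3c59b") + ".mount")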
Nov 23 05:03:42 localhost podman[320444]: 2025-11-23 10:03:42.890952163 +0000 UTC m=+0.125373297 container remove afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-53373f39-43fa-4ec3-9d41-940ffd04ef81, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:03:42 localhost nova_compute[280939]: 2025-11-23 10:03:42.902 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:42 localhost ovn_controller[153771]: 2025-11-23T10:03:42Z|00277|binding|INFO|Releasing lport b54f04b5-c09a-4bc6-b5cd-b239662f2c79 from this chassis (sb_readonly=0) Nov 23 05:03:42 localhost kernel: device tapb54f04b5-c0 left promiscuous mode Nov 23 05:03:42 localhost ovn_controller[153771]: 2025-11-23T10:03:42Z|00278|binding|INFO|Setting lport b54f04b5-c09a-4bc6-b5cd-b239662f2c79 down in Southbound Nov 23 05:03:42 localhost nova_compute[280939]: 2025-11-23 10:03:42.921 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e158 do_prune osdmap full prune enabled Nov 23 05:03:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e159 e159: 6 total, 6 up, 6 in Nov 23 05:03:42 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e159: 6 total, 6 up, 6 in Nov 23 05:03:43 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:43.103 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-53373f39-43fa-4ec3-9d41-940ffd04ef81', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53373f39-43fa-4ec3-9d41-940ffd04ef81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fc3fa728d6f4403acd9944d81eaeb18', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee133ce0-4c65-44e2-b472-fe125a148b66, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b54f04b5-c09a-4bc6-b5cd-b239662f2c79) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:03:43 localhost 
ovn_metadata_agent[159410]: 2025-11-23 10:03:43.105 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b54f04b5-c09a-4bc6-b5cd-b239662f2c79 in datapath 53373f39-43fa-4ec3-9d41-940ffd04ef81 unbound from our chassis#033[00m Nov 23 05:03:43 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:43.107 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 53373f39-43fa-4ec3-9d41-940ffd04ef81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:03:43 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:43.108 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[a14fbd0d-c32f-45b6-96be-ba31471664b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:03:43 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:43.339 262301 INFO neutron.agent.dhcp.agent [None req-a17135c3-095b-4926-8111-f4f4c53a9b33 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:43 localhost systemd[1]: var-lib-containers-storage-overlay-94ff772cbccff60aea1f356dd884120f556223dbabdfe4f03b6976fd2d85ad64-merged.mount: Deactivated successfully. Nov 23 05:03:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-afc74840a72066e4dc42f79a8425d45889d91869080448756d20c230d1f4d79f-userdata-shm.mount: Deactivated successfully. Nov 23 05:03:43 localhost systemd[1]: run-netns-qdhcp\x2d53373f39\x2d43fa\x2d4ec3\x2d9d41\x2d940ffd04ef81.mount: Deactivated successfully. Nov 23 05:03:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 192 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 170 KiB/s rd, 24 KiB/s wr, 235 op/s Nov 23 05:03:43 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:03:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:03:43 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:03:43 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:43.549 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:43 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:03:44 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:44.239 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:44 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:44.284 2 INFO neutron.agent.securitygroups_rpc [None req-71dcb7fc-4cba-40db-b8c8-6bad4f6af9d0 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:03:44 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:44.506 2 INFO neutron.agent.securitygroups_rpc [None req-71dcb7fc-4cba-40db-b8c8-6bad4f6af9d0 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:03:44 localhost nova_compute[280939]: 2025-11-23 10:03:44.580 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:03:44 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1260994540' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:03:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:03:44 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1260994540' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:03:45 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:45.123 2 INFO neutron.agent.securitygroups_rpc [None req-cef19edc-dbb5-4bcd-8945-0c2ead165d91 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:03:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 145 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 11 KiB/s wr, 73 op/s Nov 23 05:03:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:45.629 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:03:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:45.630 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:03:45 localhost nova_compute[280939]: 2025-11-23 10:03:45.655 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:46 localhost nova_compute[280939]: 2025-11-23 10:03:46.155 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:46 localhost dnsmasq[318348]: exiting on receipt of SIGTERM Nov 23 05:03:46 localhost podman[320491]: 2025-11-23 10:03:46.186773766 +0000 UTC m=+0.056309487 container kill 0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-46c3f560-7f49-47ef-94b9-78baf8efb062, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:03:46 localhost systemd[1]: 
libpod-0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3.scope: Deactivated successfully. Nov 23 05:03:46 localhost podman[320506]: 2025-11-23 10:03:46.251809222 +0000 UTC m=+0.053192121 container died 0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-46c3f560-7f49-47ef-94b9-78baf8efb062, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:03:46 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:46.275 2 INFO neutron.agent.securitygroups_rpc [None req-b1ac9adf-2928-4e8c-b93d-9a7afa468620 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:03:46 localhost podman[320506]: 2025-11-23 10:03:46.283290863 +0000 UTC m=+0.084673732 container cleanup 0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-46c3f560-7f49-47ef-94b9-78baf8efb062, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 05:03:46 localhost systemd[1]: libpod-conmon-0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3.scope: Deactivated successfully. 
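The ceph-mon audit entries a few lines back show client.openstack dispatching {"prefix":"df","format":"json"} and {"prefix":"osd pool get-quota","pool":"volumes","format":"json"} to the monitor. A sketch of issuing the same two commands with the python-rados binding follows; it assumes the conventional mon_command(cmd_json, inbuf) signature and that /etc/ceph/ceph.conf plus a client.openstack keyring are readable on the host, and it is not taken from any OpenStack code.

import json
import rados  # python3-rados must be installed (assumption)

def query_volumes_pool(conffile="/etc/ceph/ceph.conf", name="client.openstack"):
    cluster = rados.Rados(conffile=conffile, name=name)
    cluster.connect()
    try:
        for cmd in (
            {"prefix": "df", "format": "json"},
            {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"},
        ):
            # mon_command takes a JSON command string and an input buffer
            ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
            if ret != 0:
                raise RuntimeError(f"{cmd['prefix']} failed: {errs}")
            print(cmd["prefix"], "->", json.loads(outbuf))
    finally:
        cluster.shutdown()

if __name__ == "__main__":
    query_volumes_pool()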
Nov 23 05:03:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:46.313 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:46 localhost podman[320507]: 2025-11-23 10:03:46.336565905 +0000 UTC m=+0.128313657 container remove 0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-46c3f560-7f49-47ef-94b9-78baf8efb062, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 05:03:46 localhost ovn_controller[153771]: 2025-11-23T10:03:46Z|00279|binding|INFO|Releasing lport 3b53fd96-8722-4b83-a91d-d3918b7d0baa from this chassis (sb_readonly=0) Nov 23 05:03:46 localhost nova_compute[280939]: 2025-11-23 10:03:46.349 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:46 localhost ovn_controller[153771]: 2025-11-23T10:03:46Z|00280|binding|INFO|Setting lport 3b53fd96-8722-4b83-a91d-d3918b7d0baa down in Southbound Nov 23 05:03:46 localhost kernel: device tap3b53fd96-87 left promiscuous mode Nov 23 05:03:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:46.363 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-46c3f560-7f49-47ef-94b9-78baf8efb062', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-46c3f560-7f49-47ef-94b9-78baf8efb062', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfdc050c-6921-47f3-9dfe-6bafcffaf199, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3b53fd96-8722-4b83-a91d-d3918b7d0baa) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:03:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:46.365 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 3b53fd96-8722-4b83-a91d-d3918b7d0baa in datapath 46c3f560-7f49-47ef-94b9-78baf8efb062 unbound from our chassis#033[00m Nov 23 05:03:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:46.368 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 46c3f560-7f49-47ef-94b9-78baf8efb062 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:03:46 localhost nova_compute[280939]: 2025-11-23 10:03:46.369 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:46.369 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[1a780e6d-83b5-47b9-bbf3-4457bcb9b439]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:03:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:46.397 262301 INFO neutron.agent.dhcp.agent [None req-dc0dba50-35e5-4a0e-be2f-0eae1128c475 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:03:46.998 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:03:47 localhost podman[239764]: time="2025-11-23T10:03:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:03:47 localhost podman[239764]: @ - - [23/Nov/2025:10:03:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:03:47 localhost podman[239764]: @ - - [23/Nov/2025:10:03:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18732 "" "Go-http-client/1.1" Nov 23 05:03:47 localhost systemd[1]: var-lib-containers-storage-overlay-1fa59b8bb6672052af1f8e26c62183df1ba38beb8a371994b2085065be321eb2-merged.mount: Deactivated successfully. Nov 23 05:03:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0247714e25a110104f74b95eb95b9d1b43b3643d30059313b2125934661dc0f3-userdata-shm.mount: Deactivated successfully. Nov 23 05:03:47 localhost systemd[1]: run-netns-qdhcp\x2d46c3f560\x2d7f49\x2d47ef\x2d94b9\x2d78baf8efb062.mount: Deactivated successfully. 
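The ovn_metadata_agent entries above match PortBindingUpdatedEvent/PortBindingDeletedEvent rows and, when a port such as b54f04b5-c09a-4bc6-b5cd-b239662f2c79 loses its chassis, log "unbound from our chassis" and tear the namespace down. Below is a simplified, self-contained illustration of that row-event pattern; the classes are stand-ins written for this sketch, not the ovsdbapp or neutron API.

from dataclasses import dataclass, field

@dataclass
class PortBinding:
    # minimal stand-in for an OVN southbound Port_Binding row
    logical_port: str
    datapath: str
    up: bool
    chassis: list = field(default_factory=list)

class PortUnboundEvent:
    # stand-in event: fires when a previously-up port loses its chassis binding
    events = ("update",)
    table = "Port_Binding"

    def matches(self, event, row, old):
        return event in self.events and old.up and not row.chassis

    def run(self, event, row, old):
        print(f"Port {row.logical_port} in datapath {row.datapath} "
              f"unbound from our chassis")

if __name__ == "__main__":
    old = PortBinding("b54f04b5-c09a-4bc6-b5cd-b239662f2c79",
                      "53373f39-43fa-4ec3-9d41-940ffd04ef81",
                      up=True, chassis=["np0005532584.localdomain"])
    new = PortBinding(old.logical_port, old.datapath, up=False, chassis=[])
    ev = PortUnboundEvent()
    if ev.matches("update", new, old):
        ev.run("update", new, old)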
Nov 23 05:03:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 145 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 9.3 KiB/s wr, 59 op/s Nov 23 05:03:47 localhost nova_compute[280939]: 2025-11-23 10:03:47.536 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e159 do_prune osdmap full prune enabled Nov 23 05:03:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e160 e160: 6 total, 6 up, 6 in Nov 23 05:03:48 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e160: 6 total, 6 up, 6 in Nov 23 05:03:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 146 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 57 op/s Nov 23 05:03:50 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:50.223 2 INFO neutron.agent.securitygroups_rpc [req-dc022d26-398e-4427-8b9e-d6e32e3174fc req-12998a72-36e8-4adc-96fc-04c6618198f0 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Security group member updated ['d77fc436-3ab1-42e0-a52b-861d18fcc237']#033[00m Nov 23 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:03:50 localhost podman[320535]: 2025-11-23 10:03:50.89784812 +0000 UTC m=+0.083029292 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
org.label-schema.license=GPLv2) Nov 23 05:03:50 localhost podman[320535]: 2025-11-23 10:03:50.906323051 +0000 UTC m=+0.091504213 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true) Nov 23 05:03:50 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. 
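The transient units above ("Started /usr/bin/podman healthcheck run <container-id>", followed by health_status=healthy and exec_died) simply invoke the podman CLI against each managed container. A thin wrapper around that same command is sketched below; treating exit status 0 as healthy is an assumption about podman's convention, not something the log itself states.

import subprocess

def container_healthy(container_id: str) -> bool:
    # run the same command the systemd transient units in the log run
    result = subprocess.run(
        ["podman", "healthcheck", "run", container_id],
        capture_output=True, text=True,
    )
    return result.returncode == 0  # assumption: 0 means the healthcheck passed

if __name__ == "__main__":
    # container id copied from the ovn_metadata_agent healthcheck entry above
    cid = "219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0"
    print("healthy" if container_healthy(cid) else "not healthy")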
Nov 23 05:03:51 localhost nova_compute[280939]: 2025-11-23 10:03:51.206 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 146 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 10 KiB/s wr, 52 op/s Nov 23 05:03:52 localhost nova_compute[280939]: 2025-11-23 10:03:52.563 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:52 localhost ovn_metadata_agent[159410]: 2025-11-23 10:03:52.632 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:03:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:52 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:52.988 2 INFO neutron.agent.securitygroups_rpc [None req-e884606d-3955-464b-8443-536f305941fb 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:03:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:03:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:03:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:03:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:03:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:03:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:03:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 146 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 8.6 KiB/s wr, 44 op/s Nov 23 05:03:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e160 do_prune osdmap full prune enabled Nov 23 05:03:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e161 e161: 6 total, 6 up, 6 in Nov 23 05:03:54 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e161: 6 total, 6 up, 6 in Nov 23 05:03:54 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:54.189 2 INFO neutron.agent.securitygroups_rpc [None req-5a886eea-af54-4cb9-a980-5c3836eff3f1 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0. 
Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.068757) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55 Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235068811, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1345, "num_deletes": 262, "total_data_size": 1853227, "memory_usage": 2006640, "flush_reason": "Manual Compaction"} Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235078674, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 1826635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29509, "largest_seqno": 30853, "table_properties": {"data_size": 1820430, "index_size": 3419, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14676, "raw_average_key_size": 21, "raw_value_size": 1807542, "raw_average_value_size": 2669, "num_data_blocks": 148, "num_entries": 677, "num_filter_entries": 677, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892168, "oldest_key_time": 1763892168, "file_creation_time": 1763892235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}} Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 9958 microseconds, and 5085 cpu microseconds. Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
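The ceph-mon journal interleaves rocksdb's structured EVENT_LOG_v1 JSON with plain log text. The small sketch below pulls those payloads back out of journal lines shaped like the ones above; payloads that were wrapped across lines are simply skipped in this sketch.

import json

MARKER = "EVENT_LOG_v1 "

def parse_rocksdb_events(lines):
    # Extract the JSON object that follows the EVENT_LOG_v1 marker on each line.
    events = []
    for line in lines:
        idx = line.find(MARKER)
        if idx == -1:
            continue
        try:
            events.append(json.loads(line[idx + len(MARKER):]))
        except json.JSONDecodeError:
            pass  # payload wrapped onto another line; ignored in this sketch
    return events

if __name__ == "__main__":
    # trimmed-down copy of the flush_started event logged above (job 31)
    sample = ('ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": '
              '1763892235068811, "job": 31, "event": "flush_started", '
              '"num_entries": 1345, "flush_reason": "Manual Compaction"}')
    print(parse_rocksdb_events([sample]))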
Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.078719) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 1826635 bytes OK Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.078741) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.080598) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.080617) EVENT_LOG_v1 {"time_micros": 1763892235080611, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.080641) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 1846898, prev total WAL file size 1846898, number of live WAL files 2. Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.081365) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end) Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(1783KB)], [54(14MB)] Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235081422, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 16754485, "oldest_snapshot_seqno": -1} Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 12590 keys, 15502435 bytes, temperature: kUnknown Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235156238, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 15502435, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15433582, "index_size": 36304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 340971, "raw_average_key_size": 27, "raw_value_size": 15221716, "raw_average_value_size": 1209, "num_data_blocks": 1345, "num_entries": 12590, "num_filter_entries": 12590, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, 
"comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}} Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.156564) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 15502435 bytes Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.158378) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.7 rd, 207.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 14.2 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(17.7) write-amplify(8.5) OK, records in: 13129, records dropped: 539 output_compression: NoCompression Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.158406) EVENT_LOG_v1 {"time_micros": 1763892235158394, "job": 32, "event": "compaction_finished", "compaction_time_micros": 74897, "compaction_time_cpu_micros": 43837, "output_level": 6, "num_output_files": 1, "total_output_size": 15502435, "num_input_records": 13129, "num_output_records": 12590, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235158827, "job": 32, "event": "table_file_deletion", "file_number": 56} Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235160813, "job": 32, "event": "table_file_deletion", "file_number": 54} Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.081223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.160905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.160911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.160914) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.160917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:03:55 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:03:55.160920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:03:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 192 MiB data, 953 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 2.7 MiB/s wr, 52 op/s Nov 23 05:03:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e161 do_prune osdmap full prune enabled Nov 23 05:03:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e162 e162: 6 total, 6 up, 6 in Nov 23 05:03:56 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e162: 6 total, 6 up, 6 in Nov 23 05:03:56 localhost nova_compute[280939]: 2025-11-23 10:03:56.244 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:03:57 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1518047932' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:03:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:03:57 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1518047932' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:03:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 192 MiB data, 953 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 2.7 MiB/s wr, 52 op/s Nov 23 05:03:57 localhost nova_compute[280939]: 2025-11-23 10:03:57.602 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:03:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
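The compaction summary above (job 32) reports read-write-amplify(17.7), write-amplify(8.5) and 223.7/207.0 MB/s; those figures follow directly from the byte counts logged for files 56, 54 and 57, as this quick arithmetic check shows.

# Re-derive the amplification figures from the logged byte counts (job 32).
l0_input = 1_826_635        # file 56, flushed from the memtable
total_input = 16_754_485    # input_data_size (L0 file 56 + L6 file 54)
output = 15_502_435         # file 57 written back to L6
compaction_us = 74_897      # compaction_time_micros

write_amp = output / l0_input                        # ~8.5  -> write-amplify(8.5)
read_write_amp = (total_input + output) / l0_input   # ~17.7 -> read-write-amplify(17.7)
read_mb_s = total_input / compaction_us              # bytes/us ~= MB/s with 10^6-byte MB
write_mb_s = output / compaction_us                  # ~207.0 wr

print(f"write-amplify={write_amp:.1f} read-write-amplify={read_write_amp:.1f}")
print(f"rd={read_mb_s:.1f} MB/s wr={write_mb_s:.1f} MB/s")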
Nov 23 05:03:58 localhost podman[320552]: 2025-11-23 10:03:58.905095197 +0000 UTC m=+0.084068443 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 05:03:58 localhost systemd[1]: tmp-crun.VIf8wH.mount: Deactivated successfully. 
Nov 23 05:03:58 localhost podman[320553]: 2025-11-23 10:03:58.96548485 +0000 UTC m=+0.140819963 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:03:58 localhost podman[320552]: 2025-11-23 10:03:58.991351107 +0000 UTC m=+0.170324393 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true) Nov 23 05:03:59 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
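The node_exporter healthcheck above shows the container publishing host port 9100 ('ports': ['9100:9100'] in its config_data), so its Prometheus metrics should be reachable locally. A stdlib-only probe is sketched below; the /metrics path is node_exporter's usual default and is assumed here rather than taken from the log.

import urllib.request

def sample_node_metrics(url="http://127.0.0.1:9100/metrics", limit=5):
    # fetch the exporter's text exposition format and return a few samples
    with urllib.request.urlopen(url, timeout=5) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return [l for l in lines if l and not l.startswith("#")][:limit]

if __name__ == "__main__":
    for line in sample_node_metrics():
        print(line)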
Nov 23 05:03:59 localhost podman[320553]: 2025-11-23 10:03:59.00538504 +0000 UTC m=+0.180720123 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 05:03:59 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:03:59 localhost podman[320554]: 2025-11-23 10:03:59.067405282 +0000 UTC m=+0.238643639 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller) Nov 23 05:03:59 localhost neutron_sriov_agent[255165]: 2025-11-23 10:03:59.102 2 INFO neutron.agent.securitygroups_rpc [None req-73f9f53a-edf4-45e5-a635-4120a726bffe f436a64c9a134831a0f528309f399f1d 807b835f4cc944269d2f71f8e519b08a - - default default] Security group member updated ['c2582f3e-b285-4f13-ba8c-38a0c5b47d8d']#033[00m Nov 23 05:03:59 localhost podman[320554]: 2025-11-23 10:03:59.133463499 +0000 UTC m=+0.304701856 
container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:03:59 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:03:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v336: 177 pgs: 177 active+clean; 192 MiB data, 957 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 176 op/s Nov 23 05:04:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:00.109 262301 INFO neutron.agent.linux.ip_lib [None req-3d1d42b4-dde7-48d1-8006-9a02c9b59ae1 - - - - - -] Device tap82dd05be-44 cannot be used as it has no MAC address#033[00m Nov 23 05:04:00 localhost nova_compute[280939]: 2025-11-23 10:04:00.163 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:00 localhost kernel: device tap82dd05be-44 entered promiscuous mode Nov 23 05:04:00 localhost ovn_controller[153771]: 2025-11-23T10:04:00Z|00281|binding|INFO|Claiming lport 82dd05be-449f-4799-abd8-517dd4f37132 for this chassis. Nov 23 05:04:00 localhost ovn_controller[153771]: 2025-11-23T10:04:00Z|00282|binding|INFO|82dd05be-449f-4799-abd8-517dd4f37132: Claiming unknown Nov 23 05:04:00 localhost NetworkManager[5966]: [1763892240.1731] manager: (tap82dd05be-44): new Generic device (/org/freedesktop/NetworkManager/Devices/47) Nov 23 05:04:00 localhost nova_compute[280939]: 2025-11-23 10:04:00.174 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:00 localhost systemd-udevd[320629]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 05:04:00 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:00.190 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-1880dbef-6576-46c9-81a0-ca3be117e6cf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1880dbef-6576-46c9-81a0-ca3be117e6cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87ccbdd-4b16-4455-9662-c93c41d0c58f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=82dd05be-449f-4799-abd8-517dd4f37132) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:00 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:00.192 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 82dd05be-449f-4799-abd8-517dd4f37132 in datapath 1880dbef-6576-46c9-81a0-ca3be117e6cf bound to our chassis#033[00m Nov 23 05:04:00 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:00.194 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1880dbef-6576-46c9-81a0-ca3be117e6cf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:04:00 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:00.195 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd30d2c-fbd5-4cc4-a8f9-d490880bae4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:00 localhost ovn_controller[153771]: 2025-11-23T10:04:00Z|00283|binding|INFO|Setting lport 82dd05be-449f-4799-abd8-517dd4f37132 ovn-installed in OVS Nov 23 05:04:00 localhost ovn_controller[153771]: 2025-11-23T10:04:00Z|00284|binding|INFO|Setting lport 82dd05be-449f-4799-abd8-517dd4f37132 up in Southbound Nov 23 05:04:00 localhost nova_compute[280939]: 2025-11-23 10:04:00.220 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:00 localhost nova_compute[280939]: 2025-11-23 10:04:00.223 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:00 localhost nova_compute[280939]: 2025-11-23 10:04:00.253 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:00 localhost nova_compute[280939]: 2025-11-23 10:04:00.282 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:01 localhost nova_compute[280939]: 2025-11-23 
10:04:01.287 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:01 localhost podman[320684]: Nov 23 05:04:01 localhost podman[320684]: 2025-11-23 10:04:01.431171227 +0000 UTC m=+0.091837753 container create 610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1880dbef-6576-46c9-81a0-ca3be117e6cf, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3) Nov 23 05:04:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 192 MiB data, 957 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 176 op/s Nov 23 05:04:01 localhost systemd[1]: Started libpod-conmon-610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705.scope. Nov 23 05:04:01 localhost systemd[1]: tmp-crun.WI3uKS.mount: Deactivated successfully. Nov 23 05:04:01 localhost podman[320684]: 2025-11-23 10:04:01.387954815 +0000 UTC m=+0.048621371 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:04:01 localhost systemd[1]: Started libcrun container. Nov 23 05:04:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59191ea4693e8762b59611134537597c6f540f3b44d34cc05bf508d0bc4bb68d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:04:01 localhost podman[320684]: 2025-11-23 10:04:01.505785748 +0000 UTC m=+0.166452274 container init 610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1880dbef-6576-46c9-81a0-ca3be117e6cf, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 23 05:04:01 localhost podman[320684]: 2025-11-23 10:04:01.514581529 +0000 UTC m=+0.175248055 container start 610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1880dbef-6576-46c9-81a0-ca3be117e6cf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:04:01 localhost dnsmasq[320702]: started, version 2.85 cachesize 150 Nov 23 05:04:01 localhost dnsmasq[320702]: DNS service limited to local subnets Nov 23 05:04:01 localhost dnsmasq[320702]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:04:01 localhost dnsmasq[320702]: warning: no upstream servers configured Nov 23 05:04:01 
localhost dnsmasq-dhcp[320702]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:04:01 localhost dnsmasq[320702]: read /var/lib/neutron/dhcp/1880dbef-6576-46c9-81a0-ca3be117e6cf/addn_hosts - 0 addresses Nov 23 05:04:01 localhost dnsmasq-dhcp[320702]: read /var/lib/neutron/dhcp/1880dbef-6576-46c9-81a0-ca3be117e6cf/host Nov 23 05:04:01 localhost dnsmasq-dhcp[320702]: read /var/lib/neutron/dhcp/1880dbef-6576-46c9-81a0-ca3be117e6cf/opts Nov 23 05:04:01 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:01.645 262301 INFO neutron.agent.dhcp.agent [None req-d5313f11-0b4d-4ad7-b171-b2f8fffda9f1 - - - - - -] DHCP configuration for ports {'d4bff480-370a-4cdc-87d2-bec890ce6dce'} is completed#033[00m Nov 23 05:04:02 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:02.099 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:01Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9536eba8-2f8d-4d39-9134-3fac44d11ed2, ip_allocation=immediate, mac_address=fa:16:3e:23:86:b2, name=tempest-PortsTestJSON-1464311609, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:57Z, description=, dns_domain=, id=1880dbef-6576-46c9-81a0-ca3be117e6cf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1046197678, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57352, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2414, status=ACTIVE, subnets=['958b2c59-2e28-46eb-974f-449e04baf28a'], tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:03:58Z, vlan_transparent=None, network_id=1880dbef-6576-46c9-81a0-ca3be117e6cf, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2448, status=DOWN, tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:01Z on network 1880dbef-6576-46c9-81a0-ca3be117e6cf#033[00m Nov 23 05:04:02 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:02.268 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:02 localhost dnsmasq[320702]: read /var/lib/neutron/dhcp/1880dbef-6576-46c9-81a0-ca3be117e6cf/addn_hosts - 1 addresses Nov 23 05:04:02 localhost dnsmasq-dhcp[320702]: read /var/lib/neutron/dhcp/1880dbef-6576-46c9-81a0-ca3be117e6cf/host Nov 23 05:04:02 localhost dnsmasq-dhcp[320702]: read /var/lib/neutron/dhcp/1880dbef-6576-46c9-81a0-ca3be117e6cf/opts Nov 23 05:04:02 localhost podman[320719]: 2025-11-23 10:04:02.338142533 +0000 UTC m=+0.055421850 container kill 610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1880dbef-6576-46c9-81a0-ca3be117e6cf, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:04:02 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:02.621 262301 INFO neutron.agent.dhcp.agent [None req-1761b233-3e13-4f5d-8122-9a80576c4df1 - - - - - -] DHCP configuration for ports {'9536eba8-2f8d-4d39-9134-3fac44d11ed2'} is completed#033[00m Nov 23 05:04:02 localhost nova_compute[280939]: 2025-11-23 10:04:02.637 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:02 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:04:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e162 do_prune osdmap full prune enabled Nov 23 05:04:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e163 e163: 6 total, 6 up, 6 in Nov 23 05:04:03 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e163: 6 total, 6 up, 6 in Nov 23 05:04:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v339: 177 pgs: 177 active+clean; 192 MiB data, 957 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 23 KiB/s wr, 124 op/s Nov 23 05:04:03 localhost dnsmasq[320702]: read /var/lib/neutron/dhcp/1880dbef-6576-46c9-81a0-ca3be117e6cf/addn_hosts - 0 addresses Nov 23 05:04:03 localhost dnsmasq-dhcp[320702]: read /var/lib/neutron/dhcp/1880dbef-6576-46c9-81a0-ca3be117e6cf/host Nov 23 05:04:03 localhost dnsmasq-dhcp[320702]: read /var/lib/neutron/dhcp/1880dbef-6576-46c9-81a0-ca3be117e6cf/opts Nov 23 05:04:03 localhost podman[320755]: 2025-11-23 10:04:03.611458524 +0000 UTC m=+0.062489137 container kill 610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1880dbef-6576-46c9-81a0-ca3be117e6cf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118) Nov 23 05:04:03 localhost systemd[1]: tmp-crun.fTQFsL.mount: Deactivated successfully. Nov 23 05:04:05 localhost dnsmasq[320702]: exiting on receipt of SIGTERM Nov 23 05:04:05 localhost systemd[1]: libpod-610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705.scope: Deactivated successfully. Nov 23 05:04:05 localhost podman[320792]: 2025-11-23 10:04:05.342705796 +0000 UTC m=+0.055618875 container kill 610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1880dbef-6576-46c9-81a0-ca3be117e6cf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:04:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
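The dnsmasq[320702] lines above show the per-network files the neutron DHCP agent writes under /var/lib/neutron/dhcp/<network_id>/ and that dnsmasq re-reads each time the agent signals it (the "container kill" entries): "read .../addn_hosts - N addresses". A minimal sketch of counting those entries directly, with the network UUID and file names taken from the log lines themselves:

```python
#!/usr/bin/env python3
"""Minimal sketch: count the entries dnsmasq reports for a qdhcp network.

The network UUID and the /var/lib/neutron/dhcp/<network_id>/ layout come from
the journal lines above; addn_hosts, host and opts are read here as plain
text, one entry per non-empty line.
"""
from pathlib import Path

NETWORK = "1880dbef-6576-46c9-81a0-ca3be117e6cf"   # from the dnsmasq log lines
BASE = Path("/var/lib/neutron/dhcp") / NETWORK

def count_entries(name: str) -> int:
    path = BASE / name
    if not path.is_file():
        return 0
    return sum(1 for line in path.read_text().splitlines() if line.strip())

# Mirrors "read .../addn_hosts - N addresses" from dnsmasq's own output.
for filename in ("addn_hosts", "host", "opts"):
    print(f"{filename}: {count_entries(filename)} entries")
```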
Nov 23 05:04:05 localhost podman[320807]: 2025-11-23 10:04:05.420459604 +0000 UTC m=+0.054707857 container died 610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1880dbef-6576-46c9-81a0-ca3be117e6cf, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:04:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705-userdata-shm.mount: Deactivated successfully. Nov 23 05:04:05 localhost systemd[1]: var-lib-containers-storage-overlay-59191ea4693e8762b59611134537597c6f540f3b44d34cc05bf508d0bc4bb68d-merged.mount: Deactivated successfully. Nov 23 05:04:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 192 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 20 KiB/s wr, 123 op/s Nov 23 05:04:05 localhost podman[320807]: 2025-11-23 10:04:05.51759267 +0000 UTC m=+0.151840943 container remove 610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1880dbef-6576-46c9-81a0-ca3be117e6cf, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 23 05:04:05 localhost systemd[1]: libpod-conmon-610c27dce3ccb089ecefb040844f5548f1c8477f51ceb35f53d85d38ad874705.scope: Deactivated successfully. 
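The ceph-mgr pgmap lines ("177 pgs: 177 active+clean; ... 41 GiB / 42 GiB avail") and the mon's "osdmap e163: 6 total, 6 up, 6 in" summarize cluster state once per tick. A minimal sketch of pulling the same numbers on demand, assuming the ceph CLI and a readable client keyring are present on this node (it runs ceph-mon and ceph-mgr per the journal); the JSON field names below match recent Ceph releases and may be nested differently on older ones:

```python
#!/usr/bin/env python3
"""Minimal sketch: reproduce the pgmap/osdmap summary logged by ceph-mgr above.

Assumes the ceph CLI with a usable keyring; 'ceph -s --format json' is a
standard command, but the exact field layout varies somewhat between releases.
"""
import json
import subprocess

status = json.loads(subprocess.run(
    ["ceph", "-s", "--format", "json"],
    capture_output=True, text=True, check=True).stdout)

pgmap = status["pgmap"]
osdmap = status["osdmap"]
print(f"{pgmap['num_pgs']} pgs, {pgmap['data_bytes'] / 2**20:.0f} MiB data, "
      f"{pgmap['bytes_used'] / 2**20:.0f} MiB used")
print(f"osds: {osdmap['num_osds']} total, {osdmap['num_up_osds']} up, "
      f"{osdmap['num_in_osds']} in")
```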
Nov 23 05:04:05 localhost ovn_controller[153771]: 2025-11-23T10:04:05Z|00285|binding|INFO|Releasing lport 82dd05be-449f-4799-abd8-517dd4f37132 from this chassis (sb_readonly=0) Nov 23 05:04:05 localhost ovn_controller[153771]: 2025-11-23T10:04:05Z|00286|binding|INFO|Setting lport 82dd05be-449f-4799-abd8-517dd4f37132 down in Southbound Nov 23 05:04:05 localhost kernel: device tap82dd05be-44 left promiscuous mode Nov 23 05:04:05 localhost nova_compute[280939]: 2025-11-23 10:04:05.537 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:05 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:05.545 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-1880dbef-6576-46c9-81a0-ca3be117e6cf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1880dbef-6576-46c9-81a0-ca3be117e6cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87ccbdd-4b16-4455-9662-c93c41d0c58f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=82dd05be-449f-4799-abd8-517dd4f37132) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:05 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:05.547 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 82dd05be-449f-4799-abd8-517dd4f37132 in datapath 1880dbef-6576-46c9-81a0-ca3be117e6cf unbound from our chassis#033[00m Nov 23 05:04:05 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:05.549 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1880dbef-6576-46c9-81a0-ca3be117e6cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:04:05 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:05.550 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0f411b-0474-434a-ac25-c9c7f157eb2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:05 localhost nova_compute[280939]: 2025-11-23 10:04:05.562 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:05 localhost podman[320813]: 2025-11-23 10:04:05.563979809 +0000 UTC m=+0.192521726 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, 
health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc.) 
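A few entries back, ovn_controller released lport 82dd05be-449f-4799-abd8-517dd4f37132 and the metadata agent matched the corresponding Port_Binding update (chassis emptied, up=[False]). A minimal sketch of looking at that row directly, assuming ovn-sbctl is installed and can reach the Southbound DB from this node (an explicit --db=<remote> may be needed on an EDPM compute and is left out here); the logical port name is copied from the log:

```python
#!/usr/bin/env python3
"""Minimal sketch: inspect the Port_Binding row the metadata agent matched above.

Assumes ovn-sbctl can reach the Southbound DB from this host; 'find' with a
column=value filter and --columns are standard ovsdb CLI options. The logical
port UUID is the one from the "Releasing lport ..." journal entries.
"""
import subprocess

LPORT = "82dd05be-449f-4799-abd8-517dd4f37132"

cmd = ["ovn-sbctl", "--columns=logical_port,chassis,up,external_ids",
       "find", "Port_Binding", f"logical_port={LPORT}"]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout or result.stderr)
```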
Nov 23 05:04:05 localhost podman[320813]: 2025-11-23 10:04:05.578388854 +0000 UTC m=+0.206930821 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9) Nov 23 05:04:05 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 05:04:05 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:05.885 262301 INFO neutron.agent.dhcp.agent [None req-2b302958-4b3b-4ea4-892f-289bdbdaeef7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:05 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:05.986 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:06 localhost nova_compute[280939]: 2025-11-23 10:04:06.318 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:06 localhost systemd[1]: run-netns-qdhcp\x2d1880dbef\x2d6576\x2d46c9\x2d81a0\x2dca3be117e6cf.mount: Deactivated successfully. Nov 23 05:04:06 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:06.376 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:06 localhost openstack_network_exporter[241732]: ERROR 10:04:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:04:06 localhost openstack_network_exporter[241732]: ERROR 10:04:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:04:06 localhost openstack_network_exporter[241732]: ERROR 10:04:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:04:06 localhost openstack_network_exporter[241732]: ERROR 10:04:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:04:06 localhost openstack_network_exporter[241732]: Nov 23 05:04:06 localhost openstack_network_exporter[241732]: ERROR 10:04:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:04:06 localhost openstack_network_exporter[241732]: Nov 23 05:04:07 localhost nova_compute[280939]: 2025-11-23 10:04:07.407 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v341: 177 pgs: 177 active+clean; 192 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 115 op/s Nov 23 05:04:07 localhost nova_compute[280939]: 2025-11-23 10:04:07.640 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:07 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:07.665 2 INFO neutron.agent.securitygroups_rpc [None req-cd03e682-7688-4e93-ac2d-e601f5fc3971 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:04:08 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:08.165 2 INFO neutron.agent.securitygroups_rpc [None req-3d60f928-a89d-481c-a25d-e6417d1d55cf 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:08 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:08.215 262301 INFO neutron.agent.dhcp.agent [-] Network not 
present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:08 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:08.739 2 INFO neutron.agent.securitygroups_rpc [None req-4c8f3bc2-c43f-4c06-bcd4-666015157129 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:08 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:08.955 2 INFO neutron.agent.securitygroups_rpc [None req-fdb6f567-90f0-41d5-acb8-83f08adab1b1 f436a64c9a134831a0f528309f399f1d 807b835f4cc944269d2f71f8e519b08a - - default default] Security group member updated ['c2582f3e-b285-4f13-ba8c-38a0c5b47d8d']#033[00m Nov 23 05:04:09 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:09.336 2 INFO neutron.agent.securitygroups_rpc [None req-b9ca6263-bc29-4379-89bf-449c3fc12e0d 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:09 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:09.395 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 213 MiB data, 973 MiB used, 41 GiB / 42 GiB avail; 693 KiB/s rd, 2.4 MiB/s wr, 50 op/s Nov 23 05:04:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:09.745 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:04:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:09.745 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:04:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:09.746 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:04:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:04:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
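The openstack_network_exporter errors a little earlier ("no control socket files found for the ovs db server", "no control socket files found for ovn-northd") mean that no matching *.ctl appctl sockets exist in the directories the container mounts; ovn-northd does not run on a compute node at all. A minimal sketch of checking those directories from the host, using the volume paths from the exporter's config_data:

```python
#!/usr/bin/env python3
"""Minimal sketch: list the appctl control sockets the exporter looks for.

The two directories mirror the exporter's volume mounts from config_data
(/var/run/openvswitch mapped to /run/openvswitch, /var/lib/openvswitch/ovn
mapped to /run/ovn); *.ctl is the usual ovs-appctl/ovn-appctl socket naming.
"""
from pathlib import Path

for directory in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
    path = Path(directory)
    sockets = sorted(p.name for p in path.glob("*.ctl")) if path.is_dir() else []
    print(f"{directory}: {', '.join(sockets) if sockets else 'no *.ctl sockets found'}")
```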
Nov 23 05:04:09 localhost podman[320853]: 2025-11-23 10:04:09.897514781 +0000 UTC m=+0.083577478 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:09 localhost podman[320853]: 2025-11-23 10:04:09.908471449 +0000 UTC m=+0.094534176 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm) Nov 23 05:04:09 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:04:09 localhost podman[320854]: 2025-11-23 10:04:09.996953058 +0000 UTC m=+0.178769124 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:04:10 localhost podman[320854]: 2025-11-23 10:04:10.005439139 +0000 UTC m=+0.187255205 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 05:04:10 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
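podman_exporter above publishes on host port 9882 and reads the Podman API over unix:///run/podman/podman.sock, both taken from its config_data. A minimal sketch of confirming the scrape endpoint answers, without assuming any particular metric names (they vary by exporter version):

```python
#!/usr/bin/env python3
"""Minimal sketch: confirm the podman_exporter endpoint from the log answers.

Port 9882 comes from the 'ports' entry in config_data; only the generic
'# HELP' lines of the Prometheus exposition format are inspected, so no
exporter-specific metric names are assumed.
"""
import urllib.request

URL = "http://localhost:9882/metrics"

with urllib.request.urlopen(URL, timeout=5) as resp:
    lines = resp.read().decode("utf-8", errors="replace").splitlines()

families = [line for line in lines if line.startswith("# HELP")]
print(f"{len(families)} metric families exported; first few:")
for line in families[:5]:
    print(line)
```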
Nov 23 05:04:10 localhost nova_compute[280939]: 2025-11-23 10:04:10.155 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:04:10 localhost nova_compute[280939]: 2025-11-23 10:04:10.155 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:04:10 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:10.168 2 INFO neutron.agent.securitygroups_rpc [None req-aba9c038-400a-4d01-8bf0-588461edf0a1 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:11 localhost nova_compute[280939]: 2025-11-23 10:04:11.319 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 213 MiB data, 973 MiB used, 41 GiB / 42 GiB avail; 692 KiB/s rd, 2.4 MiB/s wr, 50 op/s Nov 23 05:04:12 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:12.003 2 INFO neutron.agent.securitygroups_rpc [None req-b1d88626-831d-4bca-895f-9342c26bbcc2 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:12 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:12.032 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:12 localhost nova_compute[280939]: 2025-11-23 10:04:12.641 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:04:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 213 MiB data, 973 MiB used, 41 GiB / 42 GiB avail; 663 KiB/s rd, 2.3 MiB/s wr, 48 op/s Nov 23 05:04:14 localhost nova_compute[280939]: 2025-11-23 10:04:14.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:04:14 localhost nova_compute[280939]: 2025-11-23 10:04:14.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:04:14 localhost nova_compute[280939]: 2025-11-23 10:04:14.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:04:14 localhost nova_compute[280939]: 2025-11-23 10:04:14.156 280943 DEBUG nova.compute.manager 
[None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:04:15 localhost nova_compute[280939]: 2025-11-23 10:04:15.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:04:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 225 MiB data, 1017 MiB used, 41 GiB / 42 GiB avail; 777 KiB/s rd, 2.1 MiB/s wr, 77 op/s Nov 23 05:04:15 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:15.760 262301 INFO neutron.agent.linux.ip_lib [None req-df523faf-265d-4c1e-9b82-85ae1084d6e8 - - - - - -] Device tap3f411789-3f cannot be used as it has no MAC address#033[00m Nov 23 05:04:15 localhost nova_compute[280939]: 2025-11-23 10:04:15.784 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:15 localhost kernel: device tap3f411789-3f entered promiscuous mode Nov 23 05:04:15 localhost NetworkManager[5966]: [1763892255.7935] manager: (tap3f411789-3f): new Generic device (/org/freedesktop/NetworkManager/Devices/48) Nov 23 05:04:15 localhost ovn_controller[153771]: 2025-11-23T10:04:15Z|00287|binding|INFO|Claiming lport 3f411789-3f6d-4f18-915e-3814f4c2617f for this chassis. Nov 23 05:04:15 localhost ovn_controller[153771]: 2025-11-23T10:04:15Z|00288|binding|INFO|3f411789-3f6d-4f18-915e-3814f4c2617f: Claiming unknown Nov 23 05:04:15 localhost systemd-udevd[320907]: Network interface NamePolicy= disabled on kernel command line. 
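Throughout this stretch the OVN metadata agent keeps deciding whether to tear namespaces down ("tearing the namespace down if needed"), and systemd cleans up a run-netns-qdhcp-... mount once a network is deleted. A minimal sketch of listing which DHCP/metadata namespaces currently exist on the node; 'ip netns list' is stock iproute2, and the qdhcp-/ovnmeta- prefixes are the conventional names created by the neutron DHCP agent and the OVN metadata agent:

```python
#!/usr/bin/env python3
"""Minimal sketch: list the qdhcp-*/ovnmeta-* namespaces referenced above.

'ip netns list' is standard iproute2; the namespace name is the first
whitespace-separated token (the optional "(id: N)" suffix is dropped).
"""
import subprocess

out = subprocess.run(["ip", "netns", "list"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    tokens = line.split()
    if tokens and tokens[0].startswith(("qdhcp-", "ovnmeta-")):
        print(tokens[0])
```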
Nov 23 05:04:15 localhost nova_compute[280939]: 2025-11-23 10:04:15.797 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:15 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:15.810 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-98710900-4c46-474d-8df7-759682420b6f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98710900-4c46-474d-8df7-759682420b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8633d61c76748a7a900f3c8cea84ef3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d8bc8af-8f44-4505-aef8-449a6bd992f6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3f411789-3f6d-4f18-915e-3814f4c2617f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:15 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:15.812 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 3f411789-3f6d-4f18-915e-3814f4c2617f in datapath 98710900-4c46-474d-8df7-759682420b6f bound to our chassis#033[00m Nov 23 05:04:15 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:15.815 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port 33e9e2ac-8eb6-4080-8241-972cc1b1deb8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:04:15 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:15.815 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 98710900-4c46-474d-8df7-759682420b6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:04:15 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:15.815 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[87038d8a-751f-4d07-a70a-eeacd0c7d5dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:15 localhost ovn_controller[153771]: 2025-11-23T10:04:15Z|00289|binding|INFO|Setting lport 3f411789-3f6d-4f18-915e-3814f4c2617f ovn-installed in OVS Nov 23 05:04:15 localhost ovn_controller[153771]: 2025-11-23T10:04:15Z|00290|binding|INFO|Setting lport 3f411789-3f6d-4f18-915e-3814f4c2617f up in Southbound Nov 23 05:04:15 localhost nova_compute[280939]: 2025-11-23 10:04:15.821 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:15 localhost nova_compute[280939]: 2025-11-23 10:04:15.842 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:15 localhost nova_compute[280939]: 2025-11-23 10:04:15.882 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:15 localhost nova_compute[280939]: 2025-11-23 10:04:15.909 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:16 localhost nova_compute[280939]: 2025-11-23 10:04:16.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:04:16 localhost nova_compute[280939]: 2025-11-23 10:04:16.320 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:16 localhost podman[320961]: Nov 23 05:04:16 localhost podman[320961]: 2025-11-23 10:04:16.868295222 +0000 UTC m=+0.095225067 container create 0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-98710900-4c46-474d-8df7-759682420b6f, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 05:04:16 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:16.872 262301 INFO neutron.agent.linux.ip_lib [None req-fa76d3e1-3ff0-451b-a5ee-121acdd770c4 - - - - - -] Device tap79f2471d-b1 cannot be used as it has no MAC address#033[00m Nov 23 05:04:16 localhost nova_compute[280939]: 2025-11-23 10:04:16.899 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:16 localhost kernel: device tap79f2471d-b1 entered promiscuous mode Nov 23 05:04:16 localhost systemd-udevd[320909]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:04:16 localhost NetworkManager[5966]: [1763892256.9065] manager: (tap79f2471d-b1): new Generic device (/org/freedesktop/NetworkManager/Devices/49) Nov 23 05:04:16 localhost nova_compute[280939]: 2025-11-23 10:04:16.906 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:16 localhost ovn_controller[153771]: 2025-11-23T10:04:16Z|00291|binding|INFO|Claiming lport 79f2471d-b1b5-4f3b-a194-04c5e4ea61a2 for this chassis. Nov 23 05:04:16 localhost systemd[1]: Started libpod-conmon-0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708.scope. 
Nov 23 05:04:16 localhost ovn_controller[153771]: 2025-11-23T10:04:16Z|00292|binding|INFO|79f2471d-b1b5-4f3b-a194-04c5e4ea61a2: Claiming unknown Nov 23 05:04:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:16.917 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-ec797e03-2fed-45fd-bc13-a1751e1f8db9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec797e03-2fed-45fd-bc13-a1751e1f8db9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27ceae55c44d478998092e7554fd8a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84385636-7359-43cd-8be7-3201761b9734, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=79f2471d-b1b5-4f3b-a194-04c5e4ea61a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:16.919 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 79f2471d-b1b5-4f3b-a194-04c5e4ea61a2 in datapath ec797e03-2fed-45fd-bc13-a1751e1f8db9 bound to our chassis#033[00m Nov 23 05:04:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:16.921 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ec797e03-2fed-45fd-bc13-a1751e1f8db9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:04:16 localhost podman[320961]: 2025-11-23 10:04:16.823838622 +0000 UTC m=+0.050768497 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:04:16 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:16.922 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[f2cc9b08-2d12-455b-8a45-ba9e535915d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:16 localhost systemd[1]: Started libcrun container. 
Nov 23 05:04:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e913db41ba12a4abc916596825206a11e3aa29ee168cd74389b5bcb1878d17e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:04:16 localhost ovn_controller[153771]: 2025-11-23T10:04:16Z|00293|binding|INFO|Setting lport 79f2471d-b1b5-4f3b-a194-04c5e4ea61a2 ovn-installed in OVS Nov 23 05:04:16 localhost ovn_controller[153771]: 2025-11-23T10:04:16Z|00294|binding|INFO|Setting lport 79f2471d-b1b5-4f3b-a194-04c5e4ea61a2 up in Southbound Nov 23 05:04:16 localhost nova_compute[280939]: 2025-11-23 10:04:16.948 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:16 localhost podman[320961]: 2025-11-23 10:04:16.95385308 +0000 UTC m=+0.180782935 container init 0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-98710900-4c46-474d-8df7-759682420b6f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:04:16 localhost podman[320961]: 2025-11-23 10:04:16.965908381 +0000 UTC m=+0.192838226 container start 0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-98710900-4c46-474d-8df7-759682420b6f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:04:16 localhost dnsmasq[320994]: started, version 2.85 cachesize 150 Nov 23 05:04:16 localhost dnsmasq[320994]: DNS service limited to local subnets Nov 23 05:04:16 localhost dnsmasq[320994]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:04:16 localhost dnsmasq[320994]: warning: no upstream servers configured Nov 23 05:04:16 localhost dnsmasq-dhcp[320994]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:04:16 localhost dnsmasq[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/addn_hosts - 0 addresses Nov 23 05:04:16 localhost dnsmasq-dhcp[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/host Nov 23 05:04:16 localhost dnsmasq-dhcp[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/opts Nov 23 05:04:16 localhost nova_compute[280939]: 2025-11-23 10:04:16.985 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:17 localhost nova_compute[280939]: 2025-11-23 10:04:17.012 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:17 localhost podman[239764]: time="2025-11-23T10:04:17Z" level=info msg="List containers: received `last` parameter - 
overwriting `limit`" Nov 23 05:04:17 localhost podman[239764]: @ - - [23/Nov/2025:10:04:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156323 "" "Go-http-client/1.1" Nov 23 05:04:17 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:17.113 262301 INFO neutron.agent.dhcp.agent [None req-d2d331aa-6384-4e6e-ba8a-9ee0a37ce70f - - - - - -] DHCP configuration for ports {'1eb260db-2a37-4332-ae56-901810cdb169'} is completed#033[00m Nov 23 05:04:17 localhost podman[239764]: @ - - [23/Nov/2025:10:04:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19206 "" "Go-http-client/1.1" Nov 23 05:04:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e163 do_prune osdmap full prune enabled Nov 23 05:04:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e164 e164: 6 total, 6 up, 6 in Nov 23 05:04:17 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e164: 6 total, 6 up, 6 in Nov 23 05:04:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 225 MiB data, 1017 MiB used, 41 GiB / 42 GiB avail; 429 KiB/s rd, 2.6 MiB/s wr, 77 op/s Nov 23 05:04:17 localhost nova_compute[280939]: 2025-11-23 10:04:17.646 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:17 localhost podman[321043]: Nov 23 05:04:17 localhost podman[321043]: 2025-11-23 10:04:17.808910495 +0000 UTC m=+0.092233715 container create f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ec797e03-2fed-45fd-bc13-a1751e1f8db9, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:04:17 localhost systemd[1]: Started libpod-conmon-f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0.scope. Nov 23 05:04:17 localhost systemd[1]: Started libcrun container. 
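The podman records above show its REST service answering GET /v4.9.3/libpod/containers/json over the local API socket. A hedged sketch of issuing the same request (abridged query string) with curl --unix-socket; the socket path /run/podman/podman.sock and the response field names are assumptions and may differ on this host or Podman version.

```python
import json
import subprocess

# Same endpoint as the "GET /v4.9.3/libpod/containers/json?all=true..." line above.
# The socket path is an assumption; adjust to the service's actual listen path.
SOCKET = "/run/podman/podman.sock"
URL = "http://d/v4.9.3/libpod/containers/json?all=true"

out = subprocess.run(
    ["curl", "-s", "--unix-socket", SOCKET, URL],
    check=True, capture_output=True, text=True,
).stdout

for ctr in json.loads(out):
    # Field names assumed from the libpod list-containers response.
    print(ctr.get("Id", "")[:12], ctr.get("Names"), ctr.get("State"))
```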
Nov 23 05:04:17 localhost podman[321043]: 2025-11-23 10:04:17.764864887 +0000 UTC m=+0.048188157 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:04:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2a919793a908292ba803fb646f1099a6b7d53e44cbbfec6c30d541831dee1c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:04:17 localhost podman[321043]: 2025-11-23 10:04:17.916132512 +0000 UTC m=+0.199455732 container init f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ec797e03-2fed-45fd-bc13-a1751e1f8db9, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 05:04:17 localhost nova_compute[280939]: 2025-11-23 10:04:17.916 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:17 localhost podman[321043]: 2025-11-23 10:04:17.924789528 +0000 UTC m=+0.208112768 container start f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ec797e03-2fed-45fd-bc13-a1751e1f8db9, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:04:17 localhost dnsmasq[321062]: started, version 2.85 cachesize 150 Nov 23 05:04:17 localhost dnsmasq[321062]: DNS service limited to local subnets Nov 23 05:04:17 localhost dnsmasq[321062]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:04:17 localhost dnsmasq[321062]: warning: no upstream servers configured Nov 23 05:04:17 localhost dnsmasq-dhcp[321062]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:04:17 localhost dnsmasq[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/addn_hosts - 0 addresses Nov 23 05:04:17 localhost dnsmasq-dhcp[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/host Nov 23 05:04:17 localhost dnsmasq-dhcp[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/opts Nov 23 05:04:17 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:17.981 262301 INFO neutron.agent.dhcp.agent [None req-fa76d3e1-3ff0-451b-a5ee-121acdd770c4 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:16Z, description=, device_id=6f4f97fd-43ff-46f9-88a9-6d80baeae99e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=477b5a64-a010-422b-b200-affa7712e99a, ip_allocation=immediate, mac_address=fa:16:3e:37:e6:87, name=, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:04:14Z, description=, dns_domain=, id=ec797e03-2fed-45fd-bc13-a1751e1f8db9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1974612455, port_security_enabled=True, project_id=cd27ceae55c44d478998092e7554fd8a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15203, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2509, status=ACTIVE, subnets=['fc17c1fe-c134-4c3c-8b6a-0d62638bc843'], tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:04:15Z, vlan_transparent=None, network_id=ec797e03-2fed-45fd-bc13-a1751e1f8db9, port_security_enabled=False, project_id=cd27ceae55c44d478998092e7554fd8a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2532, status=DOWN, tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:04:16Z on network ec797e03-2fed-45fd-bc13-a1751e1f8db9#033[00m Nov 23 05:04:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:04:18 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:18.071 262301 INFO neutron.agent.dhcp.agent [None req-db1b9de7-ecef-4c1a-9e9c-d9a59cc82544 - - - - - -] DHCP configuration for ports {'a09a6d91-fd17-40ac-9506-bfcc826e42eb'} is completed#033[00m Nov 23 05:04:18 localhost nova_compute[280939]: 2025-11-23 10:04:18.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:04:18 localhost nova_compute[280939]: 2025-11-23 10:04:18.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:04:18 localhost dnsmasq[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/addn_hosts - 1 addresses Nov 23 05:04:18 localhost dnsmasq-dhcp[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/host Nov 23 05:04:18 localhost dnsmasq-dhcp[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/opts Nov 23 05:04:18 localhost podman[321079]: 2025-11-23 10:04:18.158725121 +0000 UTC m=+0.060521606 container kill f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ec797e03-2fed-45fd-bc13-a1751e1f8db9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true) Nov 23 05:04:18 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:18.313 262301 INFO neutron.agent.dhcp.agent [None req-fa76d3e1-3ff0-451b-a5ee-121acdd770c4 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:16Z, description=, device_id=6f4f97fd-43ff-46f9-88a9-6d80baeae99e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=477b5a64-a010-422b-b200-affa7712e99a, ip_allocation=immediate, mac_address=fa:16:3e:37:e6:87, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:04:14Z, description=, dns_domain=, id=ec797e03-2fed-45fd-bc13-a1751e1f8db9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1974612455, port_security_enabled=True, project_id=cd27ceae55c44d478998092e7554fd8a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15203, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2509, status=ACTIVE, subnets=['fc17c1fe-c134-4c3c-8b6a-0d62638bc843'], tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:04:15Z, vlan_transparent=None, network_id=ec797e03-2fed-45fd-bc13-a1751e1f8db9, port_security_enabled=False, project_id=cd27ceae55c44d478998092e7554fd8a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2532, status=DOWN, tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:04:16Z on network ec797e03-2fed-45fd-bc13-a1751e1f8db9#033[00m Nov 23 05:04:18 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:18.493 262301 INFO neutron.agent.dhcp.agent [None req-2eb02633-17a0-44cd-af75-9756fdc004c0 - - - - - -] DHCP configuration for ports {'477b5a64-a010-422b-b200-affa7712e99a'} is completed#033[00m Nov 23 05:04:18 localhost dnsmasq[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/addn_hosts - 1 addresses Nov 23 05:04:18 localhost dnsmasq-dhcp[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/host Nov 23 05:04:18 localhost podman[321117]: 2025-11-23 10:04:18.504772192 +0000 UTC 
m=+0.059309960 container kill f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ec797e03-2fed-45fd-bc13-a1751e1f8db9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:04:18 localhost dnsmasq-dhcp[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/opts Nov 23 05:04:19 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:19.012 262301 INFO neutron.agent.dhcp.agent [None req-c3777cd2-d2f0-4b73-aa2c-09e6c3e8a41b - - - - - -] DHCP configuration for ports {'477b5a64-a010-422b-b200-affa7712e99a'} is completed#033[00m Nov 23 05:04:19 localhost nova_compute[280939]: 2025-11-23 10:04:19.129 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:04:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 225 MiB data, 1017 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 133 KiB/s wr, 79 op/s Nov 23 05:04:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e164 do_prune osdmap full prune enabled Nov 23 05:04:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e165 e165: 6 total, 6 up, 6 in Nov 23 05:04:19 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e165: 6 total, 6 up, 6 in Nov 23 05:04:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:19.676 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:9f:5c 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a81c63d1-c197-41eb-93f7-be983c9ed80d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b87e3e64-b6cc-4f08-95a6-de593e031494) old=Port_Binding(mac=['fa:16:3e:af:9f:5c 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:19.678 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b87e3e64-b6cc-4f08-95a6-de593e031494 in datapath accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e updated#033[00m Nov 23 05:04:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:19.681 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:04:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:19.682 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[e580ae6b-1914-4844-a008-8facd7d14d36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:20 localhost nova_compute[280939]: 2025-11-23 10:04:20.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:04:21 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:21.137 2 INFO neutron.agent.securitygroups_rpc [None req-b505d753-a321-4285-8b8a-57c320b6a991 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.153 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.154 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.154 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.154 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) 
update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.155 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.354 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 225 MiB data, 1017 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 166 KiB/s wr, 99 op/s Nov 23 05:04:21 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e165 do_prune osdmap full prune enabled Nov 23 05:04:21 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e166 e166: 6 total, 6 up, 6 in Nov 23 05:04:21 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e166: 6 total, 6 up, 6 in Nov 23 05:04:21 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7e0a27d8-81b8-442f-a3db-fa38a09d28ae", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:04:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:04:21 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:04:21 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3320085940' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.758 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:04:21 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta.tmp' Nov 23 05:04:21 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta.tmp' to config b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta' Nov 23 05:04:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:04:21 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7e0a27d8-81b8-442f-a3db-fa38a09d28ae", "format": "json"}]: dispatch Nov 23 05:04:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:04:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:04:21 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:04:21 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:04:21 localhost systemd[1]: tmp-crun.N30942.mount: Deactivated successfully. 
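The nova_compute records above show the resource tracker shelling out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the Ceph-backed storage. A minimal sketch, assuming the same CLI and credentials are available, of running that command and reading the cluster-wide totals; the exact JSON key names are assumed from recent Ceph releases and may vary.

```python
import json
import subprocess

# Mirrors the command the nova resource tracker logs above.
cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
df = json.loads(subprocess.run(cmd, check=True, capture_output=True,
                               text=True).stdout)

stats = df.get("stats", {})  # key names assumed, not taken from this log
total = stats.get("total_bytes", 0)
avail = stats.get("total_avail_bytes", 0)
print(f"cluster: {total / 2**30:.1f} GiB total, {avail / 2**30:.1f} GiB free")
```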
Nov 23 05:04:21 localhost podman[321160]: 2025-11-23 10:04:21.902365084 +0000 UTC m=+0.089738487 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:04:21 localhost podman[321160]: 2025-11-23 10:04:21.937766195 +0000 UTC m=+0.125139658 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent) Nov 23 05:04:21 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.995 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.996 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11523MB free_disk=41.70033645629883GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.997 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:04:21 localhost nova_compute[280939]: 2025-11-23 10:04:21.997 280943 DEBUG oslo_concurrency.lockutils [None 
req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.069 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.069 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:04:22 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:22.088 2 INFO neutron.agent.securitygroups_rpc [None req-71e1d61a-9581-46c3-850d-9298f6399521 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.267 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.288 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.289 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.311 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.330 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.350 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:04:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e166 do_prune osdmap full prune enabled Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.692 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e167 e167: 6 total, 6 up, 6 in Nov 23 05:04:22 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e167: 6 total, 6 up, 6 in Nov 23 05:04:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:04:22 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2622383035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.869 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.875 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.899 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.902 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:04:22 localhost nova_compute[280939]: 2025-11-23 10:04:22.903 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:04:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:04:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:04:23 Nov 23 05:04:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:04:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:04:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'images', 'manila_data', '.mgr', 'manila_metadata'] Nov 23 05:04:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:04:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:04:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:04:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:04:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:04:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
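The resource tracker records above publish an inventory of VCPU, MEMORY_MB and DISK_GB for provider c90c5769-42ab-40e9-92fc-3d82b4e96052. A short sketch of the usual way such an inventory is read: schedulable capacity per resource class is roughly (total - reserved) * allocation_ratio. This is illustrative arithmetic over the values logged above, not the exact nova/placement code.

```python
# Inventory values exactly as logged for provider c90c5769-42ab-40e9-92fc-3d82b4e96052.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
}

for rc, inv in inventory.items():
    # Rough schedulable capacity: what remains after the reservation,
    # multiplied by the overcommit (allocation) ratio.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable units")
```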
Nov 23 05:04:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:04:23 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:23.445 2 INFO neutron.agent.securitygroups_rpc [None req-360628bd-6ea7-46e4-a35e-1747acc2d18c 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:23 localhost nova_compute[280939]: 2025-11-23 10:04:23.453 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 225 MiB data, 1017 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 30 KiB/s wr, 60 op/s Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006580482708682301 of space, bias 1.0, pg target 1.31609654173646 quantized to 32 (current 32) Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32) Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00430020109226689 of space, bias 1.0, pg target 0.8557400173611112 quantized to 32 (current 32) Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.4071718546435884e-05 quantized to 32 (current 32) Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:04:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 1.1995742601898381e-05 of space, bias 4.0, pg target 0.009516622464172715 quantized to 16 (current 16) Nov 23 05:04:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:04:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:04:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:04:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 
23 05:04:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:04:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:04:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:04:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:04:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:04:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:04:23 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:23.633 262301 INFO neutron.agent.linux.ip_lib [None req-018f7b77-6be4-44ab-b1d6-f58a754606a9 - - - - - -] Device tapacc5f86d-6f cannot be used as it has no MAC address#033[00m Nov 23 05:04:23 localhost nova_compute[280939]: 2025-11-23 10:04:23.657 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:23 localhost kernel: device tapacc5f86d-6f entered promiscuous mode Nov 23 05:04:23 localhost NetworkManager[5966]: [1763892263.6654] manager: (tapacc5f86d-6f): new Generic device (/org/freedesktop/NetworkManager/Devices/50) Nov 23 05:04:23 localhost systemd-udevd[321213]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:04:23 localhost ovn_controller[153771]: 2025-11-23T10:04:23Z|00295|binding|INFO|Claiming lport acc5f86d-6f7b-418a-a709-1db740660938 for this chassis. Nov 23 05:04:23 localhost ovn_controller[153771]: 2025-11-23T10:04:23Z|00296|binding|INFO|acc5f86d-6f7b-418a-a709-1db740660938: Claiming unknown Nov 23 05:04:23 localhost nova_compute[280939]: 2025-11-23 10:04:23.669 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:23 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:23.693 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-8ce64d8c-93da-441d-aa9c-4d37e8c489ca', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ce64d8c-93da-441d-aa9c-4d37e8c489ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8633d61c76748a7a900f3c8cea84ef3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afb8a427-93b9-4801-bd72-1072a1ff7873, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=acc5f86d-6f7b-418a-a709-1db740660938) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:23 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:23.694 159415 INFO neutron.agent.ovn.metadata.agent [-] Port acc5f86d-6f7b-418a-a709-1db740660938 
in datapath 8ce64d8c-93da-441d-aa9c-4d37e8c489ca bound to our chassis#033[00m Nov 23 05:04:23 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:23.696 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8ce64d8c-93da-441d-aa9c-4d37e8c489ca or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:04:23 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:23.697 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[86264eb9-0a22-4103-91e7-6c0f04231b0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:23 localhost journal[229336]: ethtool ioctl error on tapacc5f86d-6f: No such device Nov 23 05:04:23 localhost journal[229336]: ethtool ioctl error on tapacc5f86d-6f: No such device Nov 23 05:04:23 localhost ovn_controller[153771]: 2025-11-23T10:04:23Z|00297|binding|INFO|Setting lport acc5f86d-6f7b-418a-a709-1db740660938 ovn-installed in OVS Nov 23 05:04:23 localhost ovn_controller[153771]: 2025-11-23T10:04:23Z|00298|binding|INFO|Setting lport acc5f86d-6f7b-418a-a709-1db740660938 up in Southbound Nov 23 05:04:23 localhost nova_compute[280939]: 2025-11-23 10:04:23.709 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e167 do_prune osdmap full prune enabled Nov 23 05:04:23 localhost journal[229336]: ethtool ioctl error on tapacc5f86d-6f: No such device Nov 23 05:04:23 localhost journal[229336]: ethtool ioctl error on tapacc5f86d-6f: No such device Nov 23 05:04:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e168 e168: 6 total, 6 up, 6 in Nov 23 05:04:23 localhost journal[229336]: ethtool ioctl error on tapacc5f86d-6f: No such device Nov 23 05:04:23 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e168: 6 total, 6 up, 6 in Nov 23 05:04:23 localhost journal[229336]: ethtool ioctl error on tapacc5f86d-6f: No such device Nov 23 05:04:23 localhost journal[229336]: ethtool ioctl error on tapacc5f86d-6f: No such device Nov 23 05:04:23 localhost journal[229336]: ethtool ioctl error on tapacc5f86d-6f: No such device Nov 23 05:04:23 localhost nova_compute[280939]: 2025-11-23 10:04:23.754 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:23 localhost nova_compute[280939]: 2025-11-23 10:04:23.793 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:23 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:23.979 2 INFO neutron.agent.securitygroups_rpc [None req-53980d87-428d-4527-8575-9963b178026f 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:24 localhost podman[321284]: Nov 23 05:04:24 localhost podman[321284]: 2025-11-23 10:04:24.704567928 +0000 UTC m=+0.085910690 container create 086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8ce64d8c-93da-441d-aa9c-4d37e8c489ca, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 05:04:24 localhost systemd[1]: Started libpod-conmon-086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919.scope. Nov 23 05:04:24 localhost systemd[1]: Started libcrun container. Nov 23 05:04:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe2e08f5054554d1b324fedecc5d6837f722ff25bf05ca6e6face6988b66898c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:04:24 localhost podman[321284]: 2025-11-23 10:04:24.662769699 +0000 UTC m=+0.044112491 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:04:24 localhost podman[321284]: 2025-11-23 10:04:24.770308385 +0000 UTC m=+0.151651157 container init 086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8ce64d8c-93da-441d-aa9c-4d37e8c489ca, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:04:24 localhost podman[321284]: 2025-11-23 10:04:24.778840708 +0000 UTC m=+0.160183470 container start 086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8ce64d8c-93da-441d-aa9c-4d37e8c489ca, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:24 localhost dnsmasq[321302]: started, version 2.85 cachesize 150 Nov 23 05:04:24 localhost dnsmasq[321302]: DNS service limited to local subnets Nov 23 05:04:24 localhost dnsmasq[321302]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:04:24 localhost dnsmasq[321302]: warning: no upstream servers configured Nov 23 05:04:24 localhost dnsmasq-dhcp[321302]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:04:24 localhost dnsmasq[321302]: read /var/lib/neutron/dhcp/8ce64d8c-93da-441d-aa9c-4d37e8c489ca/addn_hosts - 0 addresses Nov 23 05:04:24 localhost dnsmasq-dhcp[321302]: read /var/lib/neutron/dhcp/8ce64d8c-93da-441d-aa9c-4d37e8c489ca/host Nov 23 05:04:24 localhost dnsmasq-dhcp[321302]: read /var/lib/neutron/dhcp/8ce64d8c-93da-441d-aa9c-4d37e8c489ca/opts Nov 23 05:04:24 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:24.947 262301 INFO neutron.agent.dhcp.agent [None req-5e6a1d2b-ad24-49ba-8f9c-cc9328d857fc - - - - - -] DHCP configuration for ports {'adbf7839-9adc-4d0d-8ede-2fdfbe9852f8'} is completed#033[00m Nov 23 05:04:25 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot 
create", "vol_name": "cephfs", "sub_name": "7e0a27d8-81b8-442f-a3db-fa38a09d28ae", "snap_name": "5aacd1b6-f8f5-4003-8630-0121025e58d0", "format": "json"}]: dispatch Nov 23 05:04:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5aacd1b6-f8f5-4003-8630-0121025e58d0, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:04:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5aacd1b6-f8f5-4003-8630-0121025e58d0, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:04:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:04:25 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2125767633' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:04:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 271 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 141 KiB/s rd, 3.6 MiB/s wr, 208 op/s Nov 23 05:04:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e168 do_prune osdmap full prune enabled Nov 23 05:04:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e169 e169: 6 total, 6 up, 6 in Nov 23 05:04:25 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e169: 6 total, 6 up, 6 in Nov 23 05:04:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:04:26 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3931301299' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:04:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:04:26 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3931301299' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:04:26 localhost nova_compute[280939]: 2025-11-23 10:04:26.357 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e169 do_prune osdmap full prune enabled Nov 23 05:04:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e170 e170: 6 total, 6 up, 6 in Nov 23 05:04:26 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e170: 6 total, 6 up, 6 in Nov 23 05:04:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 271 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 174 KiB/s rd, 4.5 MiB/s wr, 256 op/s Nov 23 05:04:27 localhost nova_compute[280939]: 2025-11-23 10:04:27.723 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e170 do_prune osdmap full prune enabled Nov 23 05:04:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e171 e171: 6 total, 6 up, 6 in Nov 23 05:04:27 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e171: 6 total, 6 up, 6 in Nov 23 05:04:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 23 05:04:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e171 do_prune osdmap full prune enabled Nov 23 05:04:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e172 e172: 6 total, 6 up, 6 in Nov 23 05:04:28 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e172: 6 total, 6 up, 6 in Nov 23 05:04:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "7e0a27d8-81b8-442f-a3db-fa38a09d28ae", "snap_name": "5aacd1b6-f8f5-4003-8630-0121025e58d0", "target_sub_name": "b0d4ea08-4592-4af4-b78a-914919545708", "format": "json"}]: dispatch Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:5aacd1b6-f8f5-4003-8630-0121025e58d0, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, target_sub_name:b0d4ea08-4592-4af4-b78a-914919545708, vol_name:cephfs) < "" Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta.tmp' Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta.tmp' to config b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta' Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 727591b1-a1a7-487a-b7f6-b3ba2bddf956 for path b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708' Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta.tmp' Nov 23 05:04:28 localhost 
ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta.tmp' to config b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta' Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:5aacd1b6-f8f5-4003-8630-0121025e58d0, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, target_sub_name:b0d4ea08-4592-4af4-b78a-914919545708, vol_name:cephfs) < "" Nov 23 05:04:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b0d4ea08-4592-4af4-b78a-914919545708", "format": "json"}]: dispatch Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b0d4ea08-4592-4af4-b78a-914919545708, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:04:28 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:28.844+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:28.844+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:28.844+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:28.844+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:28.844+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b0d4ea08-4592-4af4-b78a-914919545708, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708 Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, b0d4ea08-4592-4af4-b78a-914919545708) Nov 23 05:04:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:28.875+0000 7f9ccc311640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: 
client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:28.875+0000 7f9ccc311640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:28.875+0000 7f9ccc311640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:28.875+0000 7f9ccc311640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:28.875+0000 7f9ccc311640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, b0d4ea08-4592-4af4-b78a-914919545708) -- by 0 seconds Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta.tmp' Nov 23 05:04:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta.tmp' to config b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta' Nov 23 05:04:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e172 do_prune osdmap full prune enabled Nov 23 05:04:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e173 e173: 6 total, 6 up, 6 in Nov 23 05:04:29 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e173: 6 total, 6 up, 6 in Nov 23 05:04:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 225 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 234 KiB/s rd, 42 KiB/s wr, 324 op/s Nov 23 05:04:29 localhost nova_compute[280939]: 2025-11-23 10:04:29.735 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:04:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:04:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
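The ceph-mgr audit entries above record the CephFS volumes-module commands issued by client.openstack (fs subvolume snapshot create, fs subvolume snapshot clone, fs clone status). A minimal Python sketch of the equivalent ceph CLI calls, assuming a ceph client and a suitably privileged keyring are available on this host, and reusing the volume/subvolume/snapshot names from the log:

    # Sketch only: replays the "fs subvolume snapshot ..." commands seen in the ceph-mgr audit
    # log above via the ceph CLI. Assumes the ceph client and a keyring with sufficient
    # privileges are present on this host.
    import json
    import subprocess

    VOL = "cephfs"
    SUB = "7e0a27d8-81b8-442f-a3db-fa38a09d28ae"
    SNAP = "5aacd1b6-f8f5-4003-8630-0121025e58d0"
    CLONE = "b0d4ea08-4592-4af4-b78a-914919545708"

    def ceph(*args):
        # --format json mirrors the "format": "json" field in the dispatched mgr commands
        out = subprocess.run(("ceph", *args, "--format", "json"),
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out) if out.strip() else None

    ceph("fs", "subvolume", "snapshot", "create", VOL, SUB, SNAP)
    ceph("fs", "subvolume", "snapshot", "clone", VOL, SUB, SNAP, CLONE)
    # The clone runs asynchronously; the mgr logs "finished clone" later in this capture.
    status = ceph("fs", "clone", "status", VOL, CLONE)
    print(status["status"]["state"])  # e.g. "in-progress" or "complete"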
Nov 23 05:04:29 localhost podman[321327]: 2025-11-23 10:04:29.891302716 +0000 UTC m=+0.079431150 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 05:04:29 localhost podman[321327]: 2025-11-23 10:04:29.908364172 +0000 UTC m=+0.096492606 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 23 05:04:29 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:04:29 localhost podman[321329]: 2025-11-23 10:04:29.958159618 +0000 UTC m=+0.137815811 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 05:04:30 localhost systemd[1]: tmp-crun.9b7S08.mount: Deactivated successfully. 
Nov 23 05:04:30 localhost podman[321328]: 2025-11-23 10:04:30.049063451 +0000 UTC m=+0.233361247 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:04:30 localhost podman[321329]: 2025-11-23 10:04:30.065293911 +0000 UTC m=+0.244950054 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller) Nov 23 05:04:30 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
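The transient "podman healthcheck run <id>" units started above are what produce the health_status=healthy / exec_died entries for multipathd, ovn_controller and node_exporter. A small sketch of running the same checks by hand, assuming only the container names shown in the log; "podman healthcheck run" exits 0 when the container's configured healthcheck command (here /openstack/healthcheck) passes:

    # Sketch: run the same container health checks manually and report the result.
    import subprocess

    # Container names taken from the health_status log entries above.
    CONTAINERS = ["multipathd", "ovn_controller", "node_exporter"]

    for name in CONTAINERS:
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        status = "healthy" if rc == 0 else f"unhealthy (exit {rc})"
        print(name, status)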
Nov 23 05:04:30 localhost podman[321328]: 2025-11-23 10:04:30.083026748 +0000 UTC m=+0.267324544 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:04:30 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:04:30 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e47: np0005532584.naxwxy(active, since 9m), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 05:04:30 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:30.889 262301 INFO neutron.agent.linux.ip_lib [None req-2179394f-f13b-4aa8-b444-17637d279501 - - - - - -] Device tapb8c1cfda-6d cannot be used as it has no MAC address#033[00m Nov 23 05:04:30 localhost nova_compute[280939]: 2025-11-23 10:04:30.912 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:30 localhost kernel: device tapb8c1cfda-6d entered promiscuous mode Nov 23 05:04:30 localhost nova_compute[280939]: 2025-11-23 10:04:30.919 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:30 localhost NetworkManager[5966]: [1763892270.9199] manager: (tapb8c1cfda-6d): new Generic device (/org/freedesktop/NetworkManager/Devices/51) Nov 23 05:04:30 localhost systemd-udevd[321404]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:04:30 localhost ovn_controller[153771]: 2025-11-23T10:04:30Z|00299|binding|INFO|Claiming lport b8c1cfda-6dfa-4acd-bcd2-d6819d28d7aa for this chassis. 
Nov 23 05:04:30 localhost ovn_controller[153771]: 2025-11-23T10:04:30Z|00300|binding|INFO|b8c1cfda-6dfa-4acd-bcd2-d6819d28d7aa: Claiming unknown Nov 23 05:04:30 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:30.935 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-96153b48-2c08-4a3f-a6d9-b9089249ef08', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96153b48-2c08-4a3f-a6d9-b9089249ef08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff24b2c1-5d71-49e0-ab46-3a3b3a7159e5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b8c1cfda-6dfa-4acd-bcd2-d6819d28d7aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:30 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:30.937 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b8c1cfda-6dfa-4acd-bcd2-d6819d28d7aa in datapath 96153b48-2c08-4a3f-a6d9-b9089249ef08 bound to our chassis#033[00m Nov 23 05:04:30 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:30.939 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 96153b48-2c08-4a3f-a6d9-b9089249ef08 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:04:30 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:30.940 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[99189cc3-db7d-4584-89af-aec7f0b6a67b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:30 localhost ovn_controller[153771]: 2025-11-23T10:04:30Z|00301|binding|INFO|Setting lport b8c1cfda-6dfa-4acd-bcd2-d6819d28d7aa ovn-installed in OVS Nov 23 05:04:30 localhost ovn_controller[153771]: 2025-11-23T10:04:30Z|00302|binding|INFO|Setting lport b8c1cfda-6dfa-4acd-bcd2-d6819d28d7aa up in Southbound Nov 23 05:04:30 localhost nova_compute[280939]: 2025-11-23 10:04:30.964 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:30 localhost nova_compute[280939]: 2025-11-23 10:04:30.997 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:31 localhost nova_compute[280939]: 2025-11-23 10:04:31.024 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e173 do_prune osdmap full prune enabled Nov 23 05:04:31 
localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e174 e174: 6 total, 6 up, 6 in Nov 23 05:04:31 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e174: 6 total, 6 up, 6 in Nov 23 05:04:31 localhost nova_compute[280939]: 2025-11-23 10:04:31.358 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 225 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 217 KiB/s rd, 39 KiB/s wr, 301 op/s Nov 23 05:04:31 localhost podman[321459]: Nov 23 05:04:31 localhost podman[321459]: 2025-11-23 10:04:31.874253319 +0000 UTC m=+0.089882142 container create 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:31 localhost systemd[1]: Started libpod-conmon-184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b.scope. Nov 23 05:04:31 localhost systemd[1]: Started libcrun container. Nov 23 05:04:31 localhost podman[321459]: 2025-11-23 10:04:31.83045924 +0000 UTC m=+0.046088083 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:04:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406ddf4a7eafc33076f858f46f5339986839b7ff5eef27392062c18b42f34b95/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:04:31 localhost podman[321459]: 2025-11-23 10:04:31.944404182 +0000 UTC m=+0.160033005 container init 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 05:04:31 localhost podman[321459]: 2025-11-23 10:04:31.9540491 +0000 UTC m=+0.169677923 container start 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 05:04:31 localhost dnsmasq[321477]: started, version 2.85 cachesize 150 Nov 23 05:04:31 localhost dnsmasq[321477]: DNS service limited to local subnets Nov 23 05:04:31 localhost dnsmasq[321477]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP 
no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:04:31 localhost dnsmasq[321477]: warning: no upstream servers configured Nov 23 05:04:31 localhost dnsmasq-dhcp[321477]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:04:31 localhost dnsmasq[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/addn_hosts - 0 addresses Nov 23 05:04:31 localhost dnsmasq-dhcp[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/host Nov 23 05:04:31 localhost dnsmasq-dhcp[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/opts Nov 23 05:04:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e174 do_prune osdmap full prune enabled Nov 23 05:04:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e175 e175: 6 total, 6 up, 6 in Nov 23 05:04:32 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e175: 6 total, 6 up, 6 in Nov 23 05:04:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:32.150 262301 INFO neutron.agent.dhcp.agent [None req-77a39647-25da-4a5a-805b-2fb4b0e7fc62 - - - - - -] DHCP configuration for ports {'73761712-46c3-47f5-82c0-fe48af81d70b'} is completed#033[00m Nov 23 05:04:32 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:32.610 2 INFO neutron.agent.securitygroups_rpc [None req-fa166c8d-7d85-4b6f-949c-c3bef6490854 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:32.659 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:32Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d81e5ae0-8006-4c41-a458-9bb8daec5f37, ip_allocation=immediate, mac_address=fa:16:3e:5b:37:c3, name=tempest-PortsTestJSON-1325385055, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:04:28Z, description=, dns_domain=, id=96153b48-2c08-4a3f-a6d9-b9089249ef08, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1233356320, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59184, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2563, status=ACTIVE, subnets=['2b5205b8-64c3-4cd1-8d8e-17bf88a902a3'], tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:29Z, vlan_transparent=None, network_id=96153b48-2c08-4a3f-a6d9-b9089249ef08, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['cadd5356-9a8d-419a-ac04-589c2522a695'], standard_attr_id=2581, status=DOWN, tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:32Z on network 96153b48-2c08-4a3f-a6d9-b9089249ef08#033[00m Nov 23 05:04:32 localhost nova_compute[280939]: 2025-11-23 10:04:32.770 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 
05:04:32 localhost dnsmasq[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/addn_hosts - 1 addresses Nov 23 05:04:32 localhost dnsmasq-dhcp[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/host Nov 23 05:04:32 localhost dnsmasq-dhcp[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/opts Nov 23 05:04:32 localhost podman[321494]: 2025-11-23 10:04:32.888686539 +0000 UTC m=+0.059821276 container kill 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 05:04:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:04:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e175 do_prune osdmap full prune enabled Nov 23 05:04:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e176 e176: 6 total, 6 up, 6 in Nov 23 05:04:33 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e176: 6 total, 6 up, 6 in Nov 23 05:04:33 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:33.106 262301 INFO neutron.agent.dhcp.agent [None req-d65cb1d6-8b4a-466b-b30e-93d372488b5f - - - - - -] DHCP configuration for ports {'d81e5ae0-8006-4c41-a458-9bb8daec5f37'} is completed#033[00m Nov 23 05:04:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.snap/5aacd1b6-f8f5-4003-8630-0121025e58d0/337edaa6-e007-40fb-a40f-7acdf30759ff' to b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/9ac8053d-02aa-4e1c-b808-930cbd88817b' Nov 23 05:04:33 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:33.347 2 INFO neutron.agent.securitygroups_rpc [None req-4ffb6c00-2d92-41b8-843d-89b3bf39eddb 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta.tmp' Nov 23 05:04:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta.tmp' to config b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta' Nov 23 05:04:33 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:33.399 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:33Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3a696ed6-4283-469c-885b-3ee394c144d9, ip_allocation=immediate, mac_address=fa:16:3e:8a:7d:6c, name=tempest-PortsTestJSON-376539513, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:04:28Z, description=, dns_domain=, id=96153b48-2c08-4a3f-a6d9-b9089249ef08, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1233356320, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59184, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2563, status=ACTIVE, subnets=['2b5205b8-64c3-4cd1-8d8e-17bf88a902a3'], tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:29Z, vlan_transparent=None, network_id=96153b48-2c08-4a3f-a6d9-b9089249ef08, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['cadd5356-9a8d-419a-ac04-589c2522a695'], standard_attr_id=2588, status=DOWN, tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:33Z on network 96153b48-2c08-4a3f-a6d9-b9089249ef08#033[00m Nov 23 05:04:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.clone_index] untracking 727591b1-a1a7-487a-b7f6-b3ba2bddf956 Nov 23 05:04:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta.tmp' Nov 23 05:04:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta.tmp' to config b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta' Nov 23 05:04:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta.tmp' Nov 23 05:04:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta.tmp' to config b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708/.meta' Nov 23 05:04:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, b0d4ea08-4592-4af4-b78a-914919545708) Nov 23 05:04:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 225 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 197 KiB/s rd, 36 KiB/s wr, 272 op/s Nov 23 05:04:33 localhost dnsmasq[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/addn_hosts - 2 addresses Nov 23 05:04:33 localhost podman[321533]: 2025-11-23 10:04:33.611799005 +0000 UTC m=+0.057317298 container kill 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 23 05:04:33 localhost dnsmasq-dhcp[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/host Nov 23 05:04:33 localhost 
dnsmasq-dhcp[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/opts Nov 23 05:04:33 localhost nova_compute[280939]: 2025-11-23 10:04:33.670 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:33 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:33.924 262301 INFO neutron.agent.dhcp.agent [None req-85c463b2-93cf-4069-954d-57342a22ab0c - - - - - -] DHCP configuration for ports {'3a696ed6-4283-469c-885b-3ee394c144d9'} is completed#033[00m Nov 23 05:04:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e176 do_prune osdmap full prune enabled Nov 23 05:04:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e177 e177: 6 total, 6 up, 6 in Nov 23 05:04:34 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e177: 6 total, 6 up, 6 in Nov 23 05:04:34 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:34.390 2 INFO neutron.agent.securitygroups_rpc [None req-e699b8a4-5f06-487a-a300-fd9ee1a788a2 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:34 localhost systemd[1]: tmp-crun.vvAjMP.mount: Deactivated successfully. Nov 23 05:04:34 localhost dnsmasq[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/addn_hosts - 1 addresses Nov 23 05:04:34 localhost dnsmasq-dhcp[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/host Nov 23 05:04:34 localhost podman[321569]: 2025-11-23 10:04:34.644147727 +0000 UTC m=+0.069186274 container kill 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 05:04:34 localhost dnsmasq-dhcp[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/opts Nov 23 05:04:35 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:35.164 2 INFO neutron.agent.securitygroups_rpc [None req-519a4a04-72f9-40b4-96af-e4987b6dbb80 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e177 do_prune osdmap full prune enabled Nov 23 05:04:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e178 e178: 6 total, 6 up, 6 in Nov 23 05:04:35 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e178: 6 total, 6 up, 6 in Nov 23 05:04:35 localhost dnsmasq[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/addn_hosts - 0 addresses Nov 23 05:04:35 localhost dnsmasq-dhcp[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/host Nov 23 05:04:35 localhost podman[321607]: 2025-11-23 10:04:35.416019697 +0000 UTC m=+0.068736750 container kill 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:04:35 localhost dnsmasq-dhcp[321477]: read /var/lib/neutron/dhcp/96153b48-2c08-4a3f-a6d9-b9089249ef08/opts Nov 23 05:04:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 225 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 152 KiB/s rd, 27 KiB/s wr, 207 op/s Nov 23 05:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:04:35 localhost podman[321630]: 2025-11-23 10:04:35.890917841 +0000 UTC m=+0.077357537 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container) Nov 23 05:04:35 localhost podman[321630]: 2025-11-23 10:04:35.906375688 +0000 UTC m=+0.092815354 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public) Nov 23 05:04:35 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
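Both exporters in the config_data above publish on the host network: node_exporter on 9100 and openstack_network_exporter on 9105. A quick scrape sketch to confirm they answer; the /metrics path is the usual Prometheus convention and is an assumption here, since the log itself only records the port mappings:

    # Sketch: probe the two Prometheus exporters configured above. The /metrics path is
    # assumed (standard exporter convention), not taken from the log.
    import urllib.request

    for port, name in ((9100, "node_exporter"), (9105, "openstack_network_exporter")):
        url = f"http://localhost:{port}/metrics"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                lines = resp.read().decode().splitlines()
            print(f"{name}: HTTP {resp.status}, {len(lines)} metric lines")
        except OSError as exc:
            print(f"{name}: scrape failed: {exc}")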
Nov 23 05:04:35 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:35.964 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:35Z, description=, device_id=bace1285-0dd7-4599-87e6-ae783a5add31, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4971c214-76f9-4012-b6ef-0eed4208249b, ip_allocation=immediate, mac_address=fa:16:3e:4b:8a:9e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:04:12Z, description=, dns_domain=, id=98710900-4c46-474d-8df7-759682420b6f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-1326351715, port_security_enabled=True, project_id=d8633d61c76748a7a900f3c8cea84ef3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26660, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2490, status=ACTIVE, subnets=['68080f35-61e6-44a4-86fd-8ed5851f3d8c'], tags=[], tenant_id=d8633d61c76748a7a900f3c8cea84ef3, updated_at=2025-11-23T10:04:14Z, vlan_transparent=None, network_id=98710900-4c46-474d-8df7-759682420b6f, port_security_enabled=False, project_id=d8633d61c76748a7a900f3c8cea84ef3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2596, status=DOWN, tags=[], tenant_id=d8633d61c76748a7a900f3c8cea84ef3, updated_at=2025-11-23T10:04:35Z on network 98710900-4c46-474d-8df7-759682420b6f#033[00m Nov 23 05:04:36 localhost dnsmasq[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/addn_hosts - 1 addresses Nov 23 05:04:36 localhost dnsmasq-dhcp[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/host Nov 23 05:04:36 localhost dnsmasq-dhcp[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/opts Nov 23 05:04:36 localhost podman[321678]: 2025-11-23 10:04:36.176177267 +0000 UTC m=+0.062605572 container kill 0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-98710900-4c46-474d-8df7-759682420b6f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:04:36 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e178 do_prune osdmap full prune enabled Nov 23 05:04:36 localhost dnsmasq[321477]: exiting on receipt of SIGTERM Nov 23 05:04:36 localhost podman[321692]: 2025-11-23 10:04:36.225068204 +0000 UTC m=+0.060867197 container kill 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 05:04:36 localhost systemd[1]: libpod-184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b.scope: Deactivated successfully. Nov 23 05:04:36 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e179 e179: 6 total, 6 up, 6 in Nov 23 05:04:36 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e179: 6 total, 6 up, 6 in Nov 23 05:04:36 localhost podman[321709]: 2025-11-23 10:04:36.302416559 +0000 UTC m=+0.062363114 container died 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 05:04:36 localhost podman[321709]: 2025-11-23 10:04:36.33489086 +0000 UTC m=+0.094837375 container cleanup 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:04:36 localhost systemd[1]: libpod-conmon-184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b.scope: Deactivated successfully. 
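Each "container kill" on neutron-dnsmasq-qdhcp-96153b48-... earlier in this window is immediately followed by dnsmasq re-reading addn_hosts/host/opts, which matches dnsmasq's documented SIGHUP reload behaviour; only the final kill is followed by "exiting on receipt of SIGTERM" and the container teardown seen here. A sketch of triggering the same reload by hand with podman, reusing the container name from the log:

    # Sketch: ask the per-network dnsmasq to re-read its Neutron-generated host files.
    # dnsmasq reloads addn_hosts/host/opts on SIGHUP; SIGTERM (as in the final kill above)
    # makes it exit. Container name taken from the log entries above.
    import subprocess

    NAME = "neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08"

    def signal_dnsmasq(sig="HUP"):
        subprocess.run(["podman", "kill", "--signal", sig, NAME], check=True)

    signal_dnsmasq("HUP")     # reload /var/lib/neutron/dhcp/<network_id>/{addn_hosts,host,opts}
    # signal_dnsmasq("TERM")  # would stop the container, as seen at 10:04:36 above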
Nov 23 05:04:36 localhost nova_compute[280939]: 2025-11-23 10:04:36.360 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:36 localhost podman[321712]: 2025-11-23 10:04:36.387637896 +0000 UTC m=+0.138344196 container remove 184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-96153b48-2c08-4a3f-a6d9-b9089249ef08, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 05:04:36 localhost ovn_controller[153771]: 2025-11-23T10:04:36Z|00303|binding|INFO|Releasing lport b8c1cfda-6dfa-4acd-bcd2-d6819d28d7aa from this chassis (sb_readonly=0) Nov 23 05:04:36 localhost kernel: device tapb8c1cfda-6d left promiscuous mode Nov 23 05:04:36 localhost ovn_controller[153771]: 2025-11-23T10:04:36Z|00304|binding|INFO|Setting lport b8c1cfda-6dfa-4acd-bcd2-d6819d28d7aa down in Southbound Nov 23 05:04:36 localhost nova_compute[280939]: 2025-11-23 10:04:36.400 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:36 localhost nova_compute[280939]: 2025-11-23 10:04:36.418 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:36 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:36.514 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-96153b48-2c08-4a3f-a6d9-b9089249ef08', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96153b48-2c08-4a3f-a6d9-b9089249ef08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff24b2c1-5d71-49e0-ab46-3a3b3a7159e5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b8c1cfda-6dfa-4acd-bcd2-d6819d28d7aa) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:36 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:36.515 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b8c1cfda-6dfa-4acd-bcd2-d6819d28d7aa in datapath 96153b48-2c08-4a3f-a6d9-b9089249ef08 unbound from our chassis#033[00m Nov 23 05:04:36 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:36.519 159415 DEBUG 
neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96153b48-2c08-4a3f-a6d9-b9089249ef08, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:04:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:36.521 262301 INFO neutron.agent.dhcp.agent [None req-00a97453-da76-4dff-8662-4b947a00ab2c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:36 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:36.521 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[9c503f29-7c4c-4a6a-8b21-2cf159b9b311]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:36.569 262301 INFO neutron.agent.dhcp.agent [None req-6c2059c0-ffe3-4b8b-931f-13d2de98fa2e - - - - - -] DHCP configuration for ports {'4971c214-76f9-4012-b6ef-0eed4208249b'} is completed#033[00m Nov 23 05:04:36 localhost openstack_network_exporter[241732]: ERROR 10:04:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:04:36 localhost openstack_network_exporter[241732]: ERROR 10:04:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:04:36 localhost openstack_network_exporter[241732]: ERROR 10:04:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:04:36 localhost openstack_network_exporter[241732]: ERROR 10:04:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:04:36 localhost openstack_network_exporter[241732]: Nov 23 05:04:36 localhost openstack_network_exporter[241732]: ERROR 10:04:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:04:36 localhost openstack_network_exporter[241732]: Nov 23 05:04:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:36.859 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:36 localhost systemd[1]: var-lib-containers-storage-overlay-406ddf4a7eafc33076f858f46f5339986839b7ff5eef27392062c18b42f34b95-merged.mount: Deactivated successfully. Nov 23 05:04:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-184ea9861ae410100ddc3b7ac0b4aecd85114e63cac7dd54db1ae699518ae21b-userdata-shm.mount: Deactivated successfully. Nov 23 05:04:36 localhost systemd[1]: run-netns-qdhcp\x2d96153b48\x2d2c08\x2d4a3f\x2da6d9\x2db9089249ef08.mount: Deactivated successfully. 
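The PortBindingUpdatedEvent match above shows how ovn_metadata_agent reacts to Port_Binding changes through ovsdbapp row events; the log even prints the event's constructor fields (events=('update',), table='Port_Binding', conditions=None, old_conditions=None). Below is a minimal sketch of such an event class, assuming ovsdbapp is installed; the class name and handler body are illustrative, not Neutron's actual implementation, and registration with the Southbound IDL is omitted.

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Illustrative: react when a Port_Binding row is updated."""

        def __init__(self):
            # Same fields the agent logs above: watch 'update' events on
            # the Port_Binding table, with no static column conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # row.logical_port, row.up and row.datapath mirror the columns
            # printed in the matched row; a real handler would provision or
            # tear down the metadata namespace for row.datapath here.
            print('Port_Binding %s changed, up=%s' % (row.logical_port, row.up))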
Nov 23 05:04:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:04:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:04:37 localhost nova_compute[280939]: 2025-11-23 10:04:37.204 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:04:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:04:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:04:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "format": "json"}]: dispatch Nov 23 05:04:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:04:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:04:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:04:37 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:04:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:37.374 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:35Z, description=, device_id=bace1285-0dd7-4599-87e6-ae783a5add31, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4971c214-76f9-4012-b6ef-0eed4208249b, ip_allocation=immediate, mac_address=fa:16:3e:4b:8a:9e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:04:12Z, description=, dns_domain=, id=98710900-4c46-474d-8df7-759682420b6f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-1326351715, 
port_security_enabled=True, project_id=d8633d61c76748a7a900f3c8cea84ef3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26660, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2490, status=ACTIVE, subnets=['68080f35-61e6-44a4-86fd-8ed5851f3d8c'], tags=[], tenant_id=d8633d61c76748a7a900f3c8cea84ef3, updated_at=2025-11-23T10:04:14Z, vlan_transparent=None, network_id=98710900-4c46-474d-8df7-759682420b6f, port_security_enabled=False, project_id=d8633d61c76748a7a900f3c8cea84ef3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2596, status=DOWN, tags=[], tenant_id=d8633d61c76748a7a900f3c8cea84ef3, updated_at=2025-11-23T10:04:35Z on network 98710900-4c46-474d-8df7-759682420b6f#033[00m Nov 23 05:04:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 225 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 137 KiB/s rd, 24 KiB/s wr, 186 op/s Nov 23 05:04:37 localhost dnsmasq[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/addn_hosts - 1 addresses Nov 23 05:04:37 localhost podman[321763]: 2025-11-23 10:04:37.572386767 +0000 UTC m=+0.057506364 container kill 0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-98710900-4c46-474d-8df7-759682420b6f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 05:04:37 localhost dnsmasq-dhcp[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/host Nov 23 05:04:37 localhost dnsmasq-dhcp[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/opts Nov 23 05:04:37 localhost nova_compute[280939]: 2025-11-23 10:04:37.775 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:04:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e179 do_prune osdmap full prune enabled Nov 23 05:04:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e180 e180: 6 total, 6 up, 6 in Nov 23 05:04:38 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e180: 6 total, 6 up, 6 in Nov 23 05:04:38 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:38.286 262301 INFO neutron.agent.dhcp.agent [None req-fdf465da-378e-4448-9eea-3a57e68a2220 - - - - - -] DHCP configuration for ports {'4971c214-76f9-4012-b6ef-0eed4208249b'} is completed#033[00m Nov 23 05:04:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e180 do_prune osdmap full prune enabled Nov 23 05:04:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e181 e181: 6 total, 6 up, 6 in Nov 23 05:04:39 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e181: 6 total, 6 up, 6 in Nov 23 05:04:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 146 MiB data, 914 MiB used, 41 
GiB / 42 GiB avail; 133 KiB/s rd, 24 KiB/s wr, 190 op/s Nov 23 05:04:40 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:40.211 2 INFO neutron.agent.securitygroups_rpc [req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a req-8f09e20d-9c7b-422f-bb6f-02a4d95509a5 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Security group member updated ['d77fc436-3ab1-42e0-a52b-861d18fcc237']#033[00m Nov 23 05:04:40 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4c1787fb-1a00-4288-a97a-0f0da441edd9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:04:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4c1787fb-1a00-4288-a97a-0f0da441edd9, vol_name:cephfs) < "" Nov 23 05:04:40 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. Nov 23 05:04:40 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4c1787fb-1a00-4288-a97a-0f0da441edd9/.meta.tmp' Nov 23 05:04:40 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4c1787fb-1a00-4288-a97a-0f0da441edd9/.meta.tmp' to config b'/volumes/_nogroup/4c1787fb-1a00-4288-a97a-0f0da441edd9/.meta' Nov 23 05:04:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4c1787fb-1a00-4288-a97a-0f0da441edd9, vol_name:cephfs) < "" Nov 23 05:04:40 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4c1787fb-1a00-4288-a97a-0f0da441edd9", "format": "json"}]: dispatch Nov 23 05:04:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4c1787fb-1a00-4288-a97a-0f0da441edd9, vol_name:cephfs) < "" Nov 23 05:04:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4c1787fb-1a00-4288-a97a-0f0da441edd9, vol_name:cephfs) < "" Nov 23 05:04:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:04:40 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
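The ceph-mgr entries above record client.openstack dispatching "fs subvolume create" and "fs subvolume getpath" to the mgr volumes module (a 1 GiB CephFS subvolume per request). A hedged sketch of the equivalent calls driven from Python through the ceph CLI follows; the subvolume name is copied from the log for illustration, and the --size/--namespace-isolated/--mode flags follow the documented "ceph fs subvolume" interface.

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its stdout as text."""
        return subprocess.run(('ceph',) + args, check=True,
                              capture_output=True, text=True).stdout.strip()

    name = '4c1787fb-1a00-4288-a97a-0f0da441edd9'  # subvolume name from the log
    ceph('fs', 'subvolume', 'create', 'cephfs', name,
         '--size', '1073741824', '--namespace-isolated', '--mode', '0755')
    # Prints the subvolume path under /volumes/_nogroup/, matching the
    # getpath command dispatched in the log.
    print(ceph('fs', 'subvolume', 'getpath', 'cephfs', name))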
Nov 23 05:04:40 localhost dnsmasq[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/addn_hosts - 0 addresses Nov 23 05:04:40 localhost dnsmasq-dhcp[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/host Nov 23 05:04:40 localhost podman[321803]: 2025-11-23 10:04:40.89766238 +0000 UTC m=+0.063602041 container kill 0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-98710900-4c46-474d-8df7-759682420b6f, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:40 localhost dnsmasq-dhcp[320994]: read /var/lib/neutron/dhcp/98710900-4c46-474d-8df7-759682420b6f/opts Nov 23 05:04:40 localhost podman[321802]: 2025-11-23 10:04:40.968860006 +0000 UTC m=+0.141345200 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:04:41 localhost podman[321802]: 2025-11-23 10:04:41.00856356 +0000 UTC m=+0.181048764 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:04:41 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
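Each "read .../addn_hosts" burst above is the per-network dnsmasq instance re-reading the host and option files the DHCP agent rewrites under /var/lib/neutron/dhcp/<network_id>/, and the adjacent podman "container kill" records are the agent signalling that container (dnsmasq keeps running and reloads, so the signal is presumably SIGHUP; the separate "exiting on receipt of SIGTERM" entries are full teardowns). A hypothetical reproduction of the reload step, assuming the container naming seen in the log:

    import subprocess

    NETWORK_ID = '98710900-4c46-474d-8df7-759682420b6f'  # network from the log

    def reload_dnsmasq(network_id):
        """Send SIGHUP to the per-network dnsmasq container so it re-reads
        its addn_hosts/host/opts files (illustrative, not the agent's code)."""
        subprocess.run(['podman', 'kill', '--signal', 'HUP',
                        'neutron-dnsmasq-qdhcp-' + network_id], check=True)

    reload_dnsmasq(NETWORK_ID)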
Nov 23 05:04:41 localhost podman[321800]: 2025-11-23 10:04:41.028203686 +0000 UTC m=+0.202962150 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible) Nov 23 05:04:41 localhost podman[321800]: 2025-11-23 10:04:41.067181248 +0000 UTC m=+0.241939692 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base 
Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm) Nov 23 05:04:41 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:04:41 localhost nova_compute[280939]: 2025-11-23 10:04:41.397 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e181 do_prune osdmap full prune enabled Nov 23 05:04:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 146 MiB data, 914 MiB used, 41 GiB / 42 GiB avail; 95 KiB/s rd, 17 KiB/s wr, 135 op/s Nov 23 05:04:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e182 e182: 6 total, 6 up, 6 in Nov 23 05:04:41 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e182: 6 total, 6 up, 6 in Nov 23 05:04:41 localhost nova_compute[280939]: 2025-11-23 10:04:41.755 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:41 localhost ovn_controller[153771]: 2025-11-23T10:04:41Z|00305|binding|INFO|Releasing lport 3f411789-3f6d-4f18-915e-3814f4c2617f from this chassis (sb_readonly=0) Nov 23 05:04:41 localhost kernel: device tap3f411789-3f left promiscuous mode Nov 23 05:04:41 localhost ovn_controller[153771]: 2025-11-23T10:04:41Z|00306|binding|INFO|Setting lport 3f411789-3f6d-4f18-915e-3814f4c2617f down in Southbound Nov 23 05:04:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:41.764 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-98710900-4c46-474d-8df7-759682420b6f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98710900-4c46-474d-8df7-759682420b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8633d61c76748a7a900f3c8cea84ef3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d8bc8af-8f44-4505-aef8-449a6bd992f6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3f411789-3f6d-4f18-915e-3814f4c2617f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:41.766 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 3f411789-3f6d-4f18-915e-3814f4c2617f in datapath 98710900-4c46-474d-8df7-759682420b6f unbound from our chassis#033[00m Nov 23 05:04:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:41.769 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for 
network 98710900-4c46-474d-8df7-759682420b6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:04:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:41.770 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[5dea9756-b4e7-42bc-aa1b-01cc20d59fcf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:41 localhost nova_compute[280939]: 2025-11-23 10:04:41.783 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:41 localhost nova_compute[280939]: 2025-11-23 10:04:41.784 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:04:41 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:04:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:04:41 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:04:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:04:41 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:04:41 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev c133914c-74c3-47e3-b275-eb3f67307a20 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:04:41 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev c133914c-74c3-47e3-b275-eb3f67307a20 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:04:41 localhost ceph-mgr[286671]: [progress INFO root] Completed event c133914c-74c3-47e3-b275-eb3f67307a20 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:04:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:04:41 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:04:42 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:42.016 262301 INFO neutron.agent.linux.ip_lib [None req-3bfba0f9-55f3-4cc3-987a-d0d34d80eaeb - - - - - -] Device tap36694c47-93 cannot be used as it has no MAC address#033[00m Nov 23 05:04:42 localhost nova_compute[280939]: 2025-11-23 10:04:42.043 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:42 localhost kernel: device tap36694c47-93 entered promiscuous mode Nov 23 05:04:42 localhost ovn_controller[153771]: 2025-11-23T10:04:42Z|00307|binding|INFO|Claiming lport 36694c47-93e6-4045-b9a7-afef939de74e for this chassis. 
Nov 23 05:04:42 localhost ovn_controller[153771]: 2025-11-23T10:04:42Z|00308|binding|INFO|36694c47-93e6-4045-b9a7-afef939de74e: Claiming unknown Nov 23 05:04:42 localhost nova_compute[280939]: 2025-11-23 10:04:42.052 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:42 localhost NetworkManager[5966]: [1763892282.0537] manager: (tap36694c47-93): new Generic device (/org/freedesktop/NetworkManager/Devices/52) Nov 23 05:04:42 localhost systemd-udevd[321961]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:04:42 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:42.075 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=713c56e3-969d-4c1d-ab29-b6778ff13864, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=36694c47-93e6-4045-b9a7-afef939de74e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:42 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:42.077 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 36694c47-93e6-4045-b9a7-afef939de74e in datapath b63c57d0-1ff8-4f4d-8c80-8dbecd341a54 bound to our chassis#033[00m Nov 23 05:04:42 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:42.079 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b63c57d0-1ff8-4f4d-8c80-8dbecd341a54 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:04:42 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:42.080 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[e68cf8c0-6f69-4f82-b79b-0fb492631a85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:42 localhost ovn_controller[153771]: 2025-11-23T10:04:42Z|00309|binding|INFO|Setting lport 36694c47-93e6-4045-b9a7-afef939de74e ovn-installed in OVS Nov 23 05:04:42 localhost ovn_controller[153771]: 2025-11-23T10:04:42Z|00310|binding|INFO|Setting lport 36694c47-93e6-4045-b9a7-afef939de74e up in Southbound Nov 23 05:04:42 localhost nova_compute[280939]: 2025-11-23 10:04:42.087 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:42 localhost nova_compute[280939]: 2025-11-23 10:04:42.122 
280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:42 localhost nova_compute[280939]: 2025-11-23 10:04:42.148 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:42 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:04:42 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:04:42 localhost nova_compute[280939]: 2025-11-23 10:04:42.818 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:04:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e182 do_prune osdmap full prune enabled Nov 23 05:04:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e183 e183: 6 total, 6 up, 6 in Nov 23 05:04:43 localhost podman[322014]: Nov 23 05:04:43 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e183: 6 total, 6 up, 6 in Nov 23 05:04:43 localhost podman[322014]: 2025-11-23 10:04:43.066940089 +0000 UTC m=+0.090338877 container create ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 05:04:43 localhost nova_compute[280939]: 2025-11-23 10:04:43.086 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:43 localhost systemd[1]: Started libpod-conmon-ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272.scope. Nov 23 05:04:43 localhost podman[322014]: 2025-11-23 10:04:43.019764244 +0000 UTC m=+0.043163062 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:04:43 localhost systemd[1]: tmp-crun.RJZfcI.mount: Deactivated successfully. Nov 23 05:04:43 localhost systemd[1]: Started libcrun container. 
Nov 23 05:04:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847fe1dc6af0be1f53e90afa9a8d88e7db874e8f13cb78ac28e939af9f791b90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:04:43 localhost podman[322014]: 2025-11-23 10:04:43.148276327 +0000 UTC m=+0.171675105 container init ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:43 localhost podman[322014]: 2025-11-23 10:04:43.156432409 +0000 UTC m=+0.179831187 container start ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:43 localhost dnsmasq[322032]: started, version 2.85 cachesize 150 Nov 23 05:04:43 localhost dnsmasq[322032]: DNS service limited to local subnets Nov 23 05:04:43 localhost dnsmasq[322032]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:04:43 localhost dnsmasq[322032]: warning: no upstream servers configured Nov 23 05:04:43 localhost dnsmasq-dhcp[322032]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:04:43 localhost dnsmasq[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/addn_hosts - 0 addresses Nov 23 05:04:43 localhost dnsmasq-dhcp[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/host Nov 23 05:04:43 localhost dnsmasq-dhcp[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/opts Nov 23 05:04:43 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:43.304 262301 INFO neutron.agent.dhcp.agent [None req-4297abef-5e7d-4a77-b6ba-97e68fcfefaa - - - - - -] DHCP configuration for ports {'254975a2-18e6-49a7-b553-57809abed95c'} is completed#033[00m Nov 23 05:04:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 146 MiB data, 914 MiB used, 41 GiB / 42 GiB avail; 105 KiB/s rd, 19 KiB/s wr, 149 op/s Nov 23 05:04:43 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:04:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:04:43 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:04:43 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": 
"4c1787fb-1a00-4288-a97a-0f0da441edd9", "new_size": 2147483648, "format": "json"}]: dispatch Nov 23 05:04:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4c1787fb-1a00-4288-a97a-0f0da441edd9, vol_name:cephfs) < "" Nov 23 05:04:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4c1787fb-1a00-4288-a97a-0f0da441edd9, vol_name:cephfs) < "" Nov 23 05:04:44 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:04:44 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:44.981 2 INFO neutron.agent.securitygroups_rpc [None req-f2af9951-cbc0-4ee3-8964-70390f4dbee5 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e183 do_prune osdmap full prune enabled Nov 23 05:04:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e184 e184: 6 total, 6 up, 6 in Nov 23 05:04:45 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e184: 6 total, 6 up, 6 in Nov 23 05:04:45 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:45.074 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:44Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c92e471b-ab3e-48e4-8ff8-e93a2eb8da71, ip_allocation=immediate, mac_address=fa:16:3e:09:63:da, name=tempest-PortsTestJSON-1840185747, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:04:37Z, description=, dns_domain=, id=b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-483674724, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28033, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2597, status=ACTIVE, subnets=['074ea0fa-8301-47b8-a420-c9b3727e786a'], tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:40Z, vlan_transparent=None, network_id=b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['cadd5356-9a8d-419a-ac04-589c2522a695'], standard_attr_id=2613, status=DOWN, tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:44Z on network b63c57d0-1ff8-4f4d-8c80-8dbecd341a54#033[00m Nov 23 05:04:45 localhost podman[322050]: 2025-11-23 10:04:45.306906547 +0000 UTC m=+0.057084651 container kill ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, org.label-schema.schema-version=1.0, tcib_managed=true, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3) Nov 23 05:04:45 localhost dnsmasq[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/addn_hosts - 1 addresses Nov 23 05:04:45 localhost dnsmasq-dhcp[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/host Nov 23 05:04:45 localhost dnsmasq-dhcp[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/opts Nov 23 05:04:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 146 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 10 KiB/s wr, 72 op/s Nov 23 05:04:45 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:45.644 262301 INFO neutron.agent.dhcp.agent [None req-1b9d5be4-e8af-4c23-b50b-8b354552fff1 - - - - - -] DHCP configuration for ports {'c92e471b-ab3e-48e4-8ff8-e93a2eb8da71'} is completed#033[00m Nov 23 05:04:45 localhost podman[322089]: 2025-11-23 10:04:45.798765343 +0000 UTC m=+0.058946908 container kill 086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8ce64d8c-93da-441d-aa9c-4d37e8c489ca, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2) Nov 23 05:04:45 localhost dnsmasq[321302]: exiting on receipt of SIGTERM Nov 23 05:04:45 localhost systemd[1]: libpod-086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919.scope: Deactivated successfully. Nov 23 05:04:45 localhost podman[322103]: 2025-11-23 10:04:45.87455783 +0000 UTC m=+0.063178949 container died 086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8ce64d8c-93da-441d-aa9c-4d37e8c489ca, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:04:45 localhost podman[322103]: 2025-11-23 10:04:45.919151126 +0000 UTC m=+0.107772205 container cleanup 086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8ce64d8c-93da-441d-aa9c-4d37e8c489ca, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:45 localhost systemd[1]: libpod-conmon-086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919.scope: Deactivated successfully. 
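The "Trigger reload_allocations for port ..." entries above and below dump the whole port object, with its network embedded, as one flattened key=value string, which makes them hard to scan. A small, hedged helper for log analysis that pulls a few port-level fields out of such a line; it deliberately ignores the embedded network repr and is not part of any Neutron code.

    import re

    # A few port-level fields; note the embedded network repr has its own
    # 'id', but the port's id appears first, so re.search returns it.
    FIELDS = ('id', 'mac_address', 'network_id', 'device_owner')

    def port_summary(line):
        """Return {field: value} for the fields found in a
        'Trigger reload_allocations for port ...' journal line."""
        return {f: m.group(1)
                for f in FIELDS
                if (m := re.search(r'\b%s=([^,]+)' % f, line))}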
Nov 23 05:04:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:45.955 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:45 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:45.957 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:04:45 localhost podman[322105]: 2025-11-23 10:04:45.959484449 +0000 UTC m=+0.136076447 container remove 086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8ce64d8c-93da-441d-aa9c-4d37e8c489ca, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Nov 23 05:04:45 localhost nova_compute[280939]: 2025-11-23 10:04:45.990 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:46 localhost nova_compute[280939]: 2025-11-23 10:04:46.000 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:46 localhost kernel: device tapacc5f86d-6f left promiscuous mode Nov 23 05:04:46 localhost ovn_controller[153771]: 2025-11-23T10:04:46Z|00311|binding|INFO|Releasing lport acc5f86d-6f7b-418a-a709-1db740660938 from this chassis (sb_readonly=0) Nov 23 05:04:46 localhost ovn_controller[153771]: 2025-11-23T10:04:46Z|00312|binding|INFO|Setting lport acc5f86d-6f7b-418a-a709-1db740660938 down in Southbound Nov 23 05:04:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:46.034 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-8ce64d8c-93da-441d-aa9c-4d37e8c489ca', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ce64d8c-93da-441d-aa9c-4d37e8c489ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8633d61c76748a7a900f3c8cea84ef3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 
'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afb8a427-93b9-4801-bd72-1072a1ff7873, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=acc5f86d-6f7b-418a-a709-1db740660938) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:46.035 159415 INFO neutron.agent.ovn.metadata.agent [-] Port acc5f86d-6f7b-418a-a709-1db740660938 in datapath 8ce64d8c-93da-441d-aa9c-4d37e8c489ca unbound from our chassis#033[00m Nov 23 05:04:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:46.039 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8ce64d8c-93da-441d-aa9c-4d37e8c489ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:04:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:46.040 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[aa622fb1-459b-471d-976a-7bb73fbff7f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:46 localhost nova_compute[280939]: 2025-11-23 10:04:46.064 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:46 localhost nova_compute[280939]: 2025-11-23 10:04:46.067 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e184 do_prune osdmap full prune enabled Nov 23 05:04:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e185 e185: 6 total, 6 up, 6 in Nov 23 05:04:46 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e185: 6 total, 6 up, 6 in Nov 23 05:04:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:46.293 262301 INFO neutron.agent.dhcp.agent [None req-4de2ff07-4b50-4c5a-9815-5b022a86a134 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:46 localhost systemd[1]: var-lib-containers-storage-overlay-fe2e08f5054554d1b324fedecc5d6837f722ff25bf05ca6e6face6988b66898c-merged.mount: Deactivated successfully. Nov 23 05:04:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-086ad39615b59c30b293da8b5fe3b91286275ed24bce81219585d30aa0cde919-userdata-shm.mount: Deactivated successfully. Nov 23 05:04:46 localhost systemd[1]: run-netns-qdhcp\x2d8ce64d8c\x2d93da\x2d441d\x2daa9c\x2d4d37e8c489ca.mount: Deactivated successfully. 
Nov 23 05:04:46 localhost nova_compute[280939]: 2025-11-23 10:04:46.398 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:46.406 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:46 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:46.797 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:47 localhost podman[239764]: time="2025-11-23T10:04:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:04:47 localhost podman[239764]: @ - - [23/Nov/2025:10:04:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159964 "" "Go-http-client/1.1" Nov 23 05:04:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e185 do_prune osdmap full prune enabled Nov 23 05:04:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e186 e186: 6 total, 6 up, 6 in Nov 23 05:04:47 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e186: 6 total, 6 up, 6 in Nov 23 05:04:47 localhost podman[239764]: @ - - [23/Nov/2025:10:04:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20159 "" "Go-http-client/1.1" Nov 23 05:04:47 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4c1787fb-1a00-4288-a97a-0f0da441edd9", "format": "json"}]: dispatch Nov 23 05:04:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4c1787fb-1a00-4288-a97a-0f0da441edd9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:04:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4c1787fb-1a00-4288-a97a-0f0da441edd9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:04:47 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4c1787fb-1a00-4288-a97a-0f0da441edd9' of type subvolume Nov 23 05:04:47 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:47.215+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4c1787fb-1a00-4288-a97a-0f0da441edd9' of type subvolume Nov 23 05:04:47 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4c1787fb-1a00-4288-a97a-0f0da441edd9", "force": true, "format": "json"}]: dispatch Nov 23 05:04:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4c1787fb-1a00-4288-a97a-0f0da441edd9, vol_name:cephfs) < "" Nov 23 05:04:47 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4c1787fb-1a00-4288-a97a-0f0da441edd9'' moved to trashcan Nov 23 05:04:47 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:04:47 localhost ceph-mgr[286671]: [volumes 
INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4c1787fb-1a00-4288-a97a-0f0da441edd9, vol_name:cephfs) < "" Nov 23 05:04:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 146 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 14 KiB/s wr, 98 op/s Nov 23 05:04:47 localhost nova_compute[280939]: 2025-11-23 10:04:47.864 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:04:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e186 do_prune osdmap full prune enabled Nov 23 05:04:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e187 e187: 6 total, 6 up, 6 in Nov 23 05:04:48 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e187: 6 total, 6 up, 6 in Nov 23 05:04:49 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:49.107 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:44Z, description=, device_id=87fce340-d1c6-420e-9c11-ea55cc980e57, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c92e471b-ab3e-48e4-8ff8-e93a2eb8da71, ip_allocation=immediate, mac_address=fa:16:3e:09:63:da, name=tempest-PortsTestJSON-1840185747, network_id=b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['cadd5356-9a8d-419a-ac04-589c2522a695'], standard_attr_id=2613, status=ACTIVE, tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:46Z on network b63c57d0-1ff8-4f4d-8c80-8dbecd341a54#033[00m Nov 23 05:04:49 localhost podman[322149]: 2025-11-23 10:04:49.330238342 +0000 UTC m=+0.058782633 container kill ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:04:49 localhost dnsmasq[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/addn_hosts - 1 addresses Nov 23 05:04:49 localhost dnsmasq-dhcp[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/host Nov 23 05:04:49 localhost dnsmasq-dhcp[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/opts Nov 23 05:04:49 localhost systemd[1]: tmp-crun.dXboOw.mount: Deactivated successfully. 
Nov 23 05:04:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v386: 177 pgs: 177 active+clean; 146 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 15 KiB/s wr, 73 op/s Nov 23 05:04:49 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:49.643 262301 INFO neutron.agent.dhcp.agent [None req-ff929733-b9ac-437a-a42e-8cd35a8ad28a - - - - - -] DHCP configuration for ports {'c92e471b-ab3e-48e4-8ff8-e93a2eb8da71'} is completed#033[00m Nov 23 05:04:49 localhost nova_compute[280939]: 2025-11-23 10:04:49.715 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:50 localhost systemd[1]: tmp-crun.g7yhwr.mount: Deactivated successfully. Nov 23 05:04:50 localhost dnsmasq[320994]: exiting on receipt of SIGTERM Nov 23 05:04:50 localhost podman[322186]: 2025-11-23 10:04:50.0131744 +0000 UTC m=+0.062810287 container kill 0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-98710900-4c46-474d-8df7-759682420b6f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2) Nov 23 05:04:50 localhost systemd[1]: libpod-0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708.scope: Deactivated successfully. Nov 23 05:04:50 localhost podman[322200]: 2025-11-23 10:04:50.077776582 +0000 UTC m=+0.052541251 container died 0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-98710900-4c46-474d-8df7-759682420b6f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 05:04:50 localhost podman[322200]: 2025-11-23 10:04:50.108430817 +0000 UTC m=+0.083195456 container cleanup 0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-98710900-4c46-474d-8df7-759682420b6f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 05:04:50 localhost systemd[1]: libpod-conmon-0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708.scope: Deactivated successfully. 
Nov 23 05:04:50 localhost podman[322202]: 2025-11-23 10:04:50.16397976 +0000 UTC m=+0.129889415 container remove 0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-98710900-4c46-474d-8df7-759682420b6f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:04:50 localhost systemd[1]: var-lib-containers-storage-overlay-0e913db41ba12a4abc916596825206a11e3aa29ee168cd74389b5bcb1878d17e-merged.mount: Deactivated successfully. Nov 23 05:04:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0550ebf2ddbdac164debcf8bb83f76798cc25b25ac1ddfa1991dd1a24fc69708-userdata-shm.mount: Deactivated successfully. Nov 23 05:04:50 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:50.452 262301 INFO neutron.agent.dhcp.agent [None req-d932e885-d3b4-4adf-90ef-dbabb03e2276 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:50 localhost systemd[1]: run-netns-qdhcp\x2d98710900\x2d4c46\x2d474d\x2d8df7\x2d759682420b6f.mount: Deactivated successfully. Nov 23 05:04:50 localhost dnsmasq[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/addn_hosts - 0 addresses Nov 23 05:04:50 localhost podman[322245]: 2025-11-23 10:04:50.474447554 +0000 UTC m=+0.066824662 container kill f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ec797e03-2fed-45fd-bc13-a1751e1f8db9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:04:50 localhost dnsmasq-dhcp[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/host Nov 23 05:04:50 localhost dnsmasq-dhcp[321062]: read /var/lib/neutron/dhcp/ec797e03-2fed-45fd-bc13-a1751e1f8db9/opts Nov 23 05:04:50 localhost systemd[1]: tmp-crun.udkOHJ.mount: Deactivated successfully. 
Nov 23 05:04:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6afacc00-ca40-4edb-aefc-a9b0a3580b7a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:04:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6afacc00-ca40-4edb-aefc-a9b0a3580b7a, vol_name:cephfs) < "" Nov 23 05:04:50 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6afacc00-ca40-4edb-aefc-a9b0a3580b7a/.meta.tmp' Nov 23 05:04:50 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6afacc00-ca40-4edb-aefc-a9b0a3580b7a/.meta.tmp' to config b'/volumes/_nogroup/6afacc00-ca40-4edb-aefc-a9b0a3580b7a/.meta' Nov 23 05:04:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6afacc00-ca40-4edb-aefc-a9b0a3580b7a, vol_name:cephfs) < "" Nov 23 05:04:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6afacc00-ca40-4edb-aefc-a9b0a3580b7a", "format": "json"}]: dispatch Nov 23 05:04:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6afacc00-ca40-4edb-aefc-a9b0a3580b7a, vol_name:cephfs) < "" Nov 23 05:04:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6afacc00-ca40-4edb-aefc-a9b0a3580b7a, vol_name:cephfs) < "" Nov 23 05:04:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:04:50 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:04:50 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:50.652 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:50 localhost nova_compute[280939]: 2025-11-23 10:04:50.700 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:50 localhost ovn_controller[153771]: 2025-11-23T10:04:50Z|00313|binding|INFO|Releasing lport 79f2471d-b1b5-4f3b-a194-04c5e4ea61a2 from this chassis (sb_readonly=0) Nov 23 05:04:50 localhost ovn_controller[153771]: 2025-11-23T10:04:50Z|00314|binding|INFO|Setting lport 79f2471d-b1b5-4f3b-a194-04c5e4ea61a2 down in Southbound Nov 23 05:04:50 localhost kernel: device tap79f2471d-b1 left promiscuous mode Nov 23 05:04:50 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:50.710 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-ec797e03-2fed-45fd-bc13-a1751e1f8db9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec797e03-2fed-45fd-bc13-a1751e1f8db9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27ceae55c44d478998092e7554fd8a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84385636-7359-43cd-8be7-3201761b9734, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=79f2471d-b1b5-4f3b-a194-04c5e4ea61a2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:50 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:50.711 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 79f2471d-b1b5-4f3b-a194-04c5e4ea61a2 in datapath ec797e03-2fed-45fd-bc13-a1751e1f8db9 unbound from our chassis#033[00m Nov 23 05:04:50 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:50.712 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ec797e03-2fed-45fd-bc13-a1751e1f8db9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:04:50 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:50.713 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[71ac63ae-e55b-462c-b6d7-c98bb62a493f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:50 localhost nova_compute[280939]: 2025-11-23 10:04:50.726 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:51 localhost dnsmasq[321062]: exiting on receipt of SIGTERM Nov 23 05:04:51 localhost podman[322286]: 2025-11-23 10:04:51.022044558 +0000 UTC m=+0.059207616 container kill f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ec797e03-2fed-45fd-bc13-a1751e1f8db9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 05:04:51 localhost systemd[1]: libpod-f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0.scope: Deactivated successfully. 
Nov 23 05:04:51 localhost podman[322299]: 2025-11-23 10:04:51.086274759 +0000 UTC m=+0.052744158 container died f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ec797e03-2fed-45fd-bc13-a1751e1f8db9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 23 05:04:51 localhost podman[322299]: 2025-11-23 10:04:51.114651904 +0000 UTC m=+0.081121263 container cleanup f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ec797e03-2fed-45fd-bc13-a1751e1f8db9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3) Nov 23 05:04:51 localhost systemd[1]: libpod-conmon-f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0.scope: Deactivated successfully. Nov 23 05:04:51 localhost podman[322301]: 2025-11-23 10:04:51.16672731 +0000 UTC m=+0.122874820 container remove f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ec797e03-2fed-45fd-bc13-a1751e1f8db9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true) Nov 23 05:04:51 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:51.210 262301 INFO neutron.agent.dhcp.agent [None req-374c36e6-45a2-4f92-bea1-6760c45a1ad1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:51 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:51.229 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:51 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:51.238 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:51 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:51.244 2 INFO neutron.agent.securitygroups_rpc [None req-9f10832a-94d7-4e63-bc11-33234c92ec82 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:04:51 localhost systemd[1]: tmp-crun.FKpayY.mount: Deactivated successfully. Nov 23 05:04:51 localhost systemd[1]: var-lib-containers-storage-overlay-6a2a919793a908292ba803fb646f1099a6b7d53e44cbbfec6c30d541831dee1c-merged.mount: Deactivated successfully. Nov 23 05:04:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9b4a23fd0dce248ff501f04a9a51f9fe57a7069142098e76f22f6686c9c95d0-userdata-shm.mount: Deactivated successfully. 
Nov 23 05:04:51 localhost systemd[1]: run-netns-qdhcp\x2dec797e03\x2d2fed\x2d45fd\x2dbc13\x2da1751e1f8db9.mount: Deactivated successfully. Nov 23 05:04:51 localhost nova_compute[280939]: 2025-11-23 10:04:51.399 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:51 localhost dnsmasq[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/addn_hosts - 0 addresses Nov 23 05:04:51 localhost dnsmasq-dhcp[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/host Nov 23 05:04:51 localhost podman[322343]: 2025-11-23 10:04:51.470489655 +0000 UTC m=+0.043993197 container kill ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:04:51 localhost dnsmasq-dhcp[322032]: read /var/lib/neutron/dhcp/b63c57d0-1ff8-4f4d-8c80-8dbecd341a54/opts Nov 23 05:04:51 localhost nova_compute[280939]: 2025-11-23 10:04:51.484 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 177 active+clean; 146 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 53 op/s Nov 23 05:04:51 localhost nova_compute[280939]: 2025-11-23 10:04:51.637 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:51 localhost ovn_controller[153771]: 2025-11-23T10:04:51Z|00315|binding|INFO|Releasing lport 36694c47-93e6-4045-b9a7-afef939de74e from this chassis (sb_readonly=0) Nov 23 05:04:51 localhost ovn_controller[153771]: 2025-11-23T10:04:51Z|00316|binding|INFO|Setting lport 36694c47-93e6-4045-b9a7-afef939de74e down in Southbound Nov 23 05:04:51 localhost kernel: device tap36694c47-93 left promiscuous mode Nov 23 05:04:51 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:51.648 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, 
additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=713c56e3-969d-4c1d-ab29-b6778ff13864, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=36694c47-93e6-4045-b9a7-afef939de74e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:51 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:51.651 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 36694c47-93e6-4045-b9a7-afef939de74e in datapath b63c57d0-1ff8-4f4d-8c80-8dbecd341a54 unbound from our chassis#033[00m Nov 23 05:04:51 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:51.654 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:04:51 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:51.655 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[6f648531-cac9-40ca-9d3f-0e502a880525]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:51 localhost nova_compute[280939]: 2025-11-23 10:04:51.658 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:52 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8cc6eb2d-54d1-40d5-92a9-2068282def74", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:04:52 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8cc6eb2d-54d1-40d5-92a9-2068282def74, vol_name:cephfs) < "" Nov 23 05:04:52 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8cc6eb2d-54d1-40d5-92a9-2068282def74/.meta.tmp' Nov 23 05:04:52 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8cc6eb2d-54d1-40d5-92a9-2068282def74/.meta.tmp' to config b'/volumes/_nogroup/8cc6eb2d-54d1-40d5-92a9-2068282def74/.meta' Nov 23 05:04:52 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8cc6eb2d-54d1-40d5-92a9-2068282def74, vol_name:cephfs) < "" Nov 23 05:04:52 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8cc6eb2d-54d1-40d5-92a9-2068282def74", "format": "json"}]: dispatch Nov 23 05:04:52 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8cc6eb2d-54d1-40d5-92a9-2068282def74, vol_name:cephfs) < "" Nov 23 05:04:52 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8cc6eb2d-54d1-40d5-92a9-2068282def74, vol_name:cephfs) < "" Nov 23 05:04:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 
handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:04:52 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:04:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:04:52 localhost nova_compute[280939]: 2025-11-23 10:04:52.867 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:52 localhost systemd[1]: tmp-crun.w3pzGD.mount: Deactivated successfully. Nov 23 05:04:52 localhost podman[322366]: 2025-11-23 10:04:52.897065033 +0000 UTC m=+0.084093894 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true) Nov 23 05:04:52 localhost podman[322366]: 2025-11-23 10:04:52.905746711 +0000 UTC m=+0.092775552 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 05:04:52 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:04:52 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:52.959 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:04:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:04:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e187 do_prune osdmap full prune enabled Nov 23 05:04:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e188 e188: 6 total, 6 up, 6 in Nov 23 05:04:53 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e188: 6 total, 6 up, 6 in Nov 23 05:04:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:04:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:04:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:04:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:04:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 23 05:04:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:04:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 146 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 11 KiB/s wr, 50 op/s Nov 23 05:04:53 localhost dnsmasq[322032]: exiting on receipt of SIGTERM Nov 23 05:04:53 localhost podman[322400]: 2025-11-23 10:04:53.539096409 +0000 UTC m=+0.058279238 container kill ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:53 localhost systemd[1]: libpod-ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272.scope: Deactivated successfully. Nov 23 05:04:53 localhost podman[322413]: 2025-11-23 10:04:53.610623714 +0000 UTC m=+0.060098384 container died ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:53 localhost podman[322413]: 2025-11-23 10:04:53.639516206 +0000 UTC m=+0.088990806 container cleanup ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:53 localhost systemd[1]: libpod-conmon-ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272.scope: Deactivated successfully. Nov 23 05:04:53 localhost podman[322420]: 2025-11-23 10:04:53.682523651 +0000 UTC m=+0.118462873 container remove ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b63c57d0-1ff8-4f4d-8c80-8dbecd341a54, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:04:53 localhost systemd[1]: var-lib-containers-storage-overlay-847fe1dc6af0be1f53e90afa9a8d88e7db874e8f13cb78ac28e939af9f791b90-merged.mount: Deactivated successfully. 
Nov 23 05:04:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca38a1b02eead3652db99aaf1b7f28e8f66727861b4c080a113d63131d67c272-userdata-shm.mount: Deactivated successfully. Nov 23 05:04:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:54.005 262301 INFO neutron.agent.dhcp.agent [None req-61d0ac32-16f8-4877-97ce-505fa316bb6f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:54 localhost systemd[1]: run-netns-qdhcp\x2db63c57d0\x2d1ff8\x2d4f4d\x2d8c80\x2d8dbecd341a54.mount: Deactivated successfully. Nov 23 05:04:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:54.006 262301 INFO neutron.agent.dhcp.agent [None req-61d0ac32-16f8-4877-97ce-505fa316bb6f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "6afacc00-ca40-4edb-aefc-a9b0a3580b7a", "new_size": 2147483648, "format": "json"}]: dispatch Nov 23 05:04:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:6afacc00-ca40-4edb-aefc-a9b0a3580b7a, vol_name:cephfs) < "" Nov 23 05:04:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:6afacc00-ca40-4edb-aefc-a9b0a3580b7a, vol_name:cephfs) < "" Nov 23 05:04:54 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:54.248 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:04:54 localhost nova_compute[280939]: 2025-11-23 10:04:54.441 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v390: 177 pgs: 177 active+clean; 146 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 19 KiB/s wr, 43 op/s Nov 23 05:04:56 localhost nova_compute[280939]: 2025-11-23 10:04:56.434 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:56 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:56.561 2 INFO neutron.agent.securitygroups_rpc [None req-b2e8c328-efaf-49d2-9816-397d8d6e979c 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['213f9d65-3629-4053-acee-7e99a128b417']#033[00m Nov 23 05:04:56 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:56.667 262301 INFO neutron.agent.linux.ip_lib [None req-bc389255-7eb8-4f8f-9bc9-15dd20629304 - - - - - -] Device tapb4806d52-f1 cannot be used as it has no MAC address#033[00m Nov 23 05:04:56 localhost nova_compute[280939]: 2025-11-23 10:04:56.691 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:56 localhost kernel: device tapb4806d52-f1 entered promiscuous mode Nov 23 05:04:56 localhost NetworkManager[5966]: [1763892296.7011] manager: (tapb4806d52-f1): new Generic device (/org/freedesktop/NetworkManager/Devices/53) Nov 23 05:04:56 localhost ovn_controller[153771]: 2025-11-23T10:04:56Z|00317|binding|INFO|Claiming lport 
b4806d52-f1ff-4cac-9a58-aa275784b4c6 for this chassis. Nov 23 05:04:56 localhost nova_compute[280939]: 2025-11-23 10:04:56.701 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:56 localhost ovn_controller[153771]: 2025-11-23T10:04:56Z|00318|binding|INFO|b4806d52-f1ff-4cac-9a58-aa275784b4c6: Claiming unknown Nov 23 05:04:56 localhost systemd-udevd[322451]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:04:56 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:56.722 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5143a1d3-f63b-452c-a57f-85c07d0974c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=b4806d52-f1ff-4cac-9a58-aa275784b4c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:56 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:56.725 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b4806d52-f1ff-4cac-9a58-aa275784b4c6 in datapath 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe bound to our chassis#033[00m Nov 23 05:04:56 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:56.727 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port 3f155f82-1518-43d2-a279-cb0b0bdae4b7 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:04:56 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:56.728 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:04:56 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:56.728 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[a8cf4c37-e3f1-41a7-93ae-c1792e34abc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:56 localhost journal[229336]: ethtool ioctl error on tapb4806d52-f1: No such device Nov 23 05:04:56 localhost journal[229336]: ethtool ioctl error on tapb4806d52-f1: No such device Nov 23 05:04:56 localhost nova_compute[280939]: 2025-11-23 10:04:56.743 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:56 localhost ovn_controller[153771]: 2025-11-23T10:04:56Z|00319|binding|INFO|Setting lport b4806d52-f1ff-4cac-9a58-aa275784b4c6 ovn-installed in OVS Nov 23 05:04:56 localhost ovn_controller[153771]: 2025-11-23T10:04:56Z|00320|binding|INFO|Setting lport b4806d52-f1ff-4cac-9a58-aa275784b4c6 up in Southbound Nov 23 05:04:56 localhost journal[229336]: ethtool ioctl error on tapb4806d52-f1: No such device Nov 23 05:04:56 localhost nova_compute[280939]: 2025-11-23 10:04:56.748 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:56 localhost journal[229336]: ethtool ioctl error on tapb4806d52-f1: No such device Nov 23 05:04:56 localhost journal[229336]: ethtool ioctl error on tapb4806d52-f1: No such device Nov 23 05:04:56 localhost journal[229336]: ethtool ioctl error on tapb4806d52-f1: No such device Nov 23 05:04:56 localhost journal[229336]: ethtool ioctl error on tapb4806d52-f1: No such device Nov 23 05:04:56 localhost journal[229336]: ethtool ioctl error on tapb4806d52-f1: No such device Nov 23 05:04:56 localhost nova_compute[280939]: 2025-11-23 10:04:56.782 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:56 localhost nova_compute[280939]: 2025-11-23 10:04:56.813 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6afacc00-ca40-4edb-aefc-a9b0a3580b7a", "format": "json"}]: dispatch Nov 23 05:04:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6afacc00-ca40-4edb-aefc-a9b0a3580b7a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:04:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6afacc00-ca40-4edb-aefc-a9b0a3580b7a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:04:57 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6afacc00-ca40-4edb-aefc-a9b0a3580b7a' of type subvolume Nov 23 05:04:57 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:57.464+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6afacc00-ca40-4edb-aefc-a9b0a3580b7a' of type subvolume Nov 23 05:04:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6afacc00-ca40-4edb-aefc-a9b0a3580b7a", "force": true, "format": "json"}]: dispatch Nov 23 05:04:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6afacc00-ca40-4edb-aefc-a9b0a3580b7a, vol_name:cephfs) < "" Nov 23 05:04:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6afacc00-ca40-4edb-aefc-a9b0a3580b7a'' moved to trashcan Nov 23 05:04:57 localhost ceph-mgr[286671]: [volumes INFO 
volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:04:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6afacc00-ca40-4edb-aefc-a9b0a3580b7a, vol_name:cephfs) < "" Nov 23 05:04:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 146 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 16 KiB/s wr, 37 op/s Nov 23 05:04:57 localhost podman[322522]: Nov 23 05:04:57 localhost podman[322522]: 2025-11-23 10:04:57.664468461 +0000 UTC m=+0.085585590 container create d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:04:57 localhost systemd[1]: Started libpod-conmon-d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34.scope. Nov 23 05:04:57 localhost podman[322522]: 2025-11-23 10:04:57.623453487 +0000 UTC m=+0.044570676 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:04:57 localhost systemd[1]: Started libcrun container. Nov 23 05:04:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23585fffbb4775c3ee3ef49feae7b5a8300d15b0294596abf7b619a25d751a91/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:04:57 localhost podman[322522]: 2025-11-23 10:04:57.738642218 +0000 UTC m=+0.159759367 container init d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:04:57 localhost podman[322522]: 2025-11-23 10:04:57.747038167 +0000 UTC m=+0.168155316 container start d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:57 localhost dnsmasq[322541]: started, version 2.85 cachesize 150 Nov 23 05:04:57 localhost dnsmasq[322541]: DNS service limited to local subnets Nov 23 05:04:57 localhost dnsmasq[322541]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:04:57 localhost dnsmasq[322541]: warning: no upstream 
servers configured Nov 23 05:04:57 localhost dnsmasq-dhcp[322541]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:04:57 localhost dnsmasq[322541]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/addn_hosts - 0 addresses Nov 23 05:04:57 localhost dnsmasq-dhcp[322541]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/host Nov 23 05:04:57 localhost dnsmasq-dhcp[322541]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/opts Nov 23 05:04:57 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:57.771 2 INFO neutron.agent.securitygroups_rpc [None req-d7ea7df2-10a0-4360-bfff-447d012be880 6f11688a49fb4deba83327b1cf6539b4 02d402d01a514bbd8ec5543d8bb9b97c - - default default] Security group rule updated ['76c5df30-fcbd-4316-84a0-0d549c3af78d']#033[00m Nov 23 05:04:57 localhost nova_compute[280939]: 2025-11-23 10:04:57.908 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:04:58 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:58.014 262301 INFO neutron.agent.dhcp.agent [None req-b559df3c-939d-483a-b297-9d92e20681fa - - - - - -] DHCP configuration for ports {'d5a5bf10-b94f-4270-9b8b-f5b33fff78ea', '8a35e26b-9b4b-466d-abd2-63f4f475d1c8'} is completed#033[00m Nov 23 05:04:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:04:58 localhost dnsmasq[322541]: exiting on receipt of SIGTERM Nov 23 05:04:58 localhost podman[322559]: 2025-11-23 10:04:58.143017637 +0000 UTC m=+0.056803623 container kill d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:04:58 localhost systemd[1]: libpod-d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34.scope: Deactivated successfully. 
Nov 23 05:04:58 localhost podman[322571]: 2025-11-23 10:04:58.210160687 +0000 UTC m=+0.055444421 container died d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:04:58 localhost podman[322571]: 2025-11-23 10:04:58.242597818 +0000 UTC m=+0.087881522 container cleanup d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 05:04:58 localhost systemd[1]: libpod-conmon-d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34.scope: Deactivated successfully. Nov 23 05:04:58 localhost podman[322573]: 2025-11-23 10:04:58.295020373 +0000 UTC m=+0.131635370 container remove d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 05:04:58 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:58.583 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:d7:84 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5143a1d3-f63b-452c-a57f-85c07d0974c0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d5a5bf10-b94f-4270-9b8b-f5b33fff78ea) old=Port_Binding(mac=['fa:16:3e:bb:d7:84 10.100.0.2'], external_ids={'neutron:cidrs': 
'10.100.0.2/28', 'neutron:device_id': 'ovnmeta-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:04:58 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:58.585 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d5a5bf10-b94f-4270-9b8b-f5b33fff78ea in datapath 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe updated#033[00m Nov 23 05:04:58 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:58.587 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port 3f155f82-1518-43d2-a279-cb0b0bdae4b7 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:04:58 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:58.588 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:04:58 localhost ovn_metadata_agent[159410]: 2025-11-23 10:04:58.588 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[15a0ff8a-0d7d-40c3-85b0-cfe33a53a114]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:04:58 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8cc6eb2d-54d1-40d5-92a9-2068282def74", "format": "json"}]: dispatch Nov 23 05:04:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8cc6eb2d-54d1-40d5-92a9-2068282def74, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:04:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8cc6eb2d-54d1-40d5-92a9-2068282def74, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:04:58 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8cc6eb2d-54d1-40d5-92a9-2068282def74' of type subvolume Nov 23 05:04:58 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:04:58.644+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8cc6eb2d-54d1-40d5-92a9-2068282def74' of type subvolume Nov 23 05:04:58 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8cc6eb2d-54d1-40d5-92a9-2068282def74", "force": true, "format": "json"}]: dispatch Nov 23 05:04:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8cc6eb2d-54d1-40d5-92a9-2068282def74, vol_name:cephfs) < "" Nov 23 05:04:58 localhost ceph-mgr[286671]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8cc6eb2d-54d1-40d5-92a9-2068282def74'' moved to trashcan Nov 23 05:04:58 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:04:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8cc6eb2d-54d1-40d5-92a9-2068282def74, vol_name:cephfs) < "" Nov 23 05:04:58 localhost systemd[1]: var-lib-containers-storage-overlay-23585fffbb4775c3ee3ef49feae7b5a8300d15b0294596abf7b619a25d751a91-merged.mount: Deactivated successfully. Nov 23 05:04:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8d859279bdce83750f94900ae9655296551e1ce7f0063b528fe221b979e6e34-userdata-shm.mount: Deactivated successfully. Nov 23 05:04:59 localhost neutron_sriov_agent[255165]: 2025-11-23 10:04:59.307 2 INFO neutron.agent.securitygroups_rpc [None req-2900c02d-4bae-4668-a3d0-a31f6942bf81 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['213f9d65-3629-4053-acee-7e99a128b417', '05c9de82-0c74-49cb-8524-43dd3dd47f37']#033[00m Nov 23 05:04:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 146 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 12 op/s Nov 23 05:04:59 localhost podman[322648]: Nov 23 05:04:59 localhost podman[322648]: 2025-11-23 10:04:59.668235406 +0000 UTC m=+0.086446307 container create dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 23 05:04:59 localhost systemd[1]: Started libpod-conmon-dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708.scope. Nov 23 05:04:59 localhost systemd[1]: Started libcrun container. 
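The Port_Binding update that ovn_metadata_agent matched above can also be inspected directly in the OVN southbound database. A minimal sketch, assuming ovn-sbctl is installed and can reach the SB DB (for example from inside the ovn_controller container, or via an explicit --db URL); the logical_port UUID is copied from the log entry above:

import json
import subprocess

LOGICAL_PORT = "d5a5bf10-b94f-4270-9b8b-f5b33fff78ea"  # metadata port UUID from the entry above

# Query the Port_Binding row the agent matched. How ovn-sbctl reaches the
# southbound DB (socket path or --db URL) is deployment-specific and assumed here.
out = subprocess.run(
    ["ovn-sbctl", "--format=json", "find", "Port_Binding",
     f"logical_port={LOGICAL_PORT}"],
    capture_output=True, text=True, check=True,
).stdout
table = json.loads(out)
for row in table["data"]:
    # Columns come back in the order listed under "headings".
    print(dict(zip(table["headings"], row)))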
Nov 23 05:04:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a691b16054a56f8058fa767e3b7a69fa0733b8b05402bc481cc51fcc66fe5b33/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:04:59 localhost podman[322648]: 2025-11-23 10:04:59.627214921 +0000 UTC m=+0.045425852 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:04:59 localhost podman[322648]: 2025-11-23 10:04:59.727818523 +0000 UTC m=+0.146029434 container init dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:04:59 localhost podman[322648]: 2025-11-23 10:04:59.737721418 +0000 UTC m=+0.155932319 container start dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 05:04:59 localhost dnsmasq[322667]: started, version 2.85 cachesize 150 Nov 23 05:04:59 localhost dnsmasq[322667]: DNS service limited to local subnets Nov 23 05:04:59 localhost dnsmasq[322667]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:04:59 localhost dnsmasq[322667]: warning: no upstream servers configured Nov 23 05:04:59 localhost dnsmasq-dhcp[322667]: DHCP, static leases only on 10.100.0.16, lease time 1d Nov 23 05:04:59 localhost dnsmasq-dhcp[322667]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:04:59 localhost dnsmasq[322667]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/addn_hosts - 1 addresses Nov 23 05:04:59 localhost dnsmasq-dhcp[322667]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/host Nov 23 05:04:59 localhost dnsmasq-dhcp[322667]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/opts Nov 23 05:04:59 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:04:59.790 262301 INFO neutron.agent.dhcp.agent [None req-e8fb2f64-a238-44b9-bdae-d08055a5bbe1 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:56Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ba0cd4ce-0f79-432e-aebc-f006e1cb60a3, ip_allocation=immediate, mac_address=fa:16:3e:22:ef:c8, name=tempest-PortsTestJSON-405037892, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:39Z, description=, 
dns_domain=, id=76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-1134627380, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2094, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2324, status=ACTIVE, subnets=['0a40b8e4-3f91-48e8-a742-c9f301f76fb7'], tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:54Z, vlan_transparent=None, network_id=76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['213f9d65-3629-4053-acee-7e99a128b417'], standard_attr_id=2649, status=DOWN, tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:56Z on network 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe#033[00m Nov 23 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:05:00 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:00.278 2 INFO neutron.agent.securitygroups_rpc [None req-31700e8d-00a6-42b0-834e-1388eab5f28c 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['05c9de82-0c74-49cb-8524-43dd3dd47f37']#033[00m Nov 23 05:05:00 localhost podman[322681]: 2025-11-23 10:05:00.324565143 +0000 UTC m=+0.094917738 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd) Nov 23 05:05:00 localhost podman[322682]: 2025-11-23 10:05:00.379815227 +0000 UTC m=+0.146600912 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:05:00 localhost podman[322682]: 2025-11-23 10:05:00.391407114 +0000 UTC m=+0.158192839 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:05:00 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
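The node_exporter healthcheck above also implies the exporter is reachable on host port 9100 (per the 'ports': ['9100:9100'] mapping in its config_data). A quick sketch of scraping it, assuming the default /metrics path and that the port is open locally:

import urllib.request

# Host port 9100 is published by the node_exporter container above.
URL = "http://localhost:9100/metrics"

with urllib.request.urlopen(URL, timeout=5) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("node_"):  # keep node_* series, skip HELP/TYPE comment lines
            print(line)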
Nov 23 05:05:00 localhost podman[322681]: 2025-11-23 10:05:00.408407029 +0000 UTC m=+0.178759664 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true) Nov 23 05:05:00 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
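The transient systemd units logged above ("Started /usr/bin/podman healthcheck run <id>") simply execute the container's configured healthcheck on a timer; the same check can be run by hand. A sketch, assuming access to the same podman storage (root here) and using the multipathd container name from the log:

import subprocess

# Same invocation the transient systemd units make, addressed by container name.
result = subprocess.run(
    ["podman", "healthcheck", "run", "multipathd"],
    capture_output=True, text=True,
)
# Exit status 0 means the configured check passed (reported as
# health_status=healthy in the podman events above); non-zero means the
# check failed or the container has no healthcheck defined.
print("healthy" if result.returncode == 0
      else (result.stdout or result.stderr).strip() or "unhealthy")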
Nov 23 05:05:00 localhost dnsmasq[322667]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/addn_hosts - 1 addresses Nov 23 05:05:00 localhost dnsmasq-dhcp[322667]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/host Nov 23 05:05:00 localhost dnsmasq-dhcp[322667]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/opts Nov 23 05:05:00 localhost podman[322712]: 2025-11-23 10:05:00.445620036 +0000 UTC m=+0.156794916 container kill dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:05:00 localhost podman[322683]: 2025-11-23 10:05:00.486716583 +0000 UTC m=+0.244941634 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:05:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:00.549 262301 INFO neutron.agent.dhcp.agent [None req-bd51d817-455a-4063-afe1-3a65e5afffc5 - - - - - -] DHCP configuration for ports {'ba0cd4ce-0f79-432e-aebc-f006e1cb60a3', 'b4806d52-f1ff-4cac-9a58-aa275784b4c6', 'd5a5bf10-b94f-4270-9b8b-f5b33fff78ea', '8a35e26b-9b4b-466d-abd2-63f4f475d1c8'} is completed#033[00m Nov 23 05:05:00 localhost podman[322683]: 2025-11-23 10:05:00.596496468 +0000 UTC m=+0.354721479 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=ovn_controller) Nov 23 05:05:00 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:05:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:00.639 262301 INFO neutron.agent.dhcp.agent [None req-fa09ee35-4233-4206-ab20-b044119b3718 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:56Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ba0cd4ce-0f79-432e-aebc-f006e1cb60a3, ip_allocation=immediate, mac_address=fa:16:3e:22:ef:c8, name=tempest-PortsTestJSON-514602532, network_id=76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['05c9de82-0c74-49cb-8524-43dd3dd47f37'], standard_attr_id=2649, status=DOWN, tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:04:59Z on network 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe#033[00m Nov 23 05:05:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:00.643 262301 INFO oslo.privsep.daemon [None req-fa09ee35-4233-4206-ab20-b044119b3718 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.dhcp_release_cmd', '--privsep_sock_path', '/tmp/tmpkcfforaa/privsep.sock']#033[00m Nov 23 05:05:00 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:00.663 262301 INFO neutron.agent.dhcp.agent [None req-273e6be4-1c9f-44b0-8790-85dffd05d744 - - - - - -] DHCP configuration for ports {'ba0cd4ce-0f79-432e-aebc-f006e1cb60a3'} is completed#033[00m Nov 23 05:05:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9f8bde9a-0660-4024-80ab-8799aae7b4e4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:05:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f8bde9a-0660-4024-80ab-8799aae7b4e4, vol_name:cephfs) < "" Nov 23 05:05:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/9f8bde9a-0660-4024-80ab-8799aae7b4e4/.meta.tmp' Nov 23 05:05:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9f8bde9a-0660-4024-80ab-8799aae7b4e4/.meta.tmp' to config b'/volumes/_nogroup/9f8bde9a-0660-4024-80ab-8799aae7b4e4/.meta' Nov 23 05:05:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f8bde9a-0660-4024-80ab-8799aae7b4e4, vol_name:cephfs) < "" Nov 23 05:05:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9f8bde9a-0660-4024-80ab-8799aae7b4e4", "format": "json"}]: dispatch Nov 23 05:05:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f8bde9a-0660-4024-80ab-8799aae7b4e4, vol_name:cephfs) < "" Nov 23 05:05:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f8bde9a-0660-4024-80ab-8799aae7b4e4, vol_name:cephfs) < "" Nov 23 05:05:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:05:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:05:01 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:01.258 262301 INFO oslo.privsep.daemon [None req-fa09ee35-4233-4206-ab20-b044119b3718 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Nov 23 05:05:01 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:01.150 322775 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 23 05:05:01 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:01.155 322775 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 23 05:05:01 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:01.158 322775 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Nov 23 05:05:01 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:01.159 322775 INFO oslo.privsep.daemon [-] privsep daemon running as pid 322775#033[00m Nov 23 05:05:01 localhost nova_compute[280939]: 2025-11-23 10:05:01.437 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 146 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 12 op/s Nov 23 05:05:01 localhost dnsmasq-dhcp[322667]: DHCPRELEASE(tapb4806d52-f1) 10.100.0.4 fa:16:3e:22:ef:c8 Nov 23 05:05:02 localhost dnsmasq[322667]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/addn_hosts - 1 addresses Nov 23 05:05:02 localhost dnsmasq-dhcp[322667]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/host Nov 23 05:05:02 localhost dnsmasq-dhcp[322667]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/opts Nov 23 05:05:02 localhost podman[322796]: 2025-11-23 10:05:02.140101454 +0000 UTC 
m=+0.056802192 container kill dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:05:02 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:02.413 262301 INFO neutron.agent.dhcp.agent [None req-a7b5a524-3a0d-4ebc-8161-d5df94799969 - - - - - -] DHCP configuration for ports {'ba0cd4ce-0f79-432e-aebc-f006e1cb60a3'} is completed#033[00m Nov 23 05:05:02 localhost dnsmasq[322667]: exiting on receipt of SIGTERM Nov 23 05:05:02 localhost systemd[1]: tmp-crun.phx4Un.mount: Deactivated successfully. Nov 23 05:05:02 localhost podman[322835]: 2025-11-23 10:05:02.69615914 +0000 UTC m=+0.063651354 container kill dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true) Nov 23 05:05:02 localhost systemd[1]: libpod-dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708.scope: Deactivated successfully. Nov 23 05:05:02 localhost podman[322849]: 2025-11-23 10:05:02.767077967 +0000 UTC m=+0.054437001 container died dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:05:02 localhost podman[322849]: 2025-11-23 10:05:02.794771441 +0000 UTC m=+0.082130425 container cleanup dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:05:02 localhost systemd[1]: libpod-conmon-dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708.scope: Deactivated successfully. 
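The pattern in the entries above — a podman "container kill" on the qdhcp dnsmasq container followed by dnsmasq re-reading addn_hosts/host/opts, versus "exiting on receipt of SIGTERM" before the container is recreated — is consistent with the DHCP agent signalling dnsmasq for light reloads and tearing it down for bigger changes. A sketch of the reload half, assuming podman can address the container by name; the exact signal the agent sends is an assumption, not taken from its source:

import subprocess

NETWORK_ID = "76e6f4ab-630a-4c73-a560-1e6a5fffbdbe"
CONTAINER = f"neutron-dnsmasq-qdhcp-{NETWORK_ID}"

# HUP makes dnsmasq re-read its addn_hosts / dhcp-hostsfile / dhcp-optsfile
# without dropping leases, which matches the "read .../addn_hosts" lines that
# follow several "container kill" events above; the final teardown instead
# delivered SIGTERM ("exiting on receipt of SIGTERM").
subprocess.run(["podman", "kill", "--signal", "HUP", CONTAINER], check=True)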
Nov 23 05:05:02 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:02.803 2 INFO neutron.agent.securitygroups_rpc [None req-2a25adbb-406f-4488-9290-d86f8fa25b90 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['83b9eb37-6d54-417f-b8aa-c3bd6525a15a']#033[00m Nov 23 05:05:02 localhost podman[322850]: 2025-11-23 10:05:02.845060351 +0000 UTC m=+0.124338765 container remove dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 05:05:02 localhost nova_compute[280939]: 2025-11-23 10:05:02.943 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:03 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:03.111 2 INFO neutron.agent.securitygroups_rpc [None req-48af5a2d-b1ea-400e-a467-c239dff497de 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['83b9eb37-6d54-417f-b8aa-c3bd6525a15a']#033[00m Nov 23 05:05:03 localhost systemd[1]: var-lib-containers-storage-overlay-a691b16054a56f8058fa767e3b7a69fa0733b8b05402bc481cc51fcc66fe5b33-merged.mount: Deactivated successfully. Nov 23 05:05:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd7ee59453a6dee2a2439a823ae247374df9fd96660584148fb39b3d9feff708-userdata-shm.mount: Deactivated successfully. Nov 23 05:05:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 146 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 12 op/s Nov 23 05:05:03 localhost podman[322926]: Nov 23 05:05:03 localhost podman[322926]: 2025-11-23 10:05:03.689790577 +0000 UTC m=+0.085675522 container create c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true) Nov 23 05:05:03 localhost systemd[1]: Started libpod-conmon-c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd.scope. Nov 23 05:05:03 localhost systemd[1]: Started libcrun container. 
Nov 23 05:05:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e577374ed0e8f89de09ec30bda0e47ff123e4bb1117769289dc0de074640be6c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:05:03 localhost podman[322926]: 2025-11-23 10:05:03.647620987 +0000 UTC m=+0.043505972 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:05:03 localhost podman[322926]: 2025-11-23 10:05:03.752295266 +0000 UTC m=+0.148180201 container init c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:05:03 localhost podman[322926]: 2025-11-23 10:05:03.762466438 +0000 UTC m=+0.158351373 container start c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:03 localhost dnsmasq[322944]: started, version 2.85 cachesize 150 Nov 23 05:05:03 localhost dnsmasq[322944]: DNS service limited to local subnets Nov 23 05:05:03 localhost dnsmasq[322944]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:05:03 localhost dnsmasq[322944]: warning: no upstream servers configured Nov 23 05:05:03 localhost dnsmasq-dhcp[322944]: DHCP, static leases only on 10.100.0.16, lease time 1d Nov 23 05:05:03 localhost dnsmasq[322944]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/addn_hosts - 0 addresses Nov 23 05:05:03 localhost dnsmasq-dhcp[322944]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/host Nov 23 05:05:03 localhost dnsmasq-dhcp[322944]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/opts Nov 23 05:05:04 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:04.070 262301 INFO neutron.agent.dhcp.agent [None req-cdf9057d-5278-4292-a15d-0a63c101d6c1 - - - - - -] DHCP configuration for ports {'b4806d52-f1ff-4cac-9a58-aa275784b4c6', 'd5a5bf10-b94f-4270-9b8b-f5b33fff78ea', '8a35e26b-9b4b-466d-abd2-63f4f475d1c8'} is completed#033[00m Nov 23 05:05:04 localhost dnsmasq[322944]: exiting on receipt of SIGTERM Nov 23 05:05:04 localhost podman[322962]: 2025-11-23 10:05:04.081884297 +0000 UTC m=+0.049429014 container kill c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:04 localhost systemd[1]: libpod-c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd.scope: Deactivated successfully. Nov 23 05:05:04 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9f8bde9a-0660-4024-80ab-8799aae7b4e4", "format": "json"}]: dispatch Nov 23 05:05:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9f8bde9a-0660-4024-80ab-8799aae7b4e4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9f8bde9a-0660-4024-80ab-8799aae7b4e4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:04 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f8bde9a-0660-4024-80ab-8799aae7b4e4' of type subvolume Nov 23 05:05:04 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:05:04.137+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f8bde9a-0660-4024-80ab-8799aae7b4e4' of type subvolume Nov 23 05:05:04 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9f8bde9a-0660-4024-80ab-8799aae7b4e4", "force": true, "format": "json"}]: dispatch Nov 23 05:05:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f8bde9a-0660-4024-80ab-8799aae7b4e4, vol_name:cephfs) < "" Nov 23 05:05:04 localhost podman[322976]: 2025-11-23 10:05:04.154920259 +0000 UTC m=+0.061111805 container died c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 05:05:04 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9f8bde9a-0660-4024-80ab-8799aae7b4e4'' moved to trashcan Nov 23 05:05:04 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:05:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f8bde9a-0660-4024-80ab-8799aae7b4e4, vol_name:cephfs) < "" Nov 23 05:05:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd-userdata-shm.mount: Deactivated successfully. 
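The ceph-mgr audit entries above (fs subvolume create / getpath / clone status / subvolume rm) have direct ceph CLI equivalents. A sketch of the same sequence, assuming a usable ceph.conf and keyring comparable to the client.openstack identity in the log; note that "fs clone status" on a plain subvolume fails with EOPNOTSUPP (95), which is exactly the "not allowed on subvolume ... of type subvolume" error logged:

import subprocess

SUB = "9f8bde9a-0660-4024-80ab-8799aae7b4e4"  # subvolume name from the audit entries above

def ceph(*args):
    # Thin wrapper over the ceph CLI; credentials/config are assumed to be in place.
    return subprocess.run(["ceph", *args, "--format", "json"],
                          capture_output=True, text=True)

ceph("fs", "subvolume", "create", "cephfs", SUB,
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755")
print(ceph("fs", "subvolume", "getpath", "cephfs", SUB).stdout.strip())
status = ceph("fs", "clone", "status", "cephfs", SUB)
print(status.returncode, status.stderr.strip())  # EOPNOTSUPP (95) on a plain subvolume
ceph("fs", "subvolume", "rm", "cephfs", SUB, "--force")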
Nov 23 05:05:04 localhost podman[322976]: 2025-11-23 10:05:04.191411094 +0000 UTC m=+0.097602610 container cleanup c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:04 localhost systemd[1]: libpod-conmon-c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd.scope: Deactivated successfully. Nov 23 05:05:04 localhost podman[322983]: 2025-11-23 10:05:04.264177228 +0000 UTC m=+0.153104881 container remove c6cc73cfce98ba6f449ff289b2065a701a35edaf9956566e0a539cf0cb3265cd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 05:05:04 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:04.273 2 INFO neutron.agent.securitygroups_rpc [None req-068f4094-be4e-499c-ac72-326e6af4f870 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m Nov 23 05:05:04 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:04.537 2 INFO neutron.agent.securitygroups_rpc [None req-5e059f36-e08b-42c4-9c06-8e7d2c8a7a35 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m Nov 23 05:05:04 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:04.738 2 INFO neutron.agent.securitygroups_rpc [None req-4232445f-3fdb-4ab4-af76-a3c636901057 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m Nov 23 05:05:04 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:04.974 2 INFO neutron.agent.securitygroups_rpc [None req-bdf55031-0050-45a1-bc2b-6230ff544fa3 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m Nov 23 05:05:05 localhost systemd[1]: var-lib-containers-storage-overlay-e577374ed0e8f89de09ec30bda0e47ff123e4bb1117769289dc0de074640be6c-merged.mount: Deactivated successfully. 
Nov 23 05:05:05 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:05.146 2 INFO neutron.agent.securitygroups_rpc [None req-efdb9984-7352-4eb4-bfb4-32226524bf47 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m Nov 23 05:05:05 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:05.485 2 INFO neutron.agent.securitygroups_rpc [None req-2d5ed31f-c175-4e95-8ecb-2dfb6c38fae5 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m Nov 23 05:05:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 192 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 46 op/s Nov 23 05:05:05 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:05.542 2 INFO neutron.agent.securitygroups_rpc [None req-f12b045a-e1e3-435c-8190-d496fdcf5f2d 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['d1cc26af-765b-45fa-b447-8d13d7399069']#033[00m Nov 23 05:05:05 localhost podman[323050]: Nov 23 05:05:05 localhost podman[323050]: 2025-11-23 10:05:05.584018565 +0000 UTC m=+0.092061819 container create 1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:05 localhost systemd[1]: Started libpod-conmon-1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112.scope. Nov 23 05:05:05 localhost podman[323050]: 2025-11-23 10:05:05.537405188 +0000 UTC m=+0.045448432 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:05:05 localhost systemd[1]: Started libcrun container. 
Nov 23 05:05:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200cd87fe250a6de5b7548bd0eaf94d9188c35cecb44e046a675ea7c0052b822/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:05:05 localhost podman[323050]: 2025-11-23 10:05:05.654571571 +0000 UTC m=+0.162614795 container init 1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:05:05 localhost podman[323050]: 2025-11-23 10:05:05.662778884 +0000 UTC m=+0.170822098 container start 1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3) Nov 23 05:05:05 localhost dnsmasq[323068]: started, version 2.85 cachesize 150 Nov 23 05:05:05 localhost dnsmasq[323068]: DNS service limited to local subnets Nov 23 05:05:05 localhost dnsmasq[323068]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:05:05 localhost dnsmasq[323068]: warning: no upstream servers configured Nov 23 05:05:05 localhost dnsmasq-dhcp[323068]: DHCP, static leases only on 10.100.0.16, lease time 1d Nov 23 05:05:05 localhost dnsmasq-dhcp[323068]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:05:05 localhost dnsmasq[323068]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/addn_hosts - 0 addresses Nov 23 05:05:05 localhost dnsmasq-dhcp[323068]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/host Nov 23 05:05:05 localhost dnsmasq-dhcp[323068]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/opts Nov 23 05:05:05 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:05.720 262301 INFO neutron.agent.dhcp.agent [None req-d856b3e5-bf8e-4f83-adb8-b6374ca12e40 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:05:05Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=54216e39-b156-483a-a7e0-1ebbdb12d489, ip_allocation=immediate, mac_address=fa:16:3e:65:53:eb, name=tempest-PortsTestJSON-41468692, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:39Z, description=, dns_domain=, id=76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-1134627380, 
port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2094, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=2324, status=ACTIVE, subnets=['0eb61080-2ba7-4dee-aa1d-4e4c2145da92', '80ba7135-9996-4e39-9ad4-76c0dbb1ee5d'], tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:05:02Z, vlan_transparent=None, network_id=76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d1cc26af-765b-45fa-b447-8d13d7399069'], standard_attr_id=2732, status=DOWN, tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:05:05Z on network 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe#033[00m Nov 23 05:05:05 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:05.928 262301 INFO neutron.agent.dhcp.agent [None req-28766ae3-590d-445b-be26-2315d13d7162 - - - - - -] DHCP configuration for ports {'b4806d52-f1ff-4cac-9a58-aa275784b4c6', 'd5a5bf10-b94f-4270-9b8b-f5b33fff78ea', '8a35e26b-9b4b-466d-abd2-63f4f475d1c8'} is completed#033[00m Nov 23 05:05:05 localhost dnsmasq[323068]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/addn_hosts - 1 addresses Nov 23 05:05:05 localhost dnsmasq-dhcp[323068]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/host Nov 23 05:05:05 localhost dnsmasq-dhcp[323068]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/opts Nov 23 05:05:05 localhost podman[323086]: 2025-11-23 10:05:05.965622042 +0000 UTC m=+0.060308741 container kill 1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2) Nov 23 05:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:05:06 localhost systemd[1]: tmp-crun.KBLtHX.mount: Deactivated successfully. Nov 23 05:05:06 localhost podman[323103]: 2025-11-23 10:05:06.155057792 +0000 UTC m=+0.087249891 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 05:05:06 localhost podman[323103]: 2025-11-23 10:05:06.194377965 +0000 UTC m=+0.126570024 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64) Nov 23 05:05:06 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 05:05:06 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:06.207 262301 INFO neutron.agent.dhcp.agent [None req-2ba3bf87-1055-4acc-b34f-0617de654509 - - - - - -] DHCP configuration for ports {'54216e39-b156-483a-a7e0-1ebbdb12d489'} is completed#033[00m Nov 23 05:05:06 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:06.326 2 INFO neutron.agent.securitygroups_rpc [None req-b7a801b4-8c85-482c-af67-8b6642b94666 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m Nov 23 05:05:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e188 do_prune osdmap full prune enabled Nov 23 05:05:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e189 e189: 6 total, 6 up, 6 in Nov 23 05:05:06 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e189: 6 total, 6 up, 6 in Nov 23 05:05:06 localhost nova_compute[280939]: 2025-11-23 10:05:06.481 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:06 localhost openstack_network_exporter[241732]: ERROR 10:05:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:05:06 localhost openstack_network_exporter[241732]: ERROR 10:05:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:05:06 localhost openstack_network_exporter[241732]: ERROR 10:05:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:05:06 localhost openstack_network_exporter[241732]: ERROR 10:05:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:05:06 localhost openstack_network_exporter[241732]: Nov 23 05:05:06 localhost openstack_network_exporter[241732]: ERROR 10:05:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:05:06 localhost openstack_network_exporter[241732]: Nov 23 05:05:06 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:06.862 2 INFO neutron.agent.securitygroups_rpc [None req-0eca1fa9-2850-4efd-aa96-731301b95192 
131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m Nov 23 05:05:07 localhost dnsmasq[323068]: exiting on receipt of SIGTERM Nov 23 05:05:07 localhost systemd[1]: libpod-1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112.scope: Deactivated successfully. Nov 23 05:05:07 localhost podman[323140]: 2025-11-23 10:05:07.018092483 +0000 UTC m=+0.061331202 container kill 1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:05:07 localhost podman[323154]: 2025-11-23 10:05:07.085060688 +0000 UTC m=+0.054136130 container died 1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:07 localhost podman[323154]: 2025-11-23 10:05:07.116561859 +0000 UTC m=+0.085637251 container cleanup 1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:05:07 localhost systemd[1]: libpod-conmon-1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112.scope: Deactivated successfully. Nov 23 05:05:07 localhost systemd[1]: var-lib-containers-storage-overlay-200cd87fe250a6de5b7548bd0eaf94d9188c35cecb44e046a675ea7c0052b822-merged.mount: Deactivated successfully. Nov 23 05:05:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112-userdata-shm.mount: Deactivated successfully. 
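The config_data dictionaries recorded in the container labels above are the definitions that edpm_ansible feeds to podman. As a rough illustration only (not the role's real code path), the sketch below renders such a dict into an approximate podman run invocation; the 'recreate' and 'healthcheck' keys are handled by the deployment tooling rather than by a single podman flag.

    # Sketch: render an edpm-style config_data dict (as logged above) into an
    # approximate "podman run" command line for inspection. Illustrative only:
    # the real edpm_ansible role manages the container itself, and 'recreate'
    # and 'healthcheck' do not map to single podman flags.
    import shlex

    config_data = {
        'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:'
                 'ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7',
        'restart': 'always',
        'privileged': True,
        'ports': ['9105:9105'],
        'net': 'host',
        'environment': {'OS_ENDPOINT_TYPE': 'internal'},
        'volumes': ['/proc:/host/proc:ro'],
        'command': [],
    }

    def to_podman_run(name, cfg):
        cmd = ['podman', 'run', '--detach', '--name', name]
        if cfg.get('restart'):
            cmd += ['--restart', cfg['restart']]
        if cfg.get('privileged'):
            cmd.append('--privileged')
        if cfg.get('net'):
            cmd += ['--network', cfg['net']]
        for port in cfg.get('ports', []):
            cmd += ['--publish', port]
        for key, value in cfg.get('environment', {}).items():
            cmd += ['--env', f'{key}={value}']
        for volume in cfg.get('volumes', []):
            cmd += ['--volume', volume]
        cmd.append(cfg['image'])
        cmd += cfg.get('command', [])
        return ' '.join(shlex.quote(part) for part in cmd)

    print(to_podman_run('openstack_network_exporter', config_data))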
Nov 23 05:05:07 localhost podman[323156]: 2025-11-23 10:05:07.16197632 +0000 UTC m=+0.124882412 container remove 1e029786db45b95eb44aca342c3c7eb5415a25fef006abe637b7d1c8a84b8112 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:05:07 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:07.203 2 INFO neutron.agent.securitygroups_rpc [None req-b76074b6-0a53-43ed-86f9-a06ad1dd7bfb 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m Nov 23 05:05:07 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e662a31b-8c47-43e3-92e1-6d96aeb393b7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:05:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e662a31b-8c47-43e3-92e1-6d96aeb393b7, vol_name:cephfs) < "" Nov 23 05:05:07 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e662a31b-8c47-43e3-92e1-6d96aeb393b7/.meta.tmp' Nov 23 05:05:07 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e662a31b-8c47-43e3-92e1-6d96aeb393b7/.meta.tmp' to config b'/volumes/_nogroup/e662a31b-8c47-43e3-92e1-6d96aeb393b7/.meta' Nov 23 05:05:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e662a31b-8c47-43e3-92e1-6d96aeb393b7, vol_name:cephfs) < "" Nov 23 05:05:07 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e662a31b-8c47-43e3-92e1-6d96aeb393b7", "format": "json"}]: dispatch Nov 23 05:05:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e662a31b-8c47-43e3-92e1-6d96aeb393b7, vol_name:cephfs) < "" Nov 23 05:05:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e662a31b-8c47-43e3-92e1-6d96aeb393b7, vol_name:cephfs) < "" Nov 23 05:05:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:05:07 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:05:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 192 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 2.1 
MiB/s rd, 2.1 MiB/s wr, 53 op/s Nov 23 05:05:07 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:07.581 2 INFO neutron.agent.securitygroups_rpc [None req-5025f2b5-3c8e-4a04-9825-a17ba040ab51 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m Nov 23 05:05:07 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:07.794 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:d7:84 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5143a1d3-f63b-452c-a57f-85c07d0974c0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d5a5bf10-b94f-4270-9b8b-f5b33fff78ea) old=Port_Binding(mac=['fa:16:3e:bb:d7:84 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:05:07 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:07.796 159415 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d5a5bf10-b94f-4270-9b8b-f5b33fff78ea in datapath 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe updated#033[00m Nov 23 05:05:07 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:07.798 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port 3f155f82-1518-43d2-a279-cb0b0bdae4b7 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:05:07 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:07.799 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:05:07 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:07.800 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[30ee14c1-e704-40e1-ad09-c2e6e0b5636a]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:05:07 localhost nova_compute[280939]: 2025-11-23 10:05:07.986 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:08 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:08.551 2 INFO neutron.agent.securitygroups_rpc [None req-b418740f-9100-4b44-8438-3f54c0de85da 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['ec91c804-f6c3-4a65-9ba5-93d7c528c909']#033[00m Nov 23 05:05:08 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:08.597 2 INFO neutron.agent.securitygroups_rpc [None req-d6be62a7-1074-4184-82e5-b6c7eb9c713f 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['d1cc26af-765b-45fa-b447-8d13d7399069', 'b62406ca-1ad0-471f-83b3-a7b86cb40552', '5041c083-f562-4221-8bac-acacd7a21e13']#033[00m Nov 23 05:05:09 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:09.082 2 INFO neutron.agent.securitygroups_rpc [None req-7eb231ba-0d20-4837-8fea-3273f5df7e61 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['b62406ca-1ad0-471f-83b3-a7b86cb40552', '5041c083-f562-4221-8bac-acacd7a21e13']#033[00m Nov 23 05:05:09 localhost podman[323231]: Nov 23 05:05:09 localhost podman[323231]: 2025-11-23 10:05:09.445041557 +0000 UTC m=+0.090130261 container create 97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:09 localhost systemd[1]: Started libpod-conmon-97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be.scope. Nov 23 05:05:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 192 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 2.1 MiB/s wr, 93 op/s Nov 23 05:05:09 localhost systemd[1]: tmp-crun.HUh04d.mount: Deactivated successfully. Nov 23 05:05:09 localhost podman[323231]: 2025-11-23 10:05:09.400277657 +0000 UTC m=+0.045366391 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:05:09 localhost systemd[1]: Started libcrun container. 
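The PortBindingUpdatedEvent above dumps the whole OVN Port_Binding row, including the external_ids map that neutron maintains (neutron:cidrs, neutron:device_id, and so on). A minimal sketch of pulling the addresses back out of such a map, with the dict literal copied from the log rather than taken from a live ovsdbapp IDL row:

    # Sketch: extract the neutron-managed addresses from an OVN Port_Binding
    # external_ids map like the one printed by ovn_metadata_agent above.
    import ipaddress

    external_ids = {
        'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28',
        'neutron:device_owner': 'network:distributed',
        'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe',
    }

    for cidr in external_ids['neutron:cidrs'].split():
        iface = ipaddress.ip_interface(cidr)
        print(f'{iface.ip} on subnet {iface.network}')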
Nov 23 05:05:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ea27cb14410812157fc7dd185468850a75f9f5076da2dd9636a09864ad29ec5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:05:09 localhost podman[323231]: 2025-11-23 10:05:09.557612018 +0000 UTC m=+0.202700702 container init 97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Nov 23 05:05:09 localhost podman[323231]: 2025-11-23 10:05:09.567438061 +0000 UTC m=+0.212526745 container start 97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:05:09 localhost dnsmasq[323250]: started, version 2.85 cachesize 150 Nov 23 05:05:09 localhost dnsmasq[323250]: DNS service limited to local subnets Nov 23 05:05:09 localhost dnsmasq[323250]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:05:09 localhost dnsmasq[323250]: warning: no upstream servers configured Nov 23 05:05:09 localhost dnsmasq-dhcp[323250]: DHCP, static leases only on 10.100.0.16, lease time 1d Nov 23 05:05:09 localhost dnsmasq-dhcp[323250]: DHCP, static leases only on 10.100.0.32, lease time 1d Nov 23 05:05:09 localhost dnsmasq-dhcp[323250]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:05:09 localhost dnsmasq[323250]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/addn_hosts - 1 addresses Nov 23 05:05:09 localhost dnsmasq-dhcp[323250]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/host Nov 23 05:05:09 localhost dnsmasq-dhcp[323250]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/opts Nov 23 05:05:09 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:09.620 262301 INFO neutron.agent.dhcp.agent [None req-fe64d813-389c-4510-a145-b1a5aa33e532 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:05:05Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=54216e39-b156-483a-a7e0-1ebbdb12d489, ip_allocation=immediate, mac_address=fa:16:3e:65:53:eb, name=tempest-PortsTestJSON-1675674325, network_id=76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, 
security_groups=['5041c083-f562-4221-8bac-acacd7a21e13', 'b62406ca-1ad0-471f-83b3-a7b86cb40552'], standard_attr_id=2732, status=DOWN, tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:05:08Z on network 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe#033[00m Nov 23 05:05:09 localhost dnsmasq-dhcp[323250]: DHCPRELEASE(tapb4806d52-f1) 10.100.0.10 fa:16:3e:65:53:eb Nov 23 05:05:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:09.745 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:05:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:09.747 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:05:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:09.747 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:05:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:05:09 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/531532703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:05:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:05:09 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/531532703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:05:09 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:09.840 262301 INFO neutron.agent.dhcp.agent [None req-c0b54aab-6807-40f1-97d6-b3155a04fdbb - - - - - -] DHCP configuration for ports {'8a35e26b-9b4b-466d-abd2-63f4f475d1c8', 'b4806d52-f1ff-4cac-9a58-aa275784b4c6', 'd5a5bf10-b94f-4270-9b8b-f5b33fff78ea', '54216e39-b156-483a-a7e0-1ebbdb12d489'} is completed#033[00m Nov 23 05:05:10 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:10.051 2 INFO neutron.agent.securitygroups_rpc [None req-dc86cddf-8096-4f55-8994-fc155127f219 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['5975f3c0-fffa-4893-9c5f-a50728456ba3']#033[00m Nov 23 05:05:10 localhost dnsmasq[323250]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/addn_hosts - 1 addresses Nov 23 05:05:10 localhost podman[323269]: 2025-11-23 10:05:10.206427153 +0000 UTC m=+0.057752441 container kill 97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 23 05:05:10 localhost dnsmasq-dhcp[323250]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/host Nov 23 05:05:10 localhost dnsmasq-dhcp[323250]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/opts Nov 23 05:05:10 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:10.390 2 INFO neutron.agent.securitygroups_rpc [None req-6f895a1f-dcef-4309-9299-e6c0da113106 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['5975f3c0-fffa-4893-9c5f-a50728456ba3']#033[00m Nov 23 05:05:10 localhost systemd[1]: tmp-crun.Q2wjFj.mount: Deactivated successfully. Nov 23 05:05:10 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:10.469 262301 INFO neutron.agent.dhcp.agent [None req-db7369bb-bef5-403b-8c0d-7306bb2bd77e - - - - - -] DHCP configuration for ports {'54216e39-b156-483a-a7e0-1ebbdb12d489'} is completed#033[00m Nov 23 05:05:10 localhost dnsmasq[323250]: exiting on receipt of SIGTERM Nov 23 05:05:10 localhost podman[323308]: 2025-11-23 10:05:10.637154135 +0000 UTC m=+0.059486845 container kill 97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:05:10 localhost systemd[1]: libpod-97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be.scope: Deactivated successfully. 
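Each dnsmasq restart above re-reads the per-network files that the neutron DHCP agent writes under /var/lib/neutron/dhcp/<network_id>/ (addn_hosts, host, opts). A read-only sketch for inspecting what one network's dnsmasq instance is currently serving, with the network UUID taken from the log:

    # Sketch: print the dnsmasq config files maintained by the neutron DHCP
    # agent for one network, matching the paths read in the log above.
    from pathlib import Path

    network_id = '76e6f4ab-630a-4c73-a560-1e6a5fffbdbe'
    base = Path('/var/lib/neutron/dhcp') / network_id

    for name in ('addn_hosts', 'host', 'opts'):
        path = base / name
        print(f'--- {path} ---')
        print(path.read_text() if path.exists() else '(missing)')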
Nov 23 05:05:10 localhost podman[323321]: 2025-11-23 10:05:10.705706469 +0000 UTC m=+0.057341500 container died 97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:05:10 localhost podman[323321]: 2025-11-23 10:05:10.730799713 +0000 UTC m=+0.082434704 container cleanup 97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 05:05:10 localhost systemd[1]: libpod-conmon-97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be.scope: Deactivated successfully. Nov 23 05:05:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e662a31b-8c47-43e3-92e1-6d96aeb393b7", "format": "json"}]: dispatch Nov 23 05:05:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e662a31b-8c47-43e3-92e1-6d96aeb393b7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e662a31b-8c47-43e3-92e1-6d96aeb393b7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:10 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e662a31b-8c47-43e3-92e1-6d96aeb393b7' of type subvolume Nov 23 05:05:10 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:05:10.743+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e662a31b-8c47-43e3-92e1-6d96aeb393b7' of type subvolume Nov 23 05:05:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e662a31b-8c47-43e3-92e1-6d96aeb393b7", "force": true, "format": "json"}]: dispatch Nov 23 05:05:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e662a31b-8c47-43e3-92e1-6d96aeb393b7, vol_name:cephfs) < "" Nov 23 05:05:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e662a31b-8c47-43e3-92e1-6d96aeb393b7'' moved to trashcan Nov 23 05:05:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:05:10 localhost ceph-mgr[286671]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e662a31b-8c47-43e3-92e1-6d96aeb393b7, vol_name:cephfs) < "" Nov 23 05:05:10 localhost podman[323323]: 2025-11-23 10:05:10.782236038 +0000 UTC m=+0.125421728 container remove 97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:05:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:05:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:05:11 localhost podman[323377]: 2025-11-23 10:05:11.394236239 +0000 UTC m=+0.078787851 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 05:05:11 localhost systemd[1]: var-lib-containers-storage-overlay-6ea27cb14410812157fc7dd185468850a75f9f5076da2dd9636a09864ad29ec5-merged.mount: Deactivated successfully. Nov 23 05:05:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97755cc54ac6d21de15f5cc4bdbf530402036d839172056b4eff47365a8d89be-userdata-shm.mount: Deactivated successfully. 
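The ceph-mgr audit entries above trace a full subvolume round trip: fs subvolume create, getpath, clone status (which correctly fails with "Operation not supported" on a plain subvolume), and rm. A sketch of issuing the same commands through the ceph CLI; the command names are taken from the dispatched mgr commands in the log, while the flag spellings are assumptions about the installed ceph release.

    # Sketch: the subvolume lifecycle recorded in the ceph-mgr audit log,
    # replayed via the ceph CLI. UUID and size copied from the log; flag
    # spellings are assumptions about the installed ceph version.
    import subprocess

    def ceph(*args):
        return subprocess.run(['ceph', *args], check=True,
                              capture_output=True, text=True).stdout.strip()

    sub = 'e662a31b-8c47-43e3-92e1-6d96aeb393b7'
    ceph('fs', 'subvolume', 'create', 'cephfs', sub,
         '--size', '1073741824', '--namespace-isolated', '--mode', '0755')
    print(ceph('fs', 'subvolume', 'getpath', 'cephfs', sub))  # e.g. a path under /volumes/_nogroup/<sub>/
    ceph('fs', 'subvolume', 'rm', 'cephfs', sub, '--force')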
Nov 23 05:05:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 192 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 2.1 MiB/s wr, 93 op/s Nov 23 05:05:11 localhost nova_compute[280939]: 2025-11-23 10:05:11.531 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:11 localhost podman[323377]: 2025-11-23 10:05:11.596478875 +0000 UTC m=+0.281030537 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:11 localhost podman[323378]: 2025-11-23 10:05:11.614289264 +0000 UTC m=+0.296071990 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 05:05:11 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
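The health_status=healthy events above come from the transient systemd units that invoke /usr/bin/podman healthcheck run on each container. To read back the last recorded health state, one option is podman inspect; the JSON field has been spelled both "Healthcheck" and "Health" across podman releases, so the sketch checks both.

    # Sketch: read a container's last health status via `podman inspect`.
    # Container name taken from the log above; both historical JSON field
    # names are checked since the key moved between podman releases.
    import json
    import subprocess

    name = 'ceilometer_agent_compute'
    data = json.loads(subprocess.run(['podman', 'inspect', name], check=True,
                                     capture_output=True, text=True).stdout)[0]
    state = data.get('State', {})
    health = state.get('Health') or state.get('Healthcheck') or {}
    print(name, '->', health.get('Status', 'no healthcheck recorded'))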
Nov 23 05:05:11 localhost podman[323378]: 2025-11-23 10:05:11.620667961 +0000 UTC m=+0.302450617 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:05:11 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 05:05:11 localhost podman[323440]: Nov 23 05:05:11 localhost podman[323440]: 2025-11-23 10:05:11.780720556 +0000 UTC m=+0.092108921 container create 8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:11 localhost systemd[1]: Started libpod-conmon-8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1.scope. Nov 23 05:05:11 localhost podman[323440]: 2025-11-23 10:05:11.737549195 +0000 UTC m=+0.048937590 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:05:11 localhost systemd[1]: Started libcrun container. 
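podman_exporter above is wired to the podman API socket via CONTAINER_HOST=unix:///run/podman/podman.sock, and the podman service elsewhere in this log answers requests such as GET /v4.9.3/libpod/containers/json on that socket. A sketch of querying the same endpoint directly over the unix socket; the response field names (Names, State) are assumptions about the libpod list format on this podman version.

    # Sketch: query the libpod REST API over the unix socket that
    # podman_exporter uses (socket path and endpoint copied from this log).
    # Requires permission to read /run/podman/podman.sock.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c.get('Names', ['?'])[0], c.get('State'))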
Nov 23 05:05:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea3fee4d7bf34606106afd40021cf2030df0e8c56ecad6e41e5cb2ca4d05ae7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:05:11 localhost podman[323440]: 2025-11-23 10:05:11.854616515 +0000 UTC m=+0.166004930 container init 8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:05:11 localhost podman[323440]: 2025-11-23 10:05:11.863529919 +0000 UTC m=+0.174918294 container start 8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:11 localhost dnsmasq[323458]: started, version 2.85 cachesize 150 Nov 23 05:05:11 localhost dnsmasq[323458]: DNS service limited to local subnets Nov 23 05:05:11 localhost dnsmasq[323458]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:05:11 localhost dnsmasq[323458]: warning: no upstream servers configured Nov 23 05:05:11 localhost dnsmasq-dhcp[323458]: DHCP, static leases only on 10.100.0.16, lease time 1d Nov 23 05:05:11 localhost dnsmasq-dhcp[323458]: DHCP, static leases only on 10.100.0.32, lease time 1d Nov 23 05:05:11 localhost dnsmasq[323458]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/addn_hosts - 0 addresses Nov 23 05:05:11 localhost dnsmasq-dhcp[323458]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/host Nov 23 05:05:11 localhost dnsmasq-dhcp[323458]: read /var/lib/neutron/dhcp/76e6f4ab-630a-4c73-a560-1e6a5fffbdbe/opts Nov 23 05:05:11 localhost nova_compute[280939]: 2025-11-23 10:05:11.904 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:05:11 localhost nova_compute[280939]: 2025-11-23 10:05:11.905 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:05:12 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:12.275 2 INFO neutron.agent.securitygroups_rpc [None req-9a1758e4-5d44-475c-9640-9981332a110e 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated 
['2786fa44-4779-49f0-84bb-2a9d4bed5cef']#033[00m Nov 23 05:05:12 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:12.342 262301 INFO neutron.agent.dhcp.agent [None req-459f18a7-18cd-4e0d-b41c-222c08d37c75 - - - - - -] DHCP configuration for ports {'b4806d52-f1ff-4cac-9a58-aa275784b4c6', 'd5a5bf10-b94f-4270-9b8b-f5b33fff78ea', '8a35e26b-9b4b-466d-abd2-63f4f475d1c8'} is completed#033[00m Nov 23 05:05:12 localhost dnsmasq[323458]: exiting on receipt of SIGTERM Nov 23 05:05:12 localhost podman[323476]: 2025-11-23 10:05:12.431615667 +0000 UTC m=+0.058535246 container kill 8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0) Nov 23 05:05:12 localhost systemd[1]: libpod-8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1.scope: Deactivated successfully. Nov 23 05:05:12 localhost podman[323490]: 2025-11-23 10:05:12.509381595 +0000 UTC m=+0.062324884 container died 8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 05:05:12 localhost podman[323490]: 2025-11-23 10:05:12.543672812 +0000 UTC m=+0.096616071 container cleanup 8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 05:05:12 localhost systemd[1]: libpod-conmon-8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1.scope: Deactivated successfully. 
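The create/init/start and kill/died/cleanup/remove sequence above repeats for every short-lived neutron-dnsmasq container. A sketch for following that lifecycle from journal lines of the exact shape seen here (podman[...]: ... container <event> <id> (image=..., name=...)); it reads lines from stdin and prints the event chain per container name.

    # Sketch: track podman container lifecycle events from journal lines of
    # the format shown above. Reads log text on stdin.
    import re
    import sys
    from collections import defaultdict

    EVENT = re.compile(
        r'podman\[\d+\]: .* container '
        r'(?P<event>create|init|start|kill|died|cleanup|remove|exec_died|health_status) '
        r'(?P<cid>[0-9a-f]{64}) \(image=[^,]+, name=(?P<name>[^,)]+)'
    )

    history = defaultdict(list)
    for line in sys.stdin:
        match = EVENT.search(line)
        if match:
            history[match.group('name')].append(match.group('event'))

    for name, events in history.items():
        print(name, '->', ' -> '.join(events))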
Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e189 do_prune osdmap full prune enabled Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.584 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost podman[323492]: 2025-11-23 10:05:12.582225391 +0000 UTC m=+0.125735568 container remove 8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.586 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.586 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.587 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.587 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.587 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.588 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.588 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.588 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.589 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:05:12.589 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:05:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e190 e190: 6 total, 6 up, 6 in Nov 23 05:05:12 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e190: 6 total, 6 up, 6 in Nov 23 05:05:12 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:12.602 2 INFO neutron.agent.securitygroups_rpc [None req-52408a28-3173-4dbe-afae-862affbfdc2f 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m Nov 23 05:05:12 localhost nova_compute[280939]: 2025-11-23 10:05:12.628 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:12 localhost ovn_controller[153771]: 2025-11-23T10:05:12Z|00321|binding|INFO|Releasing lport b4806d52-f1ff-4cac-9a58-aa275784b4c6 from this chassis (sb_readonly=0) Nov 23 05:05:12 localhost ovn_controller[153771]: 2025-11-23T10:05:12Z|00322|binding|INFO|Setting lport b4806d52-f1ff-4cac-9a58-aa275784b4c6 down in Southbound Nov 23 05:05:12 localhost kernel: device tapb4806d52-f1 left promiscuous mode Nov 23 05:05:12 localhost nova_compute[280939]: 2025-11-23 10:05:12.647 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:12 localhost nova_compute[280939]: 2025-11-23 10:05:12.648 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:12 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:12.649 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28 10.100.0.35/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5143a1d3-f63b-452c-a57f-85c07d0974c0, chassis=[], tunnel_key=3, gateway_chassis=[], 
requested_chassis=[], logical_port=b4806d52-f1ff-4cac-9a58-aa275784b4c6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:05:12 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:12.651 159415 INFO neutron.agent.ovn.metadata.agent [-] Port b4806d52-f1ff-4cac-9a58-aa275784b4c6 in datapath 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe unbound from our chassis#033[00m Nov 23 05:05:12 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:12.654 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:05:12 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:12.655 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[83ed1a00-2300-410a-835e-694c97473451]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:05:12 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:12.795 2 INFO neutron.agent.securitygroups_rpc [None req-4dab38c7-a234-4065-b71c-d9440ee4c0cc 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['2786fa44-4779-49f0-84bb-2a9d4bed5cef']#033[00m Nov 23 05:05:12 localhost nova_compute[280939]: 2025-11-23 10:05:12.987 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:13 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:13.057 262301 INFO neutron.agent.dhcp.agent [None req-53cca515-33f7-48ce-8364-0f2b2846998b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:05:13 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:13.058 262301 INFO neutron.agent.dhcp.agent [None req-53cca515-33f7-48ce-8364-0f2b2846998b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:05:13 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:13.058 262301 INFO neutron.agent.dhcp.agent [None req-53cca515-33f7-48ce-8364-0f2b2846998b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:05:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:13 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:13.355 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:05:13 localhost systemd[1]: var-lib-containers-storage-overlay-1ea3fee4d7bf34606106afd40021cf2030df0e8c56ecad6e41e5cb2ca4d05ae7-merged.mount: Deactivated successfully. Nov 23 05:05:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e007e8e727b618b3e9d9b8db0508fa171e3c29c57b7d77a7c6c55af34b833f1-userdata-shm.mount: Deactivated successfully. Nov 23 05:05:13 localhost systemd[1]: run-netns-qdhcp\x2d76e6f4ab\x2d630a\x2d4c73\x2da560\x2d1e6a5fffbdbe.mount: Deactivated successfully. 
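The ceilometer_agent_compute block above emits one "Skip pollster <meter>, no resources found this cycle" line per meter whenever the node has no matching resources to poll. A sketch that summarizes those lines (read from stdin) into a per-meter count:

    # Sketch: count the "Skip pollster ..., no resources found this cycle"
    # lines logged by ceilometer.polling.manager above. Reads text on stdin.
    import re
    import sys
    from collections import Counter

    SKIP = re.compile(r'Skip pollster (?P<meter>[\w.]+), no resources found this cycle')

    skipped = Counter(m.group('meter') for m in map(SKIP.search, sys.stdin) if m)
    for meter, count in skipped.most_common():
        print(f'{meter}: skipped {count}x')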
Nov 23 05:05:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 192 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 9.7 KiB/s wr, 62 op/s Nov 23 05:05:13 localhost nova_compute[280939]: 2025-11-23 10:05:13.744 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:14 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:14.054 2 INFO neutron.agent.securitygroups_rpc [None req-01487171-b358-4961-aa9c-b003fa4396a5 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m Nov 23 05:05:14 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "f1a4d19f-7421-4f15-9ed3-2d6971368042", "format": "json"}]: dispatch Nov 23 05:05:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f1a4d19f-7421-4f15-9ed3-2d6971368042, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:14 localhost nova_compute[280939]: 2025-11-23 10:05:14.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:05:14 localhost nova_compute[280939]: 2025-11-23 10:05:14.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:05:14 localhost nova_compute[280939]: 2025-11-23 10:05:14.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:05:14 localhost nova_compute[280939]: 2025-11-23 10:05:14.150 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:05:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f1a4d19f-7421-4f15-9ed3-2d6971368042, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:14 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:14.295 2 INFO neutron.agent.securitygroups_rpc [None req-9dfbfcf8-cd5f-4ab2-ad68-710c1a723a6d 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m Nov 23 05:05:14 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:14.496 2 INFO neutron.agent.securitygroups_rpc [None req-85d9c75a-2523-4c6c-82af-38821b506d6b 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m Nov 23 05:05:14 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:05:14 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/62602639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:05:14 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:05:14 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/62602639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:05:14 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:14.740 2 INFO neutron.agent.securitygroups_rpc [None req-82a95b4d-7b43-4234-8627-52e2e708ade0 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m Nov 23 05:05:14 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:14.939 2 INFO neutron.agent.securitygroups_rpc [None req-b4af07d0-a11f-4847-a586-dfc78999259e 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m Nov 23 05:05:15 localhost nova_compute[280939]: 2025-11-23 10:05:15.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:05:15 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:15.142 2 INFO neutron.agent.securitygroups_rpc [None req-5c022b4a-ac2f-4704-91ea-0edb1d6cec16 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m Nov 23 05:05:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 192 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 17 KiB/s wr, 115 op/s Nov 23 05:05:15 localhost neutron_sriov_agent[255165]: 2025-11-23 10:05:15.775 2 INFO neutron.agent.securitygroups_rpc [None req-f781264c-3a54-454a-a5fe-8867df4ebfe6 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security 
group rule updated ['acd8c1db-c86a-40f9-91ab-30bd6f26d43e']#033[00m Nov 23 05:05:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b0d4ea08-4592-4af4-b78a-914919545708", "format": "json"}]: dispatch Nov 23 05:05:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b0d4ea08-4592-4af4-b78a-914919545708, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:16 localhost nova_compute[280939]: 2025-11-23 10:05:16.575 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:17 localhost podman[239764]: time="2025-11-23T10:05:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:05:17 localhost podman[239764]: @ - - [23/Nov/2025:10:05:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:05:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:05:17 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2336678244' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:05:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:05:17 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2336678244' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:05:17 localhost nova_compute[280939]: 2025-11-23 10:05:17.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:05:17 localhost podman[239764]: @ - - [23/Nov/2025:10:05:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18737 "" "Go-http-client/1.1" Nov 23 05:05:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 192 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 15 KiB/s wr, 104 op/s Nov 23 05:05:17 localhost nova_compute[280939]: 2025-11-23 10:05:17.990 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:18 localhost nova_compute[280939]: 2025-11-23 10:05:18.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:05:18 localhost nova_compute[280939]: 2025-11-23 10:05:18.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b0d4ea08-4592-4af4-b78a-914919545708, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b0d4ea08-4592-4af4-b78a-914919545708", "format": "json"}]: dispatch Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b0d4ea08-4592-4af4-b78a-914919545708, vol_name:cephfs) < "" Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b0d4ea08-4592-4af4-b78a-914919545708, vol_name:cephfs) < "" Nov 23 05:05:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:05:18 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:05:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "f1a4d19f-7421-4f15-9ed3-2d6971368042_6a97bbfd-a02a-43cc-a083-440db87e5f59", "force": true, "format": "json"}]: dispatch Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1a4d19f-7421-4f15-9ed3-2d6971368042_6a97bbfd-a02a-43cc-a083-440db87e5f59, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1a4d19f-7421-4f15-9ed3-2d6971368042_6a97bbfd-a02a-43cc-a083-440db87e5f59, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "f1a4d19f-7421-4f15-9ed3-2d6971368042", "force": true, "format": "json"}]: dispatch Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1a4d19f-7421-4f15-9ed3-2d6971368042, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:18 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:18.910 
262301 INFO neutron.agent.linux.ip_lib [None req-3db12e09-81c4-42a0-b198-0795b17f85da - - - - - -] Device tapc6fd1ede-ed cannot be used as it has no MAC address#033[00m Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1a4d19f-7421-4f15-9ed3-2d6971368042, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:18 localhost nova_compute[280939]: 2025-11-23 10:05:18.975 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:18 localhost kernel: device tapc6fd1ede-ed entered promiscuous mode Nov 23 05:05:18 localhost NetworkManager[5966]: [1763892318.9845] manager: (tapc6fd1ede-ed): new Generic device (/org/freedesktop/NetworkManager/Devices/54) Nov 23 05:05:18 localhost nova_compute[280939]: 2025-11-23 10:05:18.985 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:18 localhost ovn_controller[153771]: 2025-11-23T10:05:18Z|00323|binding|INFO|Claiming lport c6fd1ede-ed13-475d-9e6a-df3b3e360ba6 for this chassis. Nov 23 05:05:18 localhost ovn_controller[153771]: 2025-11-23T10:05:18Z|00324|binding|INFO|c6fd1ede-ed13-475d-9e6a-df3b3e360ba6: Claiming unknown Nov 23 05:05:18 localhost systemd-udevd[323528]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 05:05:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:18.999 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-66e5f84c-299e-4a2d-a664-b1aba75fdb49', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66e5f84c-299e-4a2d-a664-b1aba75fdb49', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a088503b43e94251822e3c0e9006a74e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84691e5c-a450-4bce-b51d-1af23fca09f7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c6fd1ede-ed13-475d-9e6a-df3b3e360ba6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:05:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:19.000 159415 INFO neutron.agent.ovn.metadata.agent [-] Port c6fd1ede-ed13-475d-9e6a-df3b3e360ba6 in datapath 66e5f84c-299e-4a2d-a664-b1aba75fdb49 bound to our chassis#033[00m Nov 23 05:05:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:19.002 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 66e5f84c-299e-4a2d-a664-b1aba75fdb49 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:05:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:19.003 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba1812e-b0b3-48f2-824c-c537d3a88306]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:05:19 localhost journal[229336]: ethtool ioctl error on tapc6fd1ede-ed: No such device Nov 23 05:05:19 localhost ovn_controller[153771]: 2025-11-23T10:05:19Z|00325|binding|INFO|Setting lport c6fd1ede-ed13-475d-9e6a-df3b3e360ba6 ovn-installed in OVS Nov 23 05:05:19 localhost ovn_controller[153771]: 2025-11-23T10:05:19Z|00326|binding|INFO|Setting lport c6fd1ede-ed13-475d-9e6a-df3b3e360ba6 up in Southbound Nov 23 05:05:19 localhost nova_compute[280939]: 2025-11-23 10:05:19.024 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:19 localhost journal[229336]: ethtool ioctl error on tapc6fd1ede-ed: No such device Nov 23 05:05:19 localhost journal[229336]: ethtool ioctl error on tapc6fd1ede-ed: No such device Nov 23 05:05:19 localhost journal[229336]: ethtool ioctl error on tapc6fd1ede-ed: No such device Nov 23 05:05:19 localhost journal[229336]: ethtool ioctl error on tapc6fd1ede-ed: No such device Nov 23 05:05:19 localhost journal[229336]: ethtool ioctl error on tapc6fd1ede-ed: No such device Nov 23 05:05:19 localhost journal[229336]: ethtool ioctl error on tapc6fd1ede-ed: No such device Nov 23 05:05:19 
localhost journal[229336]: ethtool ioctl error on tapc6fd1ede-ed: No such device Nov 23 05:05:19 localhost nova_compute[280939]: 2025-11-23 10:05:19.072 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:19 localhost nova_compute[280939]: 2025-11-23 10:05:19.102 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:19 localhost sshd[323556]: main: sshd: ssh-rsa algorithm is disabled Nov 23 05:05:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 192 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 16 KiB/s wr, 96 op/s Nov 23 05:05:19 localhost podman[323600]: Nov 23 05:05:19 localhost podman[323600]: 2025-11-23 10:05:19.989523188 +0000 UTC m=+0.092075231 container create 05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66e5f84c-299e-4a2d-a664-b1aba75fdb49, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:05:20 localhost systemd[1]: Started libpod-conmon-05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b.scope. Nov 23 05:05:20 localhost podman[323600]: 2025-11-23 10:05:19.941093724 +0000 UTC m=+0.043645827 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:05:20 localhost systemd[1]: tmp-crun.WE3D3K.mount: Deactivated successfully. Nov 23 05:05:20 localhost systemd[1]: Started libcrun container. 
Nov 23 05:05:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9669c71049b3f22855c0fd5f64f6feaa4c4c7db744996ecbc0a166501ac947/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:05:20 localhost podman[323600]: 2025-11-23 10:05:20.076469558 +0000 UTC m=+0.179021621 container init 05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66e5f84c-299e-4a2d-a664-b1aba75fdb49, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 23 05:05:20 localhost podman[323600]: 2025-11-23 10:05:20.08659708 +0000 UTC m=+0.189149143 container start 05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66e5f84c-299e-4a2d-a664-b1aba75fdb49, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:05:20 localhost dnsmasq[323618]: started, version 2.85 cachesize 150 Nov 23 05:05:20 localhost dnsmasq[323618]: DNS service limited to local subnets Nov 23 05:05:20 localhost dnsmasq[323618]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:05:20 localhost dnsmasq[323618]: warning: no upstream servers configured Nov 23 05:05:20 localhost dnsmasq-dhcp[323618]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:05:20 localhost dnsmasq[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/addn_hosts - 0 addresses Nov 23 05:05:20 localhost dnsmasq-dhcp[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/host Nov 23 05:05:20 localhost dnsmasq-dhcp[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/opts Nov 23 05:05:20 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:20.280 262301 INFO neutron.agent.dhcp.agent [None req-a31f3afa-fdcb-4fbd-b67e-72274aa9e239 - - - - - -] DHCP configuration for ports {'8700c415-cd63-4348-8db9-a370067bd923'} is completed#033[00m Nov 23 05:05:20 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b0d4ea08-4592-4af4-b78a-914919545708", "format": "json"}]: dispatch Nov 23 05:05:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b0d4ea08-4592-4af4-b78a-914919545708, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b0d4ea08-4592-4af4-b78a-914919545708, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:20 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b0d4ea08-4592-4af4-b78a-914919545708", "force": true, "format": "json"}]: dispatch Nov 23 05:05:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b0d4ea08-4592-4af4-b78a-914919545708, vol_name:cephfs) < "" Nov 23 05:05:20 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b0d4ea08-4592-4af4-b78a-914919545708'' moved to trashcan Nov 23 05:05:20 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:05:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b0d4ea08-4592-4af4-b78a-914919545708, vol_name:cephfs) < "" Nov 23 05:05:21 localhost nova_compute[280939]: 2025-11-23 10:05:21.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:05:21 localhost nova_compute[280939]: 2025-11-23 10:05:21.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:05:21 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:21.217 262301 INFO neutron.agent.linux.ip_lib [None req-56c3cd4f-7a68-4d8c-a5b9-f7ce70dd78b2 - - - - - -] Device tap1a6117d1-37 cannot be used as it has no MAC address#033[00m Nov 23 05:05:21 localhost nova_compute[280939]: 2025-11-23 10:05:21.268 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:21 localhost kernel: device tap1a6117d1-37 entered promiscuous mode Nov 23 05:05:21 localhost NetworkManager[5966]: [1763892321.2761] manager: (tap1a6117d1-37): new Generic device (/org/freedesktop/NetworkManager/Devices/55) Nov 23 05:05:21 localhost systemd-udevd[323530]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:05:21 localhost nova_compute[280939]: 2025-11-23 10:05:21.276 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:21 localhost ovn_controller[153771]: 2025-11-23T10:05:21Z|00327|binding|INFO|Claiming lport 1a6117d1-37b5-4c87-b5e3-7e86f6f14451 for this chassis. 
Nov 23 05:05:21 localhost ovn_controller[153771]: 2025-11-23T10:05:21Z|00328|binding|INFO|1a6117d1-37b5-4c87-b5e3-7e86f6f14451: Claiming unknown Nov 23 05:05:21 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:21.287 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-00211e6c-d447-4450-a8d4-10ece4e8680a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00211e6c-d447-4450-a8d4-10ece4e8680a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a088503b43e94251822e3c0e9006a74e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=98fab572-9ec9-45cd-8919-091a4e09858f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1a6117d1-37b5-4c87-b5e3-7e86f6f14451) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:05:21 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:21.288 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 1a6117d1-37b5-4c87-b5e3-7e86f6f14451 in datapath 00211e6c-d447-4450-a8d4-10ece4e8680a bound to our chassis#033[00m Nov 23 05:05:21 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:21.290 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 00211e6c-d447-4450-a8d4-10ece4e8680a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:05:21 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:21.290 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[92a2b878-629c-451a-ac3e-bfeb38b91fe9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:05:21 localhost journal[229336]: ethtool ioctl error on tap1a6117d1-37: No such device Nov 23 05:05:21 localhost journal[229336]: ethtool ioctl error on tap1a6117d1-37: No such device Nov 23 05:05:21 localhost ovn_controller[153771]: 2025-11-23T10:05:21Z|00329|binding|INFO|Setting lport 1a6117d1-37b5-4c87-b5e3-7e86f6f14451 ovn-installed in OVS Nov 23 05:05:21 localhost ovn_controller[153771]: 2025-11-23T10:05:21Z|00330|binding|INFO|Setting lport 1a6117d1-37b5-4c87-b5e3-7e86f6f14451 up in Southbound Nov 23 05:05:21 localhost nova_compute[280939]: 2025-11-23 10:05:21.314 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:21 localhost journal[229336]: ethtool ioctl error on tap1a6117d1-37: No such device Nov 23 05:05:21 localhost journal[229336]: ethtool ioctl error on tap1a6117d1-37: No such device Nov 23 05:05:21 localhost journal[229336]: ethtool ioctl error on tap1a6117d1-37: No such device Nov 23 05:05:21 localhost journal[229336]: ethtool ioctl error on 
tap1a6117d1-37: No such device Nov 23 05:05:21 localhost journal[229336]: ethtool ioctl error on tap1a6117d1-37: No such device Nov 23 05:05:21 localhost journal[229336]: ethtool ioctl error on tap1a6117d1-37: No such device Nov 23 05:05:21 localhost nova_compute[280939]: 2025-11-23 10:05:21.351 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:21 localhost nova_compute[280939]: 2025-11-23 10:05:21.378 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 192 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 16 KiB/s wr, 96 op/s Nov 23 05:05:21 localhost nova_compute[280939]: 2025-11-23 10:05:21.576 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:22 localhost podman[323698]: Nov 23 05:05:22 localhost podman[323698]: 2025-11-23 10:05:22.177977866 +0000 UTC m=+0.092368790 container create f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00211e6c-d447-4450-a8d4-10ece4e8680a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:05:22 localhost systemd[1]: Started libpod-conmon-f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7.scope. Nov 23 05:05:22 localhost systemd[1]: tmp-crun.CHg8yb.mount: Deactivated successfully. Nov 23 05:05:22 localhost podman[323698]: 2025-11-23 10:05:22.133226846 +0000 UTC m=+0.047617810 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:05:22 localhost systemd[1]: Started libcrun container. 
Nov 23 05:05:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cecd178ed8e6757cee9786bedacde0ed32ba535db4f62d79f55d5562a9512c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:05:22 localhost podman[323698]: 2025-11-23 10:05:22.24819 +0000 UTC m=+0.162580934 container init f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00211e6c-d447-4450-a8d4-10ece4e8680a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:05:22 localhost podman[323698]: 2025-11-23 10:05:22.261890923 +0000 UTC m=+0.176281847 container start f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00211e6c-d447-4450-a8d4-10ece4e8680a, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:22 localhost dnsmasq[323716]: started, version 2.85 cachesize 150 Nov 23 05:05:22 localhost dnsmasq[323716]: DNS service limited to local subnets Nov 23 05:05:22 localhost dnsmasq[323716]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:05:22 localhost dnsmasq[323716]: warning: no upstream servers configured Nov 23 05:05:22 localhost dnsmasq-dhcp[323716]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 23 05:05:22 localhost dnsmasq[323716]: read /var/lib/neutron/dhcp/00211e6c-d447-4450-a8d4-10ece4e8680a/addn_hosts - 0 addresses Nov 23 05:05:22 localhost dnsmasq-dhcp[323716]: read /var/lib/neutron/dhcp/00211e6c-d447-4450-a8d4-10ece4e8680a/host Nov 23 05:05:22 localhost dnsmasq-dhcp[323716]: read /var/lib/neutron/dhcp/00211e6c-d447-4450-a8d4-10ece4e8680a/opts Nov 23 05:05:22 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:22.476 262301 INFO neutron.agent.dhcp.agent [None req-eb18e427-88eb-45c9-893e-216ae5ad6a1d - - - - - -] DHCP configuration for ports {'ffbdbc49-d8df-48b5-90b7-9020906cc639'} is completed#033[00m Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.029 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 05:05:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:05:23 localhost podman[323717]: 2025-11-23 10:05:23.148049246 +0000 UTC m=+0.082785713 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent) Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.154 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.154 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.155 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m 
Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.155 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.155 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:05:23 localhost systemd[1]: tmp-crun.4lXaug.mount: Deactivated successfully. Nov 23 05:05:23 localhost podman[323717]: 2025-11-23 10:05:23.187461422 +0000 UTC m=+0.122197859 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:05:23 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:05:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:05:23 Nov 23 05:05:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:05:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:05:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['.mgr', 'manila_data', 'manila_metadata', 'vms', 'volumes', 'backups', 'images'] Nov 23 05:05:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:05:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 192 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 15 KiB/s wr, 88 op/s Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014848366543364973 of space, bias 1.0, pg target 0.29647238531585396 quantized to 32 (current 32) Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0003255208333333333 quantized to 32 (current 32) Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:05:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 5.888819095477387e-05 of space, bias 4.0, pg target 0.046875 quantized to 16 (current 16) Nov 23 05:05:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:05:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:05:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:05:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:05:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:05:23 localhost 
ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:05:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:05:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:05:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:05:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:05:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:05:23 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2942325306' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:05:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "89bb6ad8-e75c-4c65-a495-a2d153e513e0", "format": "json"}]: dispatch Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:89bb6ad8-e75c-4c65-a495-a2d153e513e0, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.597 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:89bb6ad8-e75c-4c65-a495-a2d153e513e0, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7e0a27d8-81b8-442f-a3db-fa38a09d28ae", "snap_name": "5aacd1b6-f8f5-4003-8630-0121025e58d0_0a0bb169-42ba-4ed1-8452-a7671d059061", "force": true, "format": "json"}]: dispatch Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5aacd1b6-f8f5-4003-8630-0121025e58d0_0a0bb169-42ba-4ed1-8452-a7671d059061, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta.tmp' Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta.tmp' to config b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta' Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5aacd1b6-f8f5-4003-8630-0121025e58d0_0a0bb169-42ba-4ed1-8452-a7671d059061, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, 
vol_name:cephfs) < "" Nov 23 05:05:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7e0a27d8-81b8-442f-a3db-fa38a09d28ae", "snap_name": "5aacd1b6-f8f5-4003-8630-0121025e58d0", "force": true, "format": "json"}]: dispatch Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5aacd1b6-f8f5-4003-8630-0121025e58d0, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta.tmp' Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta.tmp' to config b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae/.meta' Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.821 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.823 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11465MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.823 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.824 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:05:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5aacd1b6-f8f5-4003-8630-0121025e58d0, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.954 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.955 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:05:23 localhost nova_compute[280939]: 2025-11-23 10:05:23.988 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:05:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e190 do_prune osdmap full prune enabled Nov 23 05:05:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e191 e191: 6 total, 6 up, 6 in Nov 23 05:05:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e191: 6 total, 6 up, 6 in Nov 23 05:05:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:05:24 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2092633703' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:05:24 localhost nova_compute[280939]: 2025-11-23 10:05:24.480 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:05:24 localhost nova_compute[280939]: 2025-11-23 10:05:24.487 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:05:24 localhost nova_compute[280939]: 2025-11-23 10:05:24.505 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:05:24 localhost nova_compute[280939]: 2025-11-23 10:05:24.507 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:05:24 localhost nova_compute[280939]: 2025-11-23 10:05:24.507 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:05:25 localhost ceph-mgr[286671]: [devicehealth INFO root] Check health Nov 23 05:05:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 4 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 169 active+clean; 193 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 22 KiB/s wr, 77 op/s Nov 23 05:05:25 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:25.870 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:05:25Z, description=, device_id=292abd12-0f11-4ce1-b8f1-44a94fd7bb57, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4d48c3d5-b51e-4e58-959e-1007400421a3, ip_allocation=immediate, mac_address=fa:16:3e:31:cf:b1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:05:17Z, description=, dns_domain=, id=66e5f84c-299e-4a2d-a664-b1aba75fdb49, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-RoutersNegativeIpV6Test-test-network-1432521315, port_security_enabled=True, project_id=a088503b43e94251822e3c0e9006a74e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3567, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2794, status=ACTIVE, subnets=['37e30a20-f0b4-4d91-afa3-64139517e8f0'], tags=[], tenant_id=a088503b43e94251822e3c0e9006a74e, updated_at=2025-11-23T10:05:18Z, vlan_transparent=None, network_id=66e5f84c-299e-4a2d-a664-b1aba75fdb49, port_security_enabled=False, project_id=a088503b43e94251822e3c0e9006a74e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2855, status=DOWN, tags=[], tenant_id=a088503b43e94251822e3c0e9006a74e, updated_at=2025-11-23T10:05:25Z on network 66e5f84c-299e-4a2d-a664-b1aba75fdb49#033[00m Nov 23 05:05:26 localhost dnsmasq[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/addn_hosts - 1 addresses Nov 23 05:05:26 localhost podman[323796]: 2025-11-23 10:05:26.06727121 +0000 UTC m=+0.061385144 container kill 05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66e5f84c-299e-4a2d-a664-b1aba75fdb49, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 23 05:05:26 localhost dnsmasq-dhcp[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/host Nov 23 05:05:26 localhost dnsmasq-dhcp[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/opts Nov 23 05:05:26 localhost systemd[1]: tmp-crun.4ditUI.mount: Deactivated successfully. Nov 23 05:05:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e191 do_prune osdmap full prune enabled Nov 23 05:05:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e192 e192: 6 total, 6 up, 6 in Nov 23 05:05:26 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. 
Nov 23 05:05:26 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e192: 6 total, 6 up, 6 in Nov 23 05:05:26 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:26.350 262301 INFO neutron.agent.dhcp.agent [None req-7178f7c2-6c48-4005-8268-62a964984bde - - - - - -] DHCP configuration for ports {'4d48c3d5-b51e-4e58-959e-1007400421a3'} is completed#033[00m Nov 23 05:05:26 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:26.434 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:05:25Z, description=, device_id=292abd12-0f11-4ce1-b8f1-44a94fd7bb57, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4d48c3d5-b51e-4e58-959e-1007400421a3, ip_allocation=immediate, mac_address=fa:16:3e:31:cf:b1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:05:17Z, description=, dns_domain=, id=66e5f84c-299e-4a2d-a664-b1aba75fdb49, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeIpV6Test-test-network-1432521315, port_security_enabled=True, project_id=a088503b43e94251822e3c0e9006a74e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3567, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2794, status=ACTIVE, subnets=['37e30a20-f0b4-4d91-afa3-64139517e8f0'], tags=[], tenant_id=a088503b43e94251822e3c0e9006a74e, updated_at=2025-11-23T10:05:18Z, vlan_transparent=None, network_id=66e5f84c-299e-4a2d-a664-b1aba75fdb49, port_security_enabled=False, project_id=a088503b43e94251822e3c0e9006a74e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2855, status=DOWN, tags=[], tenant_id=a088503b43e94251822e3c0e9006a74e, updated_at=2025-11-23T10:05:25Z on network 66e5f84c-299e-4a2d-a664-b1aba75fdb49#033[00m Nov 23 05:05:26 localhost nova_compute[280939]: 2025-11-23 10:05:26.503 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:05:26 localhost nova_compute[280939]: 2025-11-23 10:05:26.579 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:26 localhost dnsmasq[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/addn_hosts - 1 addresses Nov 23 05:05:26 localhost dnsmasq-dhcp[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/host Nov 23 05:05:26 localhost dnsmasq-dhcp[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/opts Nov 23 05:05:26 localhost podman[323836]: 2025-11-23 10:05:26.628120033 +0000 UTC m=+0.057239645 container kill 05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66e5f84c-299e-4a2d-a664-b1aba75fdb49, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:05:26 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:26.933 262301 INFO neutron.agent.dhcp.agent [None req-d31c1e70-d497-4255-b407-e64b23c19a67 - - - - - -] DHCP configuration for ports {'4d48c3d5-b51e-4e58-959e-1007400421a3'} is completed#033[00m Nov 23 05:05:26 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7e0a27d8-81b8-442f-a3db-fa38a09d28ae", "format": "json"}]: dispatch Nov 23 05:05:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:26 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7e0a27d8-81b8-442f-a3db-fa38a09d28ae' of type subvolume Nov 23 05:05:26 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:05:26.970+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7e0a27d8-81b8-442f-a3db-fa38a09d28ae' of type subvolume Nov 23 05:05:26 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7e0a27d8-81b8-442f-a3db-fa38a09d28ae", "force": true, "format": "json"}]: dispatch Nov 23 05:05:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:05:26 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7e0a27d8-81b8-442f-a3db-fa38a09d28ae'' moved to trashcan Nov 23 05:05:26 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:05:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7e0a27d8-81b8-442f-a3db-fa38a09d28ae, vol_name:cephfs) < "" Nov 23 05:05:27 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "b6b84a30-76d0-4c7a-ba18-7c9a32f848c3", "format": "json"}]: dispatch Nov 23 05:05:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b6b84a30-76d0-4c7a-ba18-7c9a32f848c3, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b6b84a30-76d0-4c7a-ba18-7c9a32f848c3, 
sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 4 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 169 active+clean; 193 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 17 KiB/s wr, 44 op/s Nov 23 05:05:27 localhost dnsmasq[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/addn_hosts - 0 addresses Nov 23 05:05:27 localhost dnsmasq-dhcp[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/host Nov 23 05:05:27 localhost dnsmasq-dhcp[323618]: read /var/lib/neutron/dhcp/66e5f84c-299e-4a2d-a664-b1aba75fdb49/opts Nov 23 05:05:27 localhost podman[323875]: 2025-11-23 10:05:27.571887974 +0000 UTC m=+0.074724205 container kill 05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66e5f84c-299e-4a2d-a664-b1aba75fdb49, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:05:27 localhost nova_compute[280939]: 2025-11-23 10:05:27.796 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:27 localhost kernel: device tapc6fd1ede-ed left promiscuous mode Nov 23 05:05:27 localhost ovn_controller[153771]: 2025-11-23T10:05:27Z|00331|binding|INFO|Releasing lport c6fd1ede-ed13-475d-9e6a-df3b3e360ba6 from this chassis (sb_readonly=0) Nov 23 05:05:27 localhost ovn_controller[153771]: 2025-11-23T10:05:27Z|00332|binding|INFO|Setting lport c6fd1ede-ed13-475d-9e6a-df3b3e360ba6 down in Southbound Nov 23 05:05:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:27.813 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-66e5f84c-299e-4a2d-a664-b1aba75fdb49', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66e5f84c-299e-4a2d-a664-b1aba75fdb49', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a088503b43e94251822e3c0e9006a74e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84691e5c-a450-4bce-b51d-1af23fca09f7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c6fd1ede-ed13-475d-9e6a-df3b3e360ba6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:05:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:27.815 159415 
INFO neutron.agent.ovn.metadata.agent [-] Port c6fd1ede-ed13-475d-9e6a-df3b3e360ba6 in datapath 66e5f84c-299e-4a2d-a664-b1aba75fdb49 unbound from our chassis#033[00m Nov 23 05:05:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:27.817 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 66e5f84c-299e-4a2d-a664-b1aba75fdb49 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:05:27 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:27.818 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[849c8750-aeff-4328-bca8-2c2e590c2d00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:05:27 localhost nova_compute[280939]: 2025-11-23 10:05:27.819 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:28 localhost nova_compute[280939]: 2025-11-23 10:05:28.028 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e192 do_prune osdmap full prune enabled Nov 23 05:05:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e193 e193: 6 total, 6 up, 6 in Nov 23 05:05:28 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e193: 6 total, 6 up, 6 in Nov 23 05:05:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e193 do_prune osdmap full prune enabled Nov 23 05:05:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e194 e194: 6 total, 6 up, 6 in Nov 23 05:05:29 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e194: 6 total, 6 up, 6 in Nov 23 05:05:29 localhost systemd[1]: tmp-crun.JcZY73.mount: Deactivated successfully. Nov 23 05:05:29 localhost dnsmasq[323716]: exiting on receipt of SIGTERM Nov 23 05:05:29 localhost podman[323915]: 2025-11-23 10:05:29.172439277 +0000 UTC m=+0.075380846 container kill f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00211e6c-d447-4450-a8d4-10ece4e8680a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 23 05:05:29 localhost systemd[1]: libpod-f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7.scope: Deactivated successfully. 
Nov 23 05:05:29 localhost podman[323927]: 2025-11-23 10:05:29.24227383 +0000 UTC m=+0.057393461 container died f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00211e6c-d447-4450-a8d4-10ece4e8680a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:05:29 localhost podman[323927]: 2025-11-23 10:05:29.274106751 +0000 UTC m=+0.089226332 container cleanup f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00211e6c-d447-4450-a8d4-10ece4e8680a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 05:05:29 localhost systemd[1]: libpod-conmon-f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7.scope: Deactivated successfully. Nov 23 05:05:29 localhost podman[323934]: 2025-11-23 10:05:29.299954608 +0000 UTC m=+0.097814667 container remove f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00211e6c-d447-4450-a8d4-10ece4e8680a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2) Nov 23 05:05:29 localhost nova_compute[280939]: 2025-11-23 10:05:29.353 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:29 localhost ovn_controller[153771]: 2025-11-23T10:05:29Z|00333|binding|INFO|Releasing lport 1a6117d1-37b5-4c87-b5e3-7e86f6f14451 from this chassis (sb_readonly=0) Nov 23 05:05:29 localhost ovn_controller[153771]: 2025-11-23T10:05:29Z|00334|binding|INFO|Setting lport 1a6117d1-37b5-4c87-b5e3-7e86f6f14451 down in Southbound Nov 23 05:05:29 localhost kernel: device tap1a6117d1-37 left promiscuous mode Nov 23 05:05:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:29.366 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-00211e6c-d447-4450-a8d4-10ece4e8680a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00211e6c-d447-4450-a8d4-10ece4e8680a', 
'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a088503b43e94251822e3c0e9006a74e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=98fab572-9ec9-45cd-8919-091a4e09858f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1a6117d1-37b5-4c87-b5e3-7e86f6f14451) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:05:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:29.369 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 1a6117d1-37b5-4c87-b5e3-7e86f6f14451 in datapath 00211e6c-d447-4450-a8d4-10ece4e8680a unbound from our chassis#033[00m Nov 23 05:05:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:29.370 159415 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 00211e6c-d447-4450-a8d4-10ece4e8680a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 23 05:05:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:29.371 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[04f8bd01-c592-4c70-8042-1bde8b831287]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:05:29 localhost nova_compute[280939]: 2025-11-23 10:05:29.374 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 193 MiB data, 985 MiB used, 41 GiB / 42 GiB avail; 130 KiB/s rd, 51 KiB/s wr, 187 op/s Nov 23 05:05:29 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:29.554 262301 INFO neutron.agent.dhcp.agent [None req-588d4644-3989-48f4-9ae5-d0752e8491d4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:05:29 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:29.554 262301 INFO neutron.agent.dhcp.agent [None req-588d4644-3989-48f4-9ae5-d0752e8491d4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:05:29 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:29.678 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:05:29 localhost nova_compute[280939]: 2025-11-23 10:05:29.845 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e194 do_prune osdmap full prune enabled Nov 23 05:05:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e195 e195: 6 total, 6 up, 6 in Nov 23 05:05:30 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e195: 6 total, 6 up, 6 in Nov 23 05:05:30 localhost systemd[1]: var-lib-containers-storage-overlay-7cecd178ed8e6757cee9786bedacde0ed32ba535db4f62d79f55d5562a9512c4-merged.mount: Deactivated successfully. 
Nov 23 05:05:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9f5b009013a43ef48955d6e57bd78374dd36605078ebc59cc16eec653be62c7-userdata-shm.mount: Deactivated successfully. Nov 23 05:05:30 localhost systemd[1]: run-netns-qdhcp\x2d00211e6c\x2dd447\x2d4450\x2da8d4\x2d10ece4e8680a.mount: Deactivated successfully. Nov 23 05:05:30 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "faed27e0-c4c3-4299-836c-04ce6fc7a80d", "format": "json"}]: dispatch Nov 23 05:05:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:faed27e0-c4c3-4299-836c-04ce6fc7a80d, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:faed27e0-c4c3-4299-836c-04ce6fc7a80d, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:05:30 localhost systemd[1]: tmp-crun.xRGKHD.mount: Deactivated successfully. Nov 23 05:05:30 localhost podman[323957]: 2025-11-23 10:05:30.904355569 +0000 UTC m=+0.091664018 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 05:05:30 localhost podman[323959]: 2025-11-23 10:05:30.942549427 +0000 UTC m=+0.128048460 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Nov 23 05:05:31 localhost podman[323958]: 2025-11-23 10:05:31.015481966 +0000 UTC m=+0.203052963 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 05:05:31 localhost podman[323957]: 2025-11-23 10:05:31.020397367 +0000 UTC m=+0.207705746 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=multipathd) Nov 23 05:05:31 localhost podman[323958]: 2025-11-23 10:05:31.028241469 +0000 UTC m=+0.215812516 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:05:31 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:05:31 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 05:05:31 localhost podman[323959]: 2025-11-23 10:05:31.048490423 +0000 UTC m=+0.233989446 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:05:31 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:05:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 193 MiB data, 985 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 25 KiB/s wr, 119 op/s Nov 23 05:05:31 localhost nova_compute[280939]: 2025-11-23 10:05:31.580 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:31 localhost dnsmasq[323618]: exiting on receipt of SIGTERM Nov 23 05:05:31 localhost podman[324042]: 2025-11-23 10:05:31.586324676 +0000 UTC m=+0.052905822 container kill 05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66e5f84c-299e-4a2d-a664-b1aba75fdb49, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:05:31 localhost systemd[1]: libpod-05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b.scope: Deactivated successfully. 
Nov 23 05:05:31 localhost podman[324056]: 2025-11-23 10:05:31.647464422 +0000 UTC m=+0.050292202 container died 05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66e5f84c-299e-4a2d-a664-b1aba75fdb49, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:05:31 localhost podman[324056]: 2025-11-23 10:05:31.676790777 +0000 UTC m=+0.079618537 container cleanup 05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66e5f84c-299e-4a2d-a664-b1aba75fdb49, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118) Nov 23 05:05:31 localhost systemd[1]: libpod-conmon-05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b.scope: Deactivated successfully. Nov 23 05:05:31 localhost podman[324058]: 2025-11-23 10:05:31.699797056 +0000 UTC m=+0.092849934 container remove 05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66e5f84c-299e-4a2d-a664-b1aba75fdb49, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 05:05:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e195 do_prune osdmap full prune enabled Nov 23 05:05:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:32.151 262301 INFO neutron.agent.dhcp.agent [None req-b861b22b-f55f-4aad-8732-596e4c0633b1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:05:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:32.152 262301 INFO neutron.agent.dhcp.agent [None req-b861b22b-f55f-4aad-8732-596e4c0633b1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:05:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e196 e196: 6 total, 6 up, 6 in Nov 23 05:05:32 localhost systemd[1]: var-lib-containers-storage-overlay-de9669c71049b3f22855c0fd5f64f6feaa4c4c7db744996ecbc0a166501ac947-merged.mount: Deactivated successfully. Nov 23 05:05:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05d33d6c7124feb87b4a1b343f89097b67fbff76da5d006b0e91030aa346de2b-userdata-shm.mount: Deactivated successfully. Nov 23 05:05:32 localhost systemd[1]: run-netns-qdhcp\x2d66e5f84c\x2d299e\x2d4a2d\x2da664\x2db1aba75fdb49.mount: Deactivated successfully. 
Nov 23 05:05:32 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e196: 6 total, 6 up, 6 in Nov 23 05:05:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:05:32.225 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:05:32 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f9268685-6943-4f1d-b505-be8ba848d173", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:05:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:05:33 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1626370391' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:05:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:05:33 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1626370391' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:05:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f9268685-6943-4f1d-b505-be8ba848d173/.meta.tmp' Nov 23 05:05:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9268685-6943-4f1d-b505-be8ba848d173/.meta.tmp' to config b'/volumes/_nogroup/f9268685-6943-4f1d-b505-be8ba848d173/.meta' Nov 23 05:05:33 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:33 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f9268685-6943-4f1d-b505-be8ba848d173", "format": "json"}]: dispatch Nov 23 05:05:33 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:33 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:33 localhost nova_compute[280939]: 2025-11-23 10:05:33.030 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:05:33 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' 
entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:05:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e196 do_prune osdmap full prune enabled Nov 23 05:05:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e197 e197: 6 total, 6 up, 6 in Nov 23 05:05:33 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e197: 6 total, 6 up, 6 in Nov 23 05:05:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 193 MiB data, 985 MiB used, 41 GiB / 42 GiB avail; 100 KiB/s rd, 30 KiB/s wr, 140 op/s Nov 23 05:05:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e197 do_prune osdmap full prune enabled Nov 23 05:05:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e198 e198: 6 total, 6 up, 6 in Nov 23 05:05:34 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e198: 6 total, 6 up, 6 in Nov 23 05:05:34 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "bdd7b00d-cad7-4915-8d2a-92e849730947", "format": "json"}]: dispatch Nov 23 05:05:34 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bdd7b00d-cad7-4915-8d2a-92e849730947, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:34 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bdd7b00d-cad7-4915-8d2a-92e849730947, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e198 do_prune osdmap full prune enabled Nov 23 05:05:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e199 e199: 6 total, 6 up, 6 in Nov 23 05:05:35 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e199: 6 total, 6 up, 6 in Nov 23 05:05:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 193 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 23 KiB/s wr, 114 op/s Nov 23 05:05:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f9268685-6943-4f1d-b505-be8ba848d173", "snap_name": "d4feed24-fd18-43f4-8007-428f7bbb28da", "format": "json"}]: dispatch Nov 23 05:05:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d4feed24-fd18-43f4-8007-428f7bbb28da, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d4feed24-fd18-43f4-8007-428f7bbb28da, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:36 localhost nova_compute[280939]: 2025-11-23 10:05:36.614 280943 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:36 localhost openstack_network_exporter[241732]: ERROR 10:05:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:05:36 localhost openstack_network_exporter[241732]: ERROR 10:05:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:05:36 localhost openstack_network_exporter[241732]: ERROR 10:05:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:05:36 localhost openstack_network_exporter[241732]: ERROR 10:05:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:05:36 localhost openstack_network_exporter[241732]: Nov 23 05:05:36 localhost openstack_network_exporter[241732]: ERROR 10:05:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:05:36 localhost openstack_network_exporter[241732]: Nov 23 05:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:05:36 localhost podman[324084]: 2025-11-23 10:05:36.896574554 +0000 UTC m=+0.084424074 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, vcs-type=git, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Nov 23 05:05:36 localhost podman[324084]: 2025-11-23 10:05:36.912421073 +0000 UTC m=+0.100270603 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.) Nov 23 05:05:36 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 05:05:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e199 do_prune osdmap full prune enabled Nov 23 05:05:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e200 e200: 6 total, 6 up, 6 in Nov 23 05:05:37 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e200: 6 total, 6 up, 6 in Nov 23 05:05:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 193 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 21 KiB/s wr, 103 op/s Nov 23 05:05:38 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "48839c19-f8e5-4cca-b5e6-993e217ef0c0", "format": "json"}]: dispatch Nov 23 05:05:38 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:48839c19-f8e5-4cca-b5e6-993e217ef0c0, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:38 localhost nova_compute[280939]: 2025-11-23 10:05:38.080 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e200 do_prune osdmap full prune enabled Nov 23 05:05:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e201 e201: 6 total, 6 up, 6 in Nov 23 05:05:38 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e201: 6 total, 6 up, 6 in Nov 23 05:05:38 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:48839c19-f8e5-4cca-b5e6-993e217ef0c0, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e201 do_prune osdmap full prune enabled Nov 23 05:05:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e202 e202: 6 total, 6 up, 6 in Nov 23 05:05:39 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e202: 6 total, 6 up, 6 in Nov 23 05:05:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 4 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 170 active+clean; 193 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 26 KiB/s wr, 88 op/s Nov 23 05:05:39 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f9268685-6943-4f1d-b505-be8ba848d173", "snap_name": "d4feed24-fd18-43f4-8007-428f7bbb28da_9b2a4eed-44cc-4e60-96ac-820b9b4186d6", "force": true, "format": "json"}]: dispatch Nov 23 05:05:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d4feed24-fd18-43f4-8007-428f7bbb28da_9b2a4eed-44cc-4e60-96ac-820b9b4186d6, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:39 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] 
wrote 155 bytes to config b'/volumes/_nogroup/f9268685-6943-4f1d-b505-be8ba848d173/.meta.tmp' Nov 23 05:05:39 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9268685-6943-4f1d-b505-be8ba848d173/.meta.tmp' to config b'/volumes/_nogroup/f9268685-6943-4f1d-b505-be8ba848d173/.meta' Nov 23 05:05:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d4feed24-fd18-43f4-8007-428f7bbb28da_9b2a4eed-44cc-4e60-96ac-820b9b4186d6, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:39 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f9268685-6943-4f1d-b505-be8ba848d173", "snap_name": "d4feed24-fd18-43f4-8007-428f7bbb28da", "force": true, "format": "json"}]: dispatch Nov 23 05:05:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d4feed24-fd18-43f4-8007-428f7bbb28da, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:39 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f9268685-6943-4f1d-b505-be8ba848d173/.meta.tmp' Nov 23 05:05:39 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9268685-6943-4f1d-b505-be8ba848d173/.meta.tmp' to config b'/volumes/_nogroup/f9268685-6943-4f1d-b505-be8ba848d173/.meta' Nov 23 05:05:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d4feed24-fd18-43f4-8007-428f7bbb28da, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:40 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:05:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/.meta.tmp' Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/.meta.tmp' to config b'/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/.meta' Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:41 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": 
"cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "format": "json"}]: dispatch Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:05:41 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:05:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e202 do_prune osdmap full prune enabled Nov 23 05:05:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e203 e203: 6 total, 6 up, 6 in Nov 23 05:05:41 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e203: 6 total, 6 up, 6 in Nov 23 05:05:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 4 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 170 active+clean; 193 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 26 KiB/s wr, 88 op/s Nov 23 05:05:41 localhost nova_compute[280939]: 2025-11-23 10:05:41.652 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:41 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "48839c19-f8e5-4cca-b5e6-993e217ef0c0_6b0cc57c-2629-485f-bd4c-419d37ac81ed", "force": true, "format": "json"}]: dispatch Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:48839c19-f8e5-4cca-b5e6-993e217ef0c0_6b0cc57c-2629-485f-bd4c-419d37ac81ed, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:48839c19-f8e5-4cca-b5e6-993e217ef0c0_6b0cc57c-2629-485f-bd4c-419d37ac81ed, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:41 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "48839c19-f8e5-4cca-b5e6-993e217ef0c0", "force": true, "format": "json"}]: dispatch Nov 23 05:05:41 localhost ceph-mgr[286671]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:48839c19-f8e5-4cca-b5e6-993e217ef0c0, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:05:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:05:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:48839c19-f8e5-4cca-b5e6-993e217ef0c0, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:41 localhost podman[324105]: 2025-11-23 10:05:41.89803142 +0000 UTC m=+0.079740260 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:05:41 localhost podman[324105]: 2025-11-23 10:05:41.904291632 +0000 UTC m=+0.086000443 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:05:41 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
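The audit entries above show the Manila CephFS driver (client.openstack) exercising the ceph-mgr `volumes` module: each `fs subvolume create` / `getpath` / `snapshot rm` command arrives as a JSON mon/mgr command and is answered by the module's `_cmd_fs_subvolume_*` handlers. As a point of reference, the same operations can be issued by hand; the sketch below assumes the `ceph` CLI and a suitable client keyring are present on the host, reuses one subvolume name from the log purely as a placeholder, and mirrors the flags to the JSON arguments recorded above ("size", "namespace_isolated", "mode", "force").

```python
# Sketch only: replay the subvolume lifecycle seen in the audit log via the ceph CLI.
# Assumes the `ceph` binary and a keyring with fs-volume privileges are available.
import subprocess

VOL = "cephfs"
SUB = "23db4718-314d-42ce-b54b-c4702b2fa362"   # subvolume name taken from the log, placeholder here

def ceph(*args: str) -> str:
    """Run one ceph CLI command and return its raw stdout."""
    res = subprocess.run(["ceph", *args], check=True, capture_output=True, text=True)
    return res.stdout.strip()

# fs subvolume create: 1 GiB quota, isolated RADOS namespace, mode 0755 (as dispatched above)
ceph("fs", "subvolume", "create", VOL, SUB,
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755")

# fs subvolume getpath: returns the /volumes/_nogroup/<sub>/<uuid> path the driver exports
print(ceph("fs", "subvolume", "getpath", VOL, SUB))

# fs subvolume snapshot rm / subvolume rm: --force mirrors the "force": true field in the log;
# "some-snap-name" is a hypothetical snapshot name for illustration.
ceph("fs", "subvolume", "snapshot", "rm", VOL, SUB, "some-snap-name", "--force")
ceph("fs", "subvolume", "rm", VOL, SUB, "--force")
```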
Nov 23 05:05:41 localhost podman[324104]: 2025-11-23 10:05:41.95122947 +0000 UTC m=+0.135581931 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:41 localhost podman[324104]: 2025-11-23 10:05:41.962327112 +0000 UTC m=+0.146679543 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:05:41 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:05:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e203 do_prune osdmap full prune enabled Nov 23 05:05:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e204 e204: 6 total, 6 up, 6 in Nov 23 05:05:42 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e204: 6 total, 6 up, 6 in Nov 23 05:05:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:05:42 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1406713589' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:05:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:05:42 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1406713589' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:05:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e204 do_prune osdmap full prune enabled Nov 23 05:05:43 localhost nova_compute[280939]: 2025-11-23 10:05:43.108 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e205 e205: 6 total, 6 up, 6 in Nov 23 05:05:43 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e205: 6 total, 6 up, 6 in Nov 23 05:05:43 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9268685-6943-4f1d-b505-be8ba848d173", "format": "json"}]: dispatch Nov 23 05:05:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f9268685-6943-4f1d-b505-be8ba848d173, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f9268685-6943-4f1d-b505-be8ba848d173, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:43 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f9268685-6943-4f1d-b505-be8ba848d173' of type subvolume Nov 23 05:05:43 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:05:43.130+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f9268685-6943-4f1d-b505-be8ba848d173' of type subvolume Nov 23 05:05:43 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f9268685-6943-4f1d-b505-be8ba848d173", "force": true, "format": "json"}]: dispatch 
Nov 23 05:05:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:43 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f9268685-6943-4f1d-b505-be8ba848d173'' moved to trashcan Nov 23 05:05:43 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:05:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9268685-6943-4f1d-b505-be8ba848d173, vol_name:cephfs) < "" Nov 23 05:05:43 localhost podman[324253]: 2025-11-23 10:05:43.266135555 +0000 UTC m=+0.109937662 container exec 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 23 05:05:43 localhost podman[324253]: 2025-11-23 10:05:43.394388659 +0000 UTC m=+0.238190736 container exec_died 5792fc048ed517dcf589ea3c6b1f537bf3249f51a09ef84a0963180e0a683eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532584, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=553, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph) Nov 23 05:05:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 4 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 170 
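The `(95) Operation not supported` reply a few entries above is the mgr telling the caller that `fs clone status` only applies to subvolumes created as clones; the message itself says the operation "is not allowed on subvolume ... of type subvolume". In the log the driver then issues `fs subvolume rm`, and the "moved to trashcan" / "queuing job" lines show that removal is asynchronous: the command returns once the directory is renamed into the trash, and a background purge job reclaims the space later. A small sketch of how a caller might distinguish the two cases, under the assumptions of the previous sketch plus one more: that the `ceph` CLI propagates the mgr's errno (95 here) as its exit status.

```python
# Sketch only: treat ENOTSUP from `fs clone status` as "plain subvolume", then remove it.
import errno
import subprocess

VOL = "cephfs"
SUB = "ef1efc74-972d-4b02-871d-8dd0c1290f0a"   # subvolume name taken from the log, placeholder here

def is_clone(vol: str, sub: str) -> bool:
    """Return True if `fs clone status` answers, False on ENOTSUP (not a clone)."""
    res = subprocess.run(["ceph", "fs", "clone", "status", vol, sub],
                         capture_output=True, text=True)
    if res.returncode == 0:
        return True                     # clone status JSON is in res.stdout
    if res.returncode == errno.ENOTSUP: # 95 on Linux, matching the reply in the log
        return False
    res.check_returncode()              # any other failure is a real error

if not is_clone(VOL, SUB):
    # Asynchronous removal: returns after the subvolume lands in the trashcan;
    # the queued purge job deletes the data afterwards.
    subprocess.run(["ceph", "fs", "subvolume", "rm", VOL, SUB, "--force"], check=True)
```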
active+clean; 193 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 26 KiB/s wr, 89 op/s Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e205 do_prune osdmap full prune enabled Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e206 e206: 6 total, 6 up, 6 in Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e206: 6 total, 6 up, 6 in Nov 23 05:05:44 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:44 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "auth_id": "eve49", "tenant_id": "0dcc58766ad44287910595c5c8f14397", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:05:44 localhost 
ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, tenant_id:0dcc58766ad44287910595c5c8f14397, vol_name:cephfs) < "" Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Nov 23 05:05:44 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID eve49 with tenant 0dcc58766ad44287910595c5c8f14397 Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:05:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, tenant_id:0dcc58766ad44287910595c5c8f14397, vol_name:cephfs) < "" Nov 23 05:05:44 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "bdd7b00d-cad7-4915-8d2a-92e849730947_0d9276b9-0c8a-49a7-bc1f-6fadf5287cf8", "force": true, "format": "json"}]: dispatch Nov 23 05:05:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bdd7b00d-cad7-4915-8d2a-92e849730947_0d9276b9-0c8a-49a7-bc1f-6fadf5287cf8, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:44 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:44 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] 
Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bdd7b00d-cad7-4915-8d2a-92e849730947_0d9276b9-0c8a-49a7-bc1f-6fadf5287cf8, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:44 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "bdd7b00d-cad7-4915-8d2a-92e849730947", "force": true, "format": "json"}]: dispatch Nov 23 05:05:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bdd7b00d-cad7-4915-8d2a-92e849730947, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.4", "name": 
"osd_memory_target"} : dispatch Nov 23 05:05:44 localhost ceph-mgr[286671]: [cephadm INFO root] Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 05:05:44 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 05:05:44 localhost ceph-mgr[286671]: [cephadm INFO root] Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 23 05:05:44 localhost ceph-mgr[286671]: [cephadm INFO root] Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 05:05:44 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 05:05:44 localhost ceph-mgr[286671]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 23 05:05:44 localhost ceph-mgr[286671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 05:05:44 localhost ceph-mgr[286671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 23 05:05:44 localhost ceph-mgr[286671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 05:05:44 localhost ceph-mgr[286671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 05:05:44 localhost ceph-mgr[286671]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 05:05:44 localhost ceph-mgr[286671]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:05:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:05:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:05:45 localhost ceph-mon[293353]: 
log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:45 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 4d17014a-1523-48d1-b2f5-1f3960b5a0fc (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:05:45 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 4d17014a-1523-48d1-b2f5-1f3960b5a0fc (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:05:45 localhost ceph-mgr[286671]: [progress INFO root] Completed event 4d17014a-1523-48d1-b2f5-1f3960b5a0fc (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:05:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:05:45 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:05:45 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:45 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bdd7b00d-cad7-4915-8d2a-92e849730947, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 
172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:05:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 193 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 139 KiB/s rd, 80 KiB/s wr, 196 op/s Nov 23 05:05:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:46.110 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:05:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:46.111 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:05:46 localhost nova_compute[280939]: 2025-11-23 10:05:46.142 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ef1efc74-972d-4b02-871d-8dd0c1290f0a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:05:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ef1efc74-972d-4b02-871d-8dd0c1290f0a, vol_name:cephfs) < "" Nov 23 05:05:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e206 do_prune osdmap full prune enabled Nov 23 05:05:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e207 e207: 6 total, 6 up, 6 in Nov 23 05:05:46 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M Nov 23 05:05:46 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M Nov 23 05:05:46 localhost ceph-mon[293353]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M Nov 23 05:05:46 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below 
minimum 939524096 Nov 23 05:05:46 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Nov 23 05:05:46 localhost ceph-mon[293353]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 23 05:05:46 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e207: 6 total, 6 up, 6 in Nov 23 05:05:46 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ef1efc74-972d-4b02-871d-8dd0c1290f0a/.meta.tmp' Nov 23 05:05:46 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ef1efc74-972d-4b02-871d-8dd0c1290f0a/.meta.tmp' to config b'/volumes/_nogroup/ef1efc74-972d-4b02-871d-8dd0c1290f0a/.meta' Nov 23 05:05:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ef1efc74-972d-4b02-871d-8dd0c1290f0a, vol_name:cephfs) < "" Nov 23 05:05:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ef1efc74-972d-4b02-871d-8dd0c1290f0a", "format": "json"}]: dispatch Nov 23 05:05:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ef1efc74-972d-4b02-871d-8dd0c1290f0a, vol_name:cephfs) < "" Nov 23 05:05:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ef1efc74-972d-4b02-871d-8dd0c1290f0a, vol_name:cephfs) < "" Nov 23 05:05:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:05:46 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:05:46 localhost nova_compute[280939]: 2025-11-23 10:05:46.653 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:47 localhost podman[239764]: time="2025-11-23T10:05:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:05:47 localhost podman[239764]: @ - - [23/Nov/2025:10:05:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:05:47 localhost ovn_metadata_agent[159410]: 2025-11-23 10:05:47.113 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:05:47 localhost podman[239764]: @ - - [23/Nov/2025:10:05:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18733 "" "Go-http-client/1.1" Nov 23 05:05:47 localhost ceph-mon[293353]: 
mon.np0005532584@0(leader).osd e207 do_prune osdmap full prune enabled Nov 23 05:05:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e208 e208: 6 total, 6 up, 6 in Nov 23 05:05:47 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e208: 6 total, 6 up, 6 in Nov 23 05:05:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 193 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 131 KiB/s rd, 76 KiB/s wr, 184 op/s Nov 23 05:05:48 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "auth_id": "eve48", "tenant_id": "0dcc58766ad44287910595c5c8f14397", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:05:48 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, tenant_id:0dcc58766ad44287910595c5c8f14397, vol_name:cephfs) < "" Nov 23 05:05:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Nov 23 05:05:48 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Nov 23 05:05:48 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID eve48 with tenant 0dcc58766ad44287910595c5c8f14397 Nov 23 05:05:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e208 do_prune osdmap full prune enabled Nov 23 05:05:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e209 e209: 6 total, 6 up, 6 in Nov 23 05:05:48 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e209: 6 total, 6 up, 6 in Nov 23 05:05:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:05:48 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:05:48 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:05:48 localhost nova_compute[280939]: 2025-11-23 10:05:48.146 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:48 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, tenant_id:0dcc58766ad44287910595c5c8f14397, vol_name:cephfs) < "" Nov 23 05:05:48 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "faed27e0-c4c3-4299-836c-04ce6fc7a80d_d597c998-f063-4969-a445-a54fe8b5c401", "force": true, "format": "json"}]: dispatch Nov 23 05:05:48 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faed27e0-c4c3-4299-836c-04ce6fc7a80d_d597c998-f063-4969-a445-a54fe8b5c401, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:48 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:48 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:48 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faed27e0-c4c3-4299-836c-04ce6fc7a80d_d597c998-f063-4969-a445-a54fe8b5c401, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:48 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "faed27e0-c4c3-4299-836c-04ce6fc7a80d", "force": true, "format": "json"}]: dispatch Nov 23 05:05:48 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faed27e0-c4c3-4299-836c-04ce6fc7a80d, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:05:48 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/438476046' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:05:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:05:48 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/438476046' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:05:48 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:48 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:48 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faed27e0-c4c3-4299-836c-04ce6fc7a80d, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:48 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Nov 23 05:05:48 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:05:48 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:05:48 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:05:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:05:48 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:05:49 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3122516944' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:05:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:05:49 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3122516944' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:05:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 194 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 181 KiB/s rd, 160 KiB/s wr, 262 op/s Nov 23 05:05:49 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:05:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ef1efc74-972d-4b02-871d-8dd0c1290f0a", "format": "json"}]: dispatch Nov 23 05:05:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ef1efc74-972d-4b02-871d-8dd0c1290f0a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ef1efc74-972d-4b02-871d-8dd0c1290f0a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:49 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:05:49.881+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ef1efc74-972d-4b02-871d-8dd0c1290f0a' of type subvolume Nov 23 05:05:49 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ef1efc74-972d-4b02-871d-8dd0c1290f0a' of type subvolume Nov 23 05:05:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ef1efc74-972d-4b02-871d-8dd0c1290f0a", "force": true, "format": "json"}]: dispatch Nov 23 05:05:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ef1efc74-972d-4b02-871d-8dd0c1290f0a, vol_name:cephfs) < "" Nov 23 05:05:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ef1efc74-972d-4b02-871d-8dd0c1290f0a'' moved to trashcan Nov 23 05:05:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:05:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ef1efc74-972d-4b02-871d-8dd0c1290f0a, vol_name:cephfs) < "" Nov 23 05:05:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "23a1953a-4602-4081-b659-88394c7eeb71", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:05:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:23a1953a-4602-4081-b659-88394c7eeb71, vol_name:cephfs) < "" Nov 23 05:05:50 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/23a1953a-4602-4081-b659-88394c7eeb71/.meta.tmp' Nov 23 05:05:50 localhost ceph-mgr[286671]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/23a1953a-4602-4081-b659-88394c7eeb71/.meta.tmp' to config b'/volumes/_nogroup/23a1953a-4602-4081-b659-88394c7eeb71/.meta' Nov 23 05:05:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:23a1953a-4602-4081-b659-88394c7eeb71, vol_name:cephfs) < "" Nov 23 05:05:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "23a1953a-4602-4081-b659-88394c7eeb71", "format": "json"}]: dispatch Nov 23 05:05:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:23a1953a-4602-4081-b659-88394c7eeb71, vol_name:cephfs) < "" Nov 23 05:05:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:23a1953a-4602-4081-b659-88394c7eeb71, vol_name:cephfs) < "" Nov 23 05:05:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:05:50 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:05:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "b6b84a30-76d0-4c7a-ba18-7c9a32f848c3_9e4afb1c-09af-4f97-babc-10dde86a6298", "force": true, "format": "json"}]: dispatch Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b6b84a30-76d0-4c7a-ba18-7c9a32f848c3_9e4afb1c-09af-4f97-babc-10dde86a6298, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b6b84a30-76d0-4c7a-ba18-7c9a32f848c3_9e4afb1c-09af-4f97-babc-10dde86a6298, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "b6b84a30-76d0-4c7a-ba18-7c9a32f848c3", "force": true, "format": "json"}]: dispatch Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:b6b84a30-76d0-4c7a-ba18-7c9a32f848c3, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b6b84a30-76d0-4c7a-ba18-7c9a32f848c3, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 194 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 88 KiB/s wr, 100 op/s Nov 23 05:05:51 localhost nova_compute[280939]: 2025-11-23 10:05:51.705 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "auth_id": "eve48", "format": "json"}]: dispatch Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Nov 23 05:05:51 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Nov 23 05:05:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) Nov 23 05:05:51 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Nov 23 05:05:51 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "auth_id": "eve48", "format": "json"}]: dispatch Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, 
vol_name:cephfs) < "" Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80 Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:05:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:52 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Nov 23 05:05:52 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Nov 23 05:05:52 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Nov 23 05:05:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e209 do_prune osdmap full prune enabled Nov 23 05:05:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e210 e210: 6 total, 6 up, 6 in Nov 23 05:05:53 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e210: 6 total, 6 up, 6 in Nov 23 05:05:53 localhost nova_compute[280939]: 2025-11-23 10:05:53.176 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c801417a-e8bf-4ab6-8efa-c611d6d1d111", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta.tmp' Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta.tmp' to config b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta' Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:05:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c801417a-e8bf-4ab6-8efa-c611d6d1d111", "format": "json"}]: dispatch Nov 23 05:05:53 localhost 
ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:05:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:05:53 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 05:05:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 05:05:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 194 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 88 KiB/s wr, 99 op/s Nov 23 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:05:53 localhost systemd[1]: tmp-crun.7OGweb.mount: Deactivated successfully. 
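The audit and volumes-module entries above record the Manila CephFS driver (client.openstack) walking a subvolume through its lifecycle: create with a byte-size quota, an isolated RADOS namespace and mode 0755, then getpath to obtain the export path, and finally a forced rm, which the mgr handles by moving the directory to the trashcan and queuing an async purge job. A minimal sketch of the same sequence driven through the ceph CLI; the subprocess wrapper and the reuse of a share ID from the log are illustrative, and the exact flag spellings may vary slightly between Ceph releases.

```python
import json
import subprocess

def ceph(*args):
    """Run a ceph CLI command and return parsed JSON output (assumes a
    reachable cluster and a keyring for the calling client)."""
    out = subprocess.run(
        ["ceph", *args, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out) if out.strip() else None

share = "23a1953a-4602-4081-b659-88394c7eeb71"   # share ID taken from the log

# fs subvolume create: 4 GiB quota, isolated RADOS namespace, mode 0755,
# mirroring the logged command payload
ceph("fs", "subvolume", "create", "cephfs", share,
     "--size", str(4 * 1024**3), "--namespace-isolated", "--mode", "0755")

# fs subvolume getpath: returns /volumes/_nogroup/<share>/<uuid>
path = subprocess.run(
    ["ceph", "fs", "subvolume", "getpath", "cephfs", share],
    check=True, capture_output=True, text=True,
).stdout.strip()
print("export path:", path)

# fs subvolume rm --force: the mgr moves the subvolume to the trashcan and
# queues an async purge job, exactly as the volumes.fs.async_job lines show
ceph("fs", "subvolume", "rm", "cephfs", share, "--force")
```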
Nov 23 05:05:53 localhost podman[324459]: 2025-11-23 10:05:53.910151745 +0000 UTC m=+0.095035528 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0) Nov 23 05:05:53 localhost podman[324459]: 2025-11-23 10:05:53.91939115 +0000 UTC m=+0.104274983 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 05:05:53 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:05:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e210 do_prune osdmap full prune enabled Nov 23 05:05:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e211 e211: 6 total, 6 up, 6 in Nov 23 05:05:54 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e211: 6 total, 6 up, 6 in Nov 23 05:05:54 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e48: np0005532584.naxwxy(active, since 10m), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 05:05:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "73f8c29f-f829-4d6f-b0a1-a020eee73bb5", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:73f8c29f-f829-4d6f-b0a1-a020eee73bb5, vol_name:cephfs) < "" Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/73f8c29f-f829-4d6f-b0a1-a020eee73bb5/.meta.tmp' Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/73f8c29f-f829-4d6f-b0a1-a020eee73bb5/.meta.tmp' to config b'/volumes/_nogroup/73f8c29f-f829-4d6f-b0a1-a020eee73bb5/.meta' Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:73f8c29f-f829-4d6f-b0a1-a020eee73bb5, vol_name:cephfs) < "" Nov 23 05:05:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "73f8c29f-f829-4d6f-b0a1-a020eee73bb5", "format": "json"}]: dispatch Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73f8c29f-f829-4d6f-b0a1-a020eee73bb5, vol_name:cephfs) < "" Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73f8c29f-f829-4d6f-b0a1-a020eee73bb5, vol_name:cephfs) < "" Nov 23 05:05:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:05:54 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:05:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", 
"snap_name": "89bb6ad8-e75c-4c65-a495-a2d153e513e0_72e16638-ea70-4008-a6f5-5d7b1af051d5", "force": true, "format": "json"}]: dispatch Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:89bb6ad8-e75c-4c65-a495-a2d153e513e0_72e16638-ea70-4008-a6f5-5d7b1af051d5, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:89bb6ad8-e75c-4c65-a495-a2d153e513e0_72e16638-ea70-4008-a6f5-5d7b1af051d5, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "snap_name": "89bb6ad8-e75c-4c65-a495-a2d153e513e0", "force": true, "format": "json"}]: dispatch Nov 23 05:05:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:89bb6ad8-e75c-4c65-a495-a2d153e513e0, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:55 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' Nov 23 05:05:55 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta.tmp' to config b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68/.meta' Nov 23 05:05:55 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:89bb6ad8-e75c-4c65-a495-a2d153e513e0, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:55 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "auth_id": "eve47", "tenant_id": "0dcc58766ad44287910595c5c8f14397", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:05:55 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, tenant_id:0dcc58766ad44287910595c5c8f14397, vol_name:cephfs) < "" Nov 23 05:05:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Nov 23 05:05:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 
172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Nov 23 05:05:55 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID eve47 with tenant 0dcc58766ad44287910595c5c8f14397 Nov 23 05:05:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:05:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:05:55 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:05:55 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, tenant_id:0dcc58766ad44287910595c5c8f14397, vol_name:cephfs) < "" Nov 23 05:05:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v443: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 194 MiB data, 1004 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 131 KiB/s wr, 90 op/s Nov 23 05:05:56 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Nov 23 05:05:56 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:05:56 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:05:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": 
"cephfs", "sub_name": "c801417a-e8bf-4ab6-8efa-c611d6d1d111", "snap_name": "b2c90996-bd7d-4844-bb02-0209ec15d89a", "format": "json"}]: dispatch Nov 23 05:05:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b2c90996-bd7d-4844-bb02-0209ec15d89a, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:05:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b2c90996-bd7d-4844-bb02-0209ec15d89a, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:05:56 localhost nova_compute[280939]: 2025-11-23 10:05:56.730 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 194 MiB data, 1004 MiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 55 KiB/s wr, 8 op/s Nov 23 05:05:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "23a1953a-4602-4081-b659-88394c7eeb71", "format": "json"}]: dispatch Nov 23 05:05:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:23a1953a-4602-4081-b659-88394c7eeb71, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:23a1953a-4602-4081-b659-88394c7eeb71, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:57 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:05:57.609+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '23a1953a-4602-4081-b659-88394c7eeb71' of type subvolume Nov 23 05:05:57 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '23a1953a-4602-4081-b659-88394c7eeb71' of type subvolume Nov 23 05:05:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "23a1953a-4602-4081-b659-88394c7eeb71", "force": true, "format": "json"}]: dispatch Nov 23 05:05:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:23a1953a-4602-4081-b659-88394c7eeb71, vol_name:cephfs) < "" Nov 23 05:05:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/23a1953a-4602-4081-b659-88394c7eeb71'' moved to trashcan Nov 23 05:05:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:05:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:23a1953a-4602-4081-b659-88394c7eeb71, vol_name:cephfs) < "" Nov 23 05:05:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 
343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:05:58 localhost nova_compute[280939]: 2025-11-23 10:05:58.216 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:05:58 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "format": "json"}]: dispatch Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:05:58 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:05:58.255+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '307a0389-800d-4c9e-95ec-9b17f1b7da68' of type subvolume Nov 23 05:05:58 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '307a0389-800d-4c9e-95ec-9b17f1b7da68' of type subvolume Nov 23 05:05:58 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "307a0389-800d-4c9e-95ec-9b17f1b7da68", "force": true, "format": "json"}]: dispatch Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/307a0389-800d-4c9e-95ec-9b17f1b7da68'' moved to trashcan Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:307a0389-800d-4c9e-95ec-9b17f1b7da68, vol_name:cephfs) < "" Nov 23 05:05:58 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "auth_id": "eve47", "format": "json"}]: dispatch Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Nov 23 05:05:58 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Nov 23 05:05:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth 
rm", "entity": "client.eve47"} v 0) Nov 23 05:05:58 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Nov 23 05:05:58 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:58 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "auth_id": "eve47", "format": "json"}]: dispatch Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80 Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:05:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:05:59 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e211 do_prune osdmap full prune enabled Nov 23 05:05:59 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e212 e212: 6 total, 6 up, 6 in Nov 23 05:05:59 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e212: 6 total, 6 up, 6 in Nov 23 05:05:59 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Nov 23 05:05:59 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Nov 23 05:05:59 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Nov 23 05:05:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 195 MiB data, 1006 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 172 KiB/s wr, 49 op/s Nov 23 05:05:59 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "c801417a-e8bf-4ab6-8efa-c611d6d1d111", "snap_name": "b2c90996-bd7d-4844-bb02-0209ec15d89a", "target_sub_name": "b2bf9dde-7e11-4be3-ba03-328acaaba40c", "format": "json"}]: dispatch Nov 23 05:05:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:b2c90996-bd7d-4844-bb02-0209ec15d89a, 
sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, target_sub_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, vol_name:cephfs) < "" Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta.tmp' Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta.tmp' to config b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta' Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.clone_index] tracking-id c178d57e-3e96-41b8-bd33-040aaf759e91 for path b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c' Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta.tmp' Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta.tmp' to config b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta' Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:b2c90996-bd7d-4844-bb02-0209ec15d89a, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, target_sub_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, vol_name:cephfs) < "" Nov 23 05:06:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b2bf9dde-7e11-4be3-ba03-328acaaba40c", "format": "json"}]: dispatch Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:00.093+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:00.093+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:00.093+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:00.093+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost 
ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:00.093+0000 7f9cccb12640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, b2bf9dde-7e11-4be3-ba03-328acaaba40c) Nov 23 05:06:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:00.126+0000 7f9ccdb14640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:00.126+0000 7f9ccdb14640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:00.126+0000 7f9ccdb14640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:00.126+0000 7f9ccdb14640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:00.126+0000 7f9ccdb14640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, b2bf9dde-7e11-4be3-ba03-328acaaba40c) -- by 0 seconds Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta.tmp' Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta.tmp' to config b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta' Nov 23 05:06:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:06:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1171159827' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:06:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:06:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1171159827' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:06:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "73f8c29f-f829-4d6f-b0a1-a020eee73bb5", "format": "json"}]: dispatch Nov 23 05:06:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:73f8c29f-f829-4d6f-b0a1-a020eee73bb5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 195 MiB data, 1006 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 137 KiB/s wr, 39 op/s Nov 23 05:06:01 localhost nova_compute[280939]: 2025-11-23 10:06:01.769 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:06:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:06:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
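The authorize/deauthorize entries above show how share access is granted: `fs subvolume authorize` makes the mgr issue `auth get-or-create` for the requested auth_id with an mds cap restricted to the subvolume path, an osd cap scoped to the subvolume's fsvolumens_* RADOS namespace, and `mon allow r`; deauthorize later removes the identity with `auth rm`, and `fs subvolume evict` drops any remaining client sessions. A minimal sketch of the same `auth get-or-create` payload sent through librados, assuming the python-rados bindings and an admin keyring; the caps strings are copied from the audit entries.

```python
import json
import rados

sub_path = ("/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/"
            "bb499d3a-704f-460c-a9a3-9c6909b5fa80")

# Same command the mgr dispatched to the mon for client.eve47 in the log.
cmd = {
    "prefix": "auth get-or-create",
    "entity": "client.eve47",
    "caps": [
        "mds", f"allow rw path={sub_path}",
        "osd", "allow rw pool=manila_data "
               "namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362",
        "mon", "allow r",
    ],
    "format": "json",
}

with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin") as cluster:
    ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
    if ret != 0:
        raise RuntimeError(errs)
    # JSON output is a list with the entity and its generated key
    keyring = json.loads(outbuf)
    print(keyring[0]["entity"], keyring[0]["key"])
```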
Nov 23 05:06:01 localhost podman[324504]: 2025-11-23 10:06:01.897944769 +0000 UTC m=+0.078449117 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:06:01 localhost podman[324504]: 2025-11-23 10:06:01.910316511 +0000 UTC m=+0.090820849 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:06:01 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
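The node_exporter container above runs with the systemd collector limited by a unit-include regex, so only EDPM, Open vSwitch, virt and rsyslog units are exported. A tiny sketch of which unit names that filter keeps; the unit names are illustrative, and node_exporter anchors the pattern internally, which `fullmatch` approximates here.

```python
import re

# --collector.systemd.unit-include value from the container config above
UNIT_INCLUDE = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

units = [
    "edpm_nova_compute.service",   # illustrative EDPM-managed unit
    "ovs-vswitchd.service",
    "openvswitch.service",
    "virtqemud.service",
    "rsyslog.service",
    "sshd.service",                # does not match: dropped from the export
]
for unit in units:
    print(unit, "exported" if UNIT_INCLUDE.fullmatch(unit) else "skipped")
```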
Nov 23 05:06:01 localhost podman[324505]: 2025-11-23 10:06:01.958959329 +0000 UTC m=+0.135618308 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:06:02 localhost podman[324505]: 2025-11-23 10:06:02.020478034 +0000 UTC m=+0.197137003 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 23 05:06:02 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
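Each `Started /usr/bin/podman healthcheck run <id>` / `health_status=healthy` / `exec_died` / `Deactivated successfully` group above is a transient systemd unit firing the healthcheck configured in the container's config_data (the `/openstack/healthcheck` script mounted from /var/lib/openstack/healthchecks). A minimal sketch that triggers the same check by hand with the podman CLI and reads the result from the exit code; the container names are the ones seen in these entries.

```python
import subprocess

def container_healthy(name: str) -> bool:
    """Run the container's configured healthcheck once, the same way the
    transient 'podman healthcheck run <id>' units in the log do.
    Exit code 0 means healthy; non-zero means the check failed."""
    result = subprocess.run(
        ["podman", "healthcheck", "run", name],
        capture_output=True, text=True,
    )
    return result.returncode == 0

for name in ("ovn_metadata_agent", "node_exporter", "ovn_controller", "multipathd"):
    print(name, "healthy" if container_healthy(name) else "unhealthy")
```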
Nov 23 05:06:02 localhost podman[324503]: 2025-11-23 10:06:02.024069506 +0000 UTC m=+0.207553636 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:06:02 localhost podman[324503]: 2025-11-23 10:06:02.108549358 +0000 UTC m=+0.292033498 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd) Nov 23 05:06:02 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:06:02 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e49: np0005532584.naxwxy(active, since 10m), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 05:06:02 localhost systemd[1]: tmp-crun.IclVV2.mount: Deactivated successfully. Nov 23 05:06:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e212 do_prune osdmap full prune enabled Nov 23 05:06:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e213 e213: 6 total, 6 up, 6 in Nov 23 05:06:03 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e213: 6 total, 6 up, 6 in Nov 23 05:06:03 localhost nova_compute[280939]: 2025-11-23 10:06:03.244 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:73f8c29f-f829-4d6f-b0a1-a020eee73bb5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:03 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:03.323+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73f8c29f-f829-4d6f-b0a1-a020eee73bb5' of type subvolume Nov 23 05:06:03 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73f8c29f-f829-4d6f-b0a1-a020eee73bb5' of type subvolume Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.snap/b2c90996-bd7d-4844-bb02-0209ec15d89a/c06fad84-7b4b-40de-9caa-34bb11443084' to b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/b696aba3-0429-4b6e-b17c-0aac3d0a1493' Nov 23 05:06:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "73f8c29f-f829-4d6f-b0a1-a020eee73bb5", "force": true, "format": "json"}]: dispatch Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73f8c29f-f829-4d6f-b0a1-a020eee73bb5, vol_name:cephfs) < "" Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/73f8c29f-f829-4d6f-b0a1-a020eee73bb5'' moved to trashcan Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73f8c29f-f829-4d6f-b0a1-a020eee73bb5, vol_name:cephfs) < "" Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config 
b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta.tmp' Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta.tmp' to config b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta' Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.clone_index] untracking c178d57e-3e96-41b8-bd33-040aaf759e91 Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta.tmp' Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta.tmp' to config b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta' Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta.tmp' Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta.tmp' to config b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c/.meta' Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, b2bf9dde-7e11-4be3-ba03-328acaaba40c) Nov 23 05:06:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 195 MiB data, 1006 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 83 KiB/s wr, 30 op/s Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0. 
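The entries around here trace the full clone path: `fs subvolume snapshot create`, then `fs subvolume snapshot clone` (the mgr records a clone-index tracking ID and queues an async job), the async_cloner copying data from the snapshot into the target subvolume, and finally the index entry being untracked when the clone finishes. The earlier `(95) Operation not supported ... 'clone-status' is not allowed on subvolume ... of type subvolume` replies are the expected answer when clone status is asked of a plain, non-clone subvolume. A minimal polling sketch of that workflow, assuming the ceph CLI and reusing the names from the log; the exact status JSON layout may differ slightly by release.

```python
import json
import subprocess
import time

VOL = "cephfs"
SRC = "c801417a-e8bf-4ab6-8efa-c611d6d1d111"
SNAP = "b2c90996-bd7d-4844-bb02-0209ec15d89a"
CLONE = "b2bf9dde-7e11-4be3-ba03-328acaaba40c"

def ceph_json(*args):
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out) if out.strip() else None

# snapshot the source subvolume, then clone the snapshot into a new subvolume
ceph_json("fs", "subvolume", "snapshot", "create", VOL, SRC, SNAP)
ceph_json("fs", "subvolume", "snapshot", "clone", VOL, SRC, SNAP, CLONE)

# poll the async cloner; the state moves pending -> in-progress -> complete
while True:
    status = ceph_json("fs", "clone", "status", VOL, CLONE)["status"]
    if status["state"] in ("complete", "failed"):
        break
    time.sleep(2)

# once the clone is done the source snapshot can be removed, as in the log
ceph_json("fs", "subvolume", "snapshot", "rm", VOL, SRC, SNAP, "--force")
print("clone", CLONE, "finished with state", status["state"])
```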
Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.540641) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58 Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363540678, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2627, "num_deletes": 276, "total_data_size": 3618580, "memory_usage": 3763808, "flush_reason": "Manual Compaction"} Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363556716, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3552406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30854, "largest_seqno": 33480, "table_properties": {"data_size": 3540546, "index_size": 7725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27806, "raw_average_key_size": 22, "raw_value_size": 3515998, "raw_average_value_size": 2879, "num_data_blocks": 323, "num_entries": 1221, "num_filter_entries": 1221, "num_deletions": 276, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892236, "oldest_key_time": 1763892236, "file_creation_time": 1763892363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}} Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 16127 microseconds, and 7850 cpu microseconds. Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
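[Editor's note] The rocksdb messages interleaved here come from the ceph-mon store at /var/lib/ceph/mon/ceph-np0005532584/store.db: a memtable flush (JOB 33) above and, just below, a manual compaction into L6 (JOB 34). Each step also emits a machine-readable EVENT_LOG_v1 JSON payload, so flush and compaction sizes can be summarized straight from a journal export. The following is a small illustrative sketch (standard library only; field names are taken from the entries shown here):

```python
# Illustrative sketch: pull RocksDB EVENT_LOG_v1 payloads out of a journal
# export and summarize flush/compaction activity. Field names match the
# entries around this point in the log.
import json
import sys

MARK = "EVENT_LOG_v1 "
dec = json.JSONDecoder()

for line in sys.stdin:
    pos = line.find(MARK)
    while pos != -1:
        try:
            ev, end = dec.raw_decode(line, pos + len(MARK))  # parse one JSON object
        except ValueError:
            break
        if ev.get("event") == "table_file_creation":
            print(f"job {ev['job']}: wrote file #{ev['file_number']}, "
                  f"{ev['file_size']} bytes")
        elif ev.get("event") == "compaction_finished":
            print(f"job {ev['job']}: compacted to L{ev['output_level']}, "
                  f"{ev['total_output_size']} bytes in "
                  f"{ev['compaction_time_micros']} us")
        pos = line.find(MARK, end)
```

Feed it the exported log on stdin; anything that is not an EVENT_LOG_v1 payload is ignored.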
Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.556767) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3552406 bytes OK Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.556791) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.558459) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.558483) EVENT_LOG_v1 {"time_micros": 1763892363558474, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.558506) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3606863, prev total WAL file size 3606863, number of live WAL files 2. Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.559719) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end) Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3469KB)], [57(14MB)] Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363559773, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 19054841, "oldest_snapshot_seqno": -1} Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 13246 keys, 17876663 bytes, temperature: kUnknown Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363641049, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 17876663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17800939, "index_size": 41511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33157, "raw_key_size": 356707, "raw_average_key_size": 26, "raw_value_size": 17575126, "raw_average_value_size": 1326, "num_data_blocks": 1553, "num_entries": 13246, "num_filter_entries": 13246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, 
"comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}} Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.641338) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 17876663 bytes Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.643706) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.1 rd, 219.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 14.8 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(10.4) write-amplify(5.0) OK, records in: 13811, records dropped: 565 output_compression: NoCompression Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.643735) EVENT_LOG_v1 {"time_micros": 1763892363643723, "job": 34, "event": "compaction_finished", "compaction_time_micros": 81383, "compaction_time_cpu_micros": 49746, "output_level": 6, "num_output_files": 1, "total_output_size": 17876663, "num_input_records": 13811, "num_output_records": 13246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363644388, "job": 34, "event": "table_file_deletion", "file_number": 59} Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363646490, "job": 34, "event": "table_file_deletion", "file_number": 57} Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.559606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.646564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.646572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.646575) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.646577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:06:03 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:06:03.646581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:06:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "auth_id": "eve49", "format": "json"}]: dispatch Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:06:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Nov 23 05:06:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Nov 23 05:06:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) Nov 23 05:06:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Nov 23 05:06:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:06:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "auth_id": "eve49", "format": "json"}]: dispatch Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80 Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:06:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:06:04 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": 
"23db4718-314d-42ce-b54b-c4702b2fa362", "format": "json"}]: dispatch Nov 23 05:06:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:23db4718-314d-42ce-b54b-c4702b2fa362, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:23db4718-314d-42ce-b54b-c4702b2fa362, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:04 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:04.057+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '23db4718-314d-42ce-b54b-c4702b2fa362' of type subvolume Nov 23 05:06:04 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '23db4718-314d-42ce-b54b-c4702b2fa362' of type subvolume Nov 23 05:06:04 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "23db4718-314d-42ce-b54b-c4702b2fa362", "force": true, "format": "json"}]: dispatch Nov 23 05:06:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:06:04 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362'' moved to trashcan Nov 23 05:06:04 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:23db4718-314d-42ce-b54b-c4702b2fa362, vol_name:cephfs) < "" Nov 23 05:06:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Nov 23 05:06:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Nov 23 05:06:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Nov 23 05:06:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 195 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 155 KiB/s wr, 62 op/s Nov 23 05:06:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:06:06 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1289491411' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:06:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:06:06 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1289491411' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:06:06 localhost ovn_controller[153771]: 2025-11-23T10:06:06Z|00335|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Nov 23 05:06:06 localhost openstack_network_exporter[241732]: ERROR 10:06:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:06:06 localhost openstack_network_exporter[241732]: ERROR 10:06:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:06:06 localhost openstack_network_exporter[241732]: ERROR 10:06:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:06:06 localhost openstack_network_exporter[241732]: ERROR 10:06:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:06:06 localhost openstack_network_exporter[241732]: Nov 23 05:06:06 localhost openstack_network_exporter[241732]: ERROR 10:06:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:06:06 localhost openstack_network_exporter[241732]: Nov 23 05:06:06 localhost nova_compute[280939]: 2025-11-23 10:06:06.770 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 195 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 148 KiB/s wr, 60 op/s Nov 23 05:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:06:07 localhost podman[324567]: 2025-11-23 10:06:07.89393386 +0000 UTC m=+0.078109327 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6) Nov 23 05:06:07 localhost podman[324567]: 2025-11-23 10:06:07.905719853 +0000 UTC m=+0.089895320 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter) Nov 23 05:06:07 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 05:06:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e213 do_prune osdmap full prune enabled Nov 23 05:06:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e214 e214: 6 total, 6 up, 6 in Nov 23 05:06:08 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e214: 6 total, 6 up, 6 in Nov 23 05:06:08 localhost nova_compute[280939]: 2025-11-23 10:06:08.274 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dfbe99c1-496c-4355-8746-6391072f4d2c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:06:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dfbe99c1-496c-4355-8746-6391072f4d2c, vol_name:cephfs) < "" Nov 23 05:06:09 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dfbe99c1-496c-4355-8746-6391072f4d2c/.meta.tmp' Nov 23 05:06:09 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dfbe99c1-496c-4355-8746-6391072f4d2c/.meta.tmp' to config b'/volumes/_nogroup/dfbe99c1-496c-4355-8746-6391072f4d2c/.meta' Nov 23 05:06:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dfbe99c1-496c-4355-8746-6391072f4d2c, vol_name:cephfs) < "" Nov 23 05:06:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dfbe99c1-496c-4355-8746-6391072f4d2c", "format": "json"}]: dispatch Nov 23 05:06:09 localhost ceph-mgr[286671]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dfbe99c1-496c-4355-8746-6391072f4d2c, vol_name:cephfs) < "" Nov 23 05:06:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dfbe99c1-496c-4355-8746-6391072f4d2c, vol_name:cephfs) < "" Nov 23 05:06:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:06:09 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:06:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 195 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 98 KiB/s wr, 99 op/s Nov 23 05:06:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:09.746 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:06:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:09.746 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:06:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:09.746 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:06:10 localhost nova_compute[280939]: 2025-11-23 10:06:10.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:06:11 localhost nova_compute[280939]: 2025-11-23 10:06:11.134 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:06:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 195 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 93 KiB/s wr, 95 op/s Nov 23 05:06:11 localhost nova_compute[280939]: 2025-11-23 10:06:11.797 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a415431f-f058-44aa-a0b3-ca2c8202775f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:06:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a415431f-f058-44aa-a0b3-ca2c8202775f/.meta.tmp' Nov 23 05:06:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a415431f-f058-44aa-a0b3-ca2c8202775f/.meta.tmp' to config b'/volumes/_nogroup/a415431f-f058-44aa-a0b3-ca2c8202775f/.meta' Nov 23 05:06:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:06:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:06:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a415431f-f058-44aa-a0b3-ca2c8202775f", "format": "json"}]: dispatch Nov 23 05:06:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:06:12 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:06:12 localhost podman[324586]: 2025-11-23 10:06:12.914340685 +0000 UTC m=+0.096715491 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:06:12 localhost podman[324586]: 2025-11-23 10:06:12.925404586 +0000 UTC m=+0.107779382 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Nov 23 05:06:12 localhost podman[324587]: 2025-11-23 10:06:12.961759176 +0000 UTC m=+0.141822980 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:06:12 localhost podman[324587]: 2025-11-23 10:06:12.969305719 +0000 UTC m=+0.149369513 container exec_died 
a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 05:06:12 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 05:06:12 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:06:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:13 localhost nova_compute[280939]: 2025-11-23 10:06:13.303 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 195 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 78 KiB/s wr, 79 op/s Nov 23 05:06:14 localhost nova_compute[280939]: 2025-11-23 10:06:14.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:06:14 localhost nova_compute[280939]: 2025-11-23 10:06:14.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:06:14 localhost nova_compute[280939]: 2025-11-23 10:06:14.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:06:14 localhost nova_compute[280939]: 2025-11-23 10:06:14.145 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:06:14 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b0303be3-5e23-424d-935b-a23f10085dfe", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:06:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, vol_name:cephfs) < "" Nov 23 05:06:14 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/.meta.tmp' Nov 23 05:06:14 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/.meta.tmp' to config b'/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/.meta' Nov 23 05:06:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, vol_name:cephfs) < "" Nov 23 05:06:14 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b0303be3-5e23-424d-935b-a23f10085dfe", "format": "json"}]: dispatch Nov 23 05:06:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, vol_name:cephfs) < "" Nov 23 05:06:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, vol_name:cephfs) < "" Nov 23 05:06:14 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:06:14 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:06:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 195 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 29 KiB/s wr, 55 op/s Nov 23 05:06:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b2bf9dde-7e11-4be3-ba03-328acaaba40c", "format": "json"}]: dispatch Nov 23 05:06:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:16 localhost nova_compute[280939]: 2025-11-23 10:06:16.838 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:06:17 localhost ceph-mon[293353]: 
log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/439873091' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:06:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:06:17 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/439873091' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:06:17 localhost podman[239764]: time="2025-11-23T10:06:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:06:17 localhost podman[239764]: @ - - [23/Nov/2025:10:06:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:06:17 localhost nova_compute[280939]: 2025-11-23 10:06:17.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:06:17 localhost podman[239764]: @ - - [23/Nov/2025:10:06:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18735 "" "Go-http-client/1.1" Nov 23 05:06:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 195 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 29 KiB/s wr, 55 op/s Nov 23 05:06:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:18 localhost nova_compute[280939]: 2025-11-23 10:06:18.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:06:18 localhost nova_compute[280939]: 2025-11-23 10:06:18.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:06:18 localhost nova_compute[280939]: 2025-11-23 10:06:18.328 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b2bf9dde-7e11-4be3-ba03-328acaaba40c", "format": "json"}]: dispatch Nov 23 05:06:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, vol_name:cephfs) < "" Nov 23 05:06:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, vol_name:cephfs) < "" Nov 23 05:06:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:06:18 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:06:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a415431f-f058-44aa-a0b3-ca2c8202775f", "snap_name": "fa219110-ac84-4caf-a8ed-b4a2487e0ae3", "format": "json"}]: dispatch Nov 23 05:06:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fa219110-ac84-4caf-a8ed-b4a2487e0ae3, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fa219110-ac84-4caf-a8ed-b4a2487e0ae3, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b0303be3-5e23-424d-935b-a23f10085dfe", "auth_id": "tempest-cephx-id-710186636", "tenant_id": "ac0f52e853194f9ea9d27590165c645d", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:06:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-710186636, format:json, prefix:fs subvolume authorize, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, tenant_id:ac0f52e853194f9ea9d27590165c645d, vol_name:cephfs) < "" Nov 23 05:06:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-710186636", "format": "json"} v 0) Nov 23 05:06:18 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", 
"entity": "client.tempest-cephx-id-710186636", "format": "json"} : dispatch Nov 23 05:06:18 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID tempest-cephx-id-710186636 with tenant ac0f52e853194f9ea9d27590165c645d Nov 23 05:06:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-710186636", "caps": ["mds", "allow rw path=/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/5ae76225-892a-444c-90ca-4662bf6a5b64", "osd", "allow rw pool=manila_data namespace=fsvolumens_b0303be3-5e23-424d-935b-a23f10085dfe", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:06:18 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-710186636", "caps": ["mds", "allow rw path=/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/5ae76225-892a-444c-90ca-4662bf6a5b64", "osd", "allow rw pool=manila_data namespace=fsvolumens_b0303be3-5e23-424d-935b-a23f10085dfe", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:06:18 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-710186636", "caps": ["mds", "allow rw path=/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/5ae76225-892a-444c-90ca-4662bf6a5b64", "osd", "allow rw pool=manila_data namespace=fsvolumens_b0303be3-5e23-424d-935b-a23f10085dfe", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:06:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-710186636, format:json, prefix:fs subvolume authorize, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, tenant_id:ac0f52e853194f9ea9d27590165c645d, vol_name:cephfs) < "" Nov 23 05:06:19 localhost nova_compute[280939]: 2025-11-23 10:06:19.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:06:19 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b0303be3-5e23-424d-935b-a23f10085dfe", "auth_id": "tempest-cephx-id-710186636", "format": "json"}]: dispatch Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-710186636, format:json, prefix:fs subvolume deauthorize, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, vol_name:cephfs) < "" Nov 23 05:06:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-710186636", "format": "json"} v 0) Nov 23 05:06:19 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-710186636", "format": "json"} : dispatch Nov 23 05:06:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": 
"client.tempest-cephx-id-710186636"} v 0) Nov 23 05:06:19 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-710186636"} : dispatch Nov 23 05:06:19 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-710186636"}]': finished Nov 23 05:06:19 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-710186636", "format": "json"} : dispatch Nov 23 05:06:19 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-710186636", "caps": ["mds", "allow rw path=/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/5ae76225-892a-444c-90ca-4662bf6a5b64", "osd", "allow rw pool=manila_data namespace=fsvolumens_b0303be3-5e23-424d-935b-a23f10085dfe", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:06:19 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-710186636", "caps": ["mds", "allow rw path=/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/5ae76225-892a-444c-90ca-4662bf6a5b64", "osd", "allow rw pool=manila_data namespace=fsvolumens_b0303be3-5e23-424d-935b-a23f10085dfe", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:06:19 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-710186636", "format": "json"} : dispatch Nov 23 05:06:19 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-710186636"} : dispatch Nov 23 05:06:19 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-710186636"}]': finished Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-710186636, format:json, prefix:fs subvolume deauthorize, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, vol_name:cephfs) < "" Nov 23 05:06:19 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b0303be3-5e23-424d-935b-a23f10085dfe", "auth_id": "tempest-cephx-id-710186636", "format": "json"}]: dispatch Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-710186636, format:json, prefix:fs subvolume evict, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, vol_name:cephfs) < "" Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-710186636, client_metadata.root=/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/5ae76225-892a-444c-90ca-4662bf6a5b64 Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:06:19 
localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-710186636, format:json, prefix:fs subvolume evict, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, vol_name:cephfs) < "" Nov 23 05:06:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 62 KiB/s wr, 87 op/s Nov 23 05:06:19 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b0303be3-5e23-424d-935b-a23f10085dfe", "format": "json"}]: dispatch Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b0303be3-5e23-424d-935b-a23f10085dfe, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b0303be3-5e23-424d-935b-a23f10085dfe, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:19.597+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b0303be3-5e23-424d-935b-a23f10085dfe' of type subvolume Nov 23 05:06:19 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b0303be3-5e23-424d-935b-a23f10085dfe' of type subvolume Nov 23 05:06:19 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b0303be3-5e23-424d-935b-a23f10085dfe", "force": true, "format": "json"}]: dispatch Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, vol_name:cephfs) < "" Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe'' moved to trashcan Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b0303be3-5e23-424d-935b-a23f10085dfe, vol_name:cephfs) < "" Nov 23 05:06:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e214 do_prune osdmap full prune enabled Nov 23 05:06:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e215 e215: 6 total, 6 up, 6 in Nov 23 05:06:19 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e215: 6 total, 6 up, 6 in Nov 23 05:06:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e215 do_prune osdmap full prune enabled Nov 23 05:06:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e216 e216: 6 total, 6 up, 6 in Nov 23 05:06:20 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e216: 6 total, 6 up, 6 in Nov 23 05:06:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 63 KiB/s wr, 57 op/s Nov 23 05:06:21 
localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b2bf9dde-7e11-4be3-ba03-328acaaba40c", "format": "json"}]: dispatch Nov 23 05:06:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:21 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b2bf9dde-7e11-4be3-ba03-328acaaba40c", "force": true, "format": "json"}]: dispatch Nov 23 05:06:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, vol_name:cephfs) < "" Nov 23 05:06:21 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b2bf9dde-7e11-4be3-ba03-328acaaba40c'' moved to trashcan Nov 23 05:06:21 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b2bf9dde-7e11-4be3-ba03-328acaaba40c, vol_name:cephfs) < "" Nov 23 05:06:21 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:06:21 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2737219998' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:06:21 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:06:21 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2737219998' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:06:21 localhost nova_compute[280939]: 2025-11-23 10:06:21.863 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:22 localhost nova_compute[280939]: 2025-11-23 10:06:22.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:06:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a415431f-f058-44aa-a0b3-ca2c8202775f", "snap_name": "fa219110-ac84-4caf-a8ed-b4a2487e0ae3_1633366c-cbc2-4ad1-8aff-e07adea22fbc", "force": true, "format": "json"}]: dispatch Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fa219110-ac84-4caf-a8ed-b4a2487e0ae3_1633366c-cbc2-4ad1-8aff-e07adea22fbc, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a415431f-f058-44aa-a0b3-ca2c8202775f/.meta.tmp' Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a415431f-f058-44aa-a0b3-ca2c8202775f/.meta.tmp' to config b'/volumes/_nogroup/a415431f-f058-44aa-a0b3-ca2c8202775f/.meta' Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fa219110-ac84-4caf-a8ed-b4a2487e0ae3_1633366c-cbc2-4ad1-8aff-e07adea22fbc, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a415431f-f058-44aa-a0b3-ca2c8202775f", "snap_name": "fa219110-ac84-4caf-a8ed-b4a2487e0ae3", "force": true, "format": "json"}]: dispatch Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fa219110-ac84-4caf-a8ed-b4a2487e0ae3, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a415431f-f058-44aa-a0b3-ca2c8202775f/.meta.tmp' Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a415431f-f058-44aa-a0b3-ca2c8202775f/.meta.tmp' to config b'/volumes/_nogroup/a415431f-f058-44aa-a0b3-ca2c8202775f/.meta' Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fa219110-ac84-4caf-a8ed-b4a2487e0ae3, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:23 
localhost nova_compute[280939]: 2025-11-23 10:06:23.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:06:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:06:23 Nov 23 05:06:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:06:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:06:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['manila_data', 'vms', '.mgr', 'volumes', 'manila_metadata', 'images', 'backups'] Nov 23 05:06:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:06:23 localhost nova_compute[280939]: 2025-11-23 10:06:23.374 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:06:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:06:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e216 do_prune osdmap full prune enabled Nov 23 05:06:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e217 e217: 6 total, 6 up, 6 in Nov 23 05:06:23 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e217: 6 total, 6 up, 6 in Nov 23 05:06:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 70 KiB/s wr, 74 op/s Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014858362995533222 of space, bias 1.0, pg target 0.29667198114414667 quantized to 32 (current 32) Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO 
root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32) Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.1810441094360693e-06 of space, bias 1.0, pg target 0.00043402777777777775 quantized to 32 (current 32) Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:06:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00028244521217197095 of space, bias 4.0, pg target 0.22482638888888887 quantized to 16 (current 16) Nov 23 05:06:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:06:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:06:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:06:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:06:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:06:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:06:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:06:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:06:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:06:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:06:24 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d956f44c-8366-4a9f-aa0b-f72cc390985d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d956f44c-8366-4a9f-aa0b-f72cc390985d, vol_name:cephfs) < "" Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d956f44c-8366-4a9f-aa0b-f72cc390985d/.meta.tmp' Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d956f44c-8366-4a9f-aa0b-f72cc390985d/.meta.tmp' to config b'/volumes/_nogroup/d956f44c-8366-4a9f-aa0b-f72cc390985d/.meta' Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d956f44c-8366-4a9f-aa0b-f72cc390985d, vol_name:cephfs) < "" Nov 23 05:06:24 localhost ceph-mgr[286671]: 
log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d956f44c-8366-4a9f-aa0b-f72cc390985d", "format": "json"}]: dispatch Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d956f44c-8366-4a9f-aa0b-f72cc390985d, vol_name:cephfs) < "" Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d956f44c-8366-4a9f-aa0b-f72cc390985d, vol_name:cephfs) < "" Nov 23 05:06:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:06:24 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:06:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e217 do_prune osdmap full prune enabled Nov 23 05:06:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e218 e218: 6 total, 6 up, 6 in Nov 23 05:06:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e218: 6 total, 6 up, 6 in Nov 23 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:06:24 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c801417a-e8bf-4ab6-8efa-c611d6d1d111", "snap_name": "b2c90996-bd7d-4844-bb02-0209ec15d89a_14ee2b3e-3caa-46fb-9254-578591f72a32", "force": true, "format": "json"}]: dispatch Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b2c90996-bd7d-4844-bb02-0209ec15d89a_14ee2b3e-3caa-46fb-9254-578591f72a32, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta.tmp' Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta.tmp' to config b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta' Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b2c90996-bd7d-4844-bb02-0209ec15d89a_14ee2b3e-3caa-46fb-9254-578591f72a32, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:06:24 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c801417a-e8bf-4ab6-8efa-c611d6d1d111", "snap_name": "b2c90996-bd7d-4844-bb02-0209ec15d89a", "force": true, "format": "json"}]: dispatch Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b2c90996-bd7d-4844-bb02-0209ec15d89a, 
sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:06:24 localhost systemd[1]: tmp-crun.T6lwcO.mount: Deactivated successfully. Nov 23 05:06:24 localhost podman[324627]: 2025-11-23 10:06:24.907819017 +0000 UTC m=+0.097323449 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta.tmp' Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta.tmp' to config b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111/.meta' Nov 23 05:06:24 localhost podman[324627]: 2025-11-23 10:06:24.917360101 +0000 UTC m=+0.106864543 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:06:24 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:06:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b2c90996-bd7d-4844-bb02-0209ec15d89a, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.151 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.151 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.151 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.152 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.152 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:06:25 localhost ceph-mgr[286671]: log_channel(cluster) 
log [DBG] : pgmap v465: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 57 KiB/s wr, 77 op/s Nov 23 05:06:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e218 do_prune osdmap full prune enabled Nov 23 05:06:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e219 e219: 6 total, 6 up, 6 in Nov 23 05:06:25 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e219: 6 total, 6 up, 6 in Nov 23 05:06:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:06:25 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3780591128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.552 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.754 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.755 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11455MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", 
"product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.756 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.756 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.834 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.834 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:06:25 localhost nova_compute[280939]: 2025-11-23 10:06:25.855 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:06:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:06:26 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2201396073' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:06:26 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a415431f-f058-44aa-a0b3-ca2c8202775f", "format": "json"}]: dispatch Nov 23 05:06:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a415431f-f058-44aa-a0b3-ca2c8202775f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a415431f-f058-44aa-a0b3-ca2c8202775f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:26 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:26.281+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a415431f-f058-44aa-a0b3-ca2c8202775f' of type subvolume Nov 23 05:06:26 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a415431f-f058-44aa-a0b3-ca2c8202775f' of type subvolume Nov 23 05:06:26 localhost nova_compute[280939]: 2025-11-23 10:06:26.282 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:06:26 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a415431f-f058-44aa-a0b3-ca2c8202775f", "force": true, "format": "json"}]: dispatch Nov 23 05:06:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:26 localhost nova_compute[280939]: 2025-11-23 10:06:26.292 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:06:26 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a415431f-f058-44aa-a0b3-ca2c8202775f'' moved to trashcan Nov 23 05:06:26 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a415431f-f058-44aa-a0b3-ca2c8202775f, vol_name:cephfs) < "" Nov 23 05:06:26 localhost nova_compute[280939]: 2025-11-23 10:06:26.312 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 
'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:06:26 localhost nova_compute[280939]: 2025-11-23 10:06:26.314 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:06:26 localhost nova_compute[280939]: 2025-11-23 10:06:26.315 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:06:26 localhost nova_compute[280939]: 2025-11-23 10:06:26.900 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v467: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 55 KiB/s wr, 74 op/s Nov 23 05:06:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e219 do_prune osdmap full prune enabled Nov 23 05:06:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e220 e220: 6 total, 6 up, 6 in Nov 23 05:06:27 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e220: 6 total, 6 up, 6 in Nov 23 05:06:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c801417a-e8bf-4ab6-8efa-c611d6d1d111", "format": "json"}]: dispatch Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:28 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c801417a-e8bf-4ab6-8efa-c611d6d1d111' of type subvolume Nov 23 05:06:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:28.231+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c801417a-e8bf-4ab6-8efa-c611d6d1d111' of type subvolume Nov 23 05:06:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c801417a-e8bf-4ab6-8efa-c611d6d1d111", "force": true, "format": "json"}]: dispatch Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c801417a-e8bf-4ab6-8efa-c611d6d1d111'' moved to trashcan Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c801417a-e8bf-4ab6-8efa-c611d6d1d111, vol_name:cephfs) < "" Nov 23 05:06:28 localhost nova_compute[280939]: 2025-11-23 10:06:28.421 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d956f44c-8366-4a9f-aa0b-f72cc390985d", "format": "json"}]: dispatch Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d956f44c-8366-4a9f-aa0b-f72cc390985d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d956f44c-8366-4a9f-aa0b-f72cc390985d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:28.454+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd956f44c-8366-4a9f-aa0b-f72cc390985d' of type subvolume Nov 23 05:06:28 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd956f44c-8366-4a9f-aa0b-f72cc390985d' of type subvolume Nov 23 05:06:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d956f44c-8366-4a9f-aa0b-f72cc390985d", "force": true, "format": "json"}]: dispatch Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d956f44c-8366-4a9f-aa0b-f72cc390985d, vol_name:cephfs) < "" Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d956f44c-8366-4a9f-aa0b-f72cc390985d'' moved to trashcan Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d956f44c-8366-4a9f-aa0b-f72cc390985d, vol_name:cephfs) < "" Nov 23 05:06:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e220 do_prune osdmap full prune enabled Nov 23 05:06:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e221 e221: 6 total, 6 up, 6 in Nov 23 05:06:28 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e221: 6 total, 6 up, 6 in Nov 23 05:06:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 184 KiB/s 
rd, 93 KiB/s wr, 251 op/s Nov 23 05:06:29 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dfbe99c1-496c-4355-8746-6391072f4d2c", "format": "json"}]: dispatch Nov 23 05:06:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dfbe99c1-496c-4355-8746-6391072f4d2c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dfbe99c1-496c-4355-8746-6391072f4d2c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:29 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:29.625+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dfbe99c1-496c-4355-8746-6391072f4d2c' of type subvolume Nov 23 05:06:29 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dfbe99c1-496c-4355-8746-6391072f4d2c' of type subvolume Nov 23 05:06:29 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dfbe99c1-496c-4355-8746-6391072f4d2c", "force": true, "format": "json"}]: dispatch Nov 23 05:06:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dfbe99c1-496c-4355-8746-6391072f4d2c, vol_name:cephfs) < "" Nov 23 05:06:29 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dfbe99c1-496c-4355-8746-6391072f4d2c'' moved to trashcan Nov 23 05:06:29 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dfbe99c1-496c-4355-8746-6391072f4d2c, vol_name:cephfs) < "" Nov 23 05:06:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:29.796 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:06:29 localhost nova_compute[280939]: 2025-11-23 10:06:29.797 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:29 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:29.798 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:06:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e221 do_prune osdmap full prune enabled Nov 23 
05:06:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e222 e222: 6 total, 6 up, 6 in Nov 23 05:06:30 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e222: 6 total, 6 up, 6 in Nov 23 05:06:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:06:31 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3055210354' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:06:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:06:31 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3055210354' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:06:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 155 KiB/s rd, 78 KiB/s wr, 211 op/s Nov 23 05:06:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e222 do_prune osdmap full prune enabled Nov 23 05:06:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e223 e223: 6 total, 6 up, 6 in Nov 23 05:06:31 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e223: 6 total, 6 up, 6 in Nov 23 05:06:31 localhost nova_compute[280939]: 2025-11-23 10:06:31.936 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:06:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:06:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:06:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:06:32.311 262301 INFO neutron.agent.linux.ip_lib [None req-042be880-1f00-424e-ba4f-c90f03e29642 - - - - - -] Device tap5483a1b9-b9 cannot be used as it has no MAC address#033[00m Nov 23 05:06:32 localhost nova_compute[280939]: 2025-11-23 10:06:32.331 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:32 localhost kernel: device tap5483a1b9-b9 entered promiscuous mode Nov 23 05:06:32 localhost NetworkManager[5966]: [1763892392.3409] manager: (tap5483a1b9-b9): new Generic device (/org/freedesktop/NetworkManager/Devices/56) Nov 23 05:06:32 localhost nova_compute[280939]: 2025-11-23 10:06:32.339 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:32 localhost ovn_controller[153771]: 2025-11-23T10:06:32Z|00336|binding|INFO|Claiming lport 5483a1b9-b964-4a16-9820-7f7e2cb77e9f for this chassis. Nov 23 05:06:32 localhost ovn_controller[153771]: 2025-11-23T10:06:32Z|00337|binding|INFO|5483a1b9-b964-4a16-9820-7f7e2cb77e9f: Claiming unknown Nov 23 05:06:32 localhost systemd-udevd[324735]: Network interface NamePolicy= disabled on kernel command line. 
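[editor's note] The nova_compute entries above (10:06:25-10:06:26) show the resource tracker shelling out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" before reporting "free_disk=41.8...GB" and the DISK_GB inventory. The following is a minimal standalone sketch of that capacity probe, not nova's actual code path; it assumes the 'stats', 'total_bytes' and 'total_avail_bytes' keys of a recent Ceph "ceph df --format=json" layout and that the client.openstack keyring referenced in the logged command is readable.

    # Sketch: rerun the capacity probe that nova_compute logs above.
    # Assumptions: JSON key names match your Ceph release; client.openstack
    # keyring and /etc/ceph/ceph.conf are present as in this deployment.
    import json
    import subprocess

    def ceph_capacity_gib(conf="/etc/ceph/ceph.conf", client_id="openstack"):
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        stats = json.loads(out)["stats"]
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

    if __name__ == "__main__":
        total, avail = ceph_capacity_gib()
        # The pgmap entries above report roughly "41 GiB / 42 GiB avail".
        print(f"ceph reports {avail:.1f} GiB free of {total:.1f} GiB")

[end editor's note]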
Nov 23 05:06:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:32.351 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-ee8c6125-6531-42bd-a3ef-c67b4d4c5734', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee8c6125-6531-42bd-a3ef-c67b4d4c5734', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e23de8d423c34a75a0a458559bd01733', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=931134c3-6cf9-474c-aed0-451488feeec5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5483a1b9-b964-4a16-9820-7f7e2cb77e9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:06:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:32.353 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 5483a1b9-b964-4a16-9820-7f7e2cb77e9f in datapath ee8c6125-6531-42bd-a3ef-c67b4d4c5734 bound to our chassis#033[00m Nov 23 05:06:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:32.355 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port aabdb518-8a7a-46ad-9018-a6075b4f56e1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:06:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:32.355 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee8c6125-6531-42bd-a3ef-c67b4d4c5734, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:06:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:32.356 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8d752a40-7afc-4df3-aaf4-d92690d93a14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:06:32 localhost podman[324692]: 2025-11-23 10:06:32.358799454 +0000 UTC m=+0.102455898 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', 
'--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:06:32 localhost journal[229336]: ethtool ioctl error on tap5483a1b9-b9: No such device Nov 23 05:06:32 localhost journal[229336]: ethtool ioctl error on tap5483a1b9-b9: No such device Nov 23 05:06:32 localhost ovn_controller[153771]: 2025-11-23T10:06:32Z|00338|binding|INFO|Setting lport 5483a1b9-b964-4a16-9820-7f7e2cb77e9f ovn-installed in OVS Nov 23 05:06:32 localhost ovn_controller[153771]: 2025-11-23T10:06:32Z|00339|binding|INFO|Setting lport 5483a1b9-b964-4a16-9820-7f7e2cb77e9f up in Southbound Nov 23 05:06:32 localhost nova_compute[280939]: 2025-11-23 10:06:32.381 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:32 localhost journal[229336]: ethtool ioctl error on tap5483a1b9-b9: No such device Nov 23 05:06:32 localhost journal[229336]: ethtool ioctl error on tap5483a1b9-b9: No such device Nov 23 05:06:32 localhost journal[229336]: ethtool ioctl error on tap5483a1b9-b9: No such device Nov 23 05:06:32 localhost journal[229336]: ethtool ioctl error on tap5483a1b9-b9: No such device Nov 23 05:06:32 localhost podman[324692]: 2025-11-23 10:06:32.399505037 +0000 UTC m=+0.143161481 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:06:32 localhost journal[229336]: ethtool ioctl error on tap5483a1b9-b9: No such device Nov 23 05:06:32 localhost journal[229336]: ethtool ioctl error on tap5483a1b9-b9: No such device Nov 23 05:06:32 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
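[editor's note] Like the rest of this capture, the physical lines above wrap many journal records together, which makes the podman healthcheck and ceph-mgr entries hard to scan. Below is an illustrative helper (not part of the deployment) for splitting such a capture back into one record per line; the function name and the hard-coded "Nov 23 ... localhost" prefix are assumptions tied to this particular file.

    # Illustrative helper: split the wrapped capture into one record per line.
    # Assumes every record starts with the "Nov 23 HH:MM:SS localhost " prefix
    # seen throughout this file; content and ordering are left unchanged.
    import re
    import sys

    RECORD_START = re.compile(r"(?=Nov 23 \d{2}:\d{2}:\d{2} localhost )")

    def split_records(raw: str):
        # Undo the hard line wraps, then cut in front of each timestamp.
        joined = " ".join(line.strip() for line in raw.splitlines())
        return [rec.strip() for rec in RECORD_START.split(joined) if rec.strip()]

    if __name__ == "__main__":
        for record in split_records(sys.stdin.read()):
            print(record)

[end editor's note]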
Nov 23 05:06:32 localhost nova_compute[280939]: 2025-11-23 10:06:32.419 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:32 localhost systemd[1]: tmp-crun.amPb4W.mount: Deactivated successfully. Nov 23 05:06:32 localhost nova_compute[280939]: 2025-11-23 10:06:32.446 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:32 localhost podman[324691]: 2025-11-23 10:06:32.450514569 +0000 UTC m=+0.197738253 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:06:32 localhost podman[324691]: 2025-11-23 10:06:32.488486398 +0000 UTC m=+0.235710112 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0) Nov 23 05:06:32 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:06:32 localhost podman[324694]: 2025-11-23 10:06:32.508211886 +0000 UTC m=+0.248287019 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:06:32 localhost podman[324694]: 2025-11-23 10:06:32.579003127 +0000 UTC m=+0.319078250 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:06:32 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:06:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e223 do_prune osdmap full prune enabled Nov 23 05:06:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e224 e224: 6 total, 6 up, 6 in Nov 23 05:06:33 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e224: 6 total, 6 up, 6 in Nov 23 05:06:33 localhost podman[324831]: Nov 23 05:06:33 localhost podman[324831]: 2025-11-23 10:06:33.259338133 +0000 UTC m=+0.076580590 container create 84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee8c6125-6531-42bd-a3ef-c67b4d4c5734, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:06:33 localhost systemd[1]: Started libpod-conmon-84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3.scope. Nov 23 05:06:33 localhost systemd[1]: Started libcrun container. Nov 23 05:06:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3908ca6b6b6633e3467ead413b0f559b18fe526d84dea237f0bb2ea18b0538/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:06:33 localhost podman[324831]: 2025-11-23 10:06:33.331860277 +0000 UTC m=+0.149102774 container init 84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee8c6125-6531-42bd-a3ef-c67b4d4c5734, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:06:33 localhost podman[324831]: 2025-11-23 10:06:33.233036053 +0000 UTC m=+0.050278520 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:06:33 localhost podman[324831]: 2025-11-23 10:06:33.340046029 +0000 UTC m=+0.157288526 container start 84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee8c6125-6531-42bd-a3ef-c67b4d4c5734, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 23 05:06:33 localhost dnsmasq[324849]: started, version 2.85 cachesize 150 Nov 23 
05:06:33 localhost dnsmasq[324849]: DNS service limited to local subnets Nov 23 05:06:33 localhost dnsmasq[324849]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:06:33 localhost dnsmasq[324849]: warning: no upstream servers configured Nov 23 05:06:33 localhost dnsmasq-dhcp[324849]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:06:33 localhost dnsmasq[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/addn_hosts - 0 addresses Nov 23 05:06:33 localhost dnsmasq-dhcp[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/host Nov 23 05:06:33 localhost dnsmasq-dhcp[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/opts Nov 23 05:06:33 localhost nova_compute[280939]: 2025-11-23 10:06:33.443 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:33 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:06:33.490 262301 INFO neutron.agent.dhcp.agent [None req-8af96bea-3b07-4865-b0f5-4bdb55253500 - - - - - -] DHCP configuration for ports {'284ed6c7-2070-4c7d-8662-52529746f5cf'} is completed#033[00m Nov 23 05:06:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v475: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 188 KiB/s rd, 95 KiB/s wr, 256 op/s Nov 23 05:06:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e224 do_prune osdmap full prune enabled Nov 23 05:06:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e225 e225: 6 total, 6 up, 6 in Nov 23 05:06:34 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e225: 6 total, 6 up, 6 in Nov 23 05:06:34 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:34.800 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:06:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e225 do_prune osdmap full prune enabled Nov 23 05:06:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e226 e226: 6 total, 6 up, 6 in Nov 23 05:06:35 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e226: 6 total, 6 up, 6 in Nov 23 05:06:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 148 KiB/s rd, 33 KiB/s wr, 202 op/s Nov 23 05:06:35 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:06:35.769 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:06:35Z, description=, device_id=330e0a53-cd4a-4c6e-a9ea-859155295b59, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=351e87b9-337d-4c1c-a976-602eb9c3e8a4, ip_allocation=immediate, mac_address=fa:16:3e:23:da:55, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:06:29Z, 
description=, dns_domain=, id=ee8c6125-6531-42bd-a3ef-c67b4d4c5734, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingNegativeTest-815466115-network, port_security_enabled=True, project_id=e23de8d423c34a75a0a458559bd01733, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30517, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3208, status=ACTIVE, subnets=['b1a2f32d-d008-4f90-986e-f5d6ccdd3e0f'], tags=[], tenant_id=e23de8d423c34a75a0a458559bd01733, updated_at=2025-11-23T10:06:30Z, vlan_transparent=None, network_id=ee8c6125-6531-42bd-a3ef-c67b4d4c5734, port_security_enabled=False, project_id=e23de8d423c34a75a0a458559bd01733, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3243, status=DOWN, tags=[], tenant_id=e23de8d423c34a75a0a458559bd01733, updated_at=2025-11-23T10:06:35Z on network ee8c6125-6531-42bd-a3ef-c67b4d4c5734#033[00m Nov 23 05:06:35 localhost dnsmasq[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/addn_hosts - 1 addresses Nov 23 05:06:35 localhost podman[324867]: 2025-11-23 10:06:35.990018488 +0000 UTC m=+0.061475254 container kill 84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee8c6125-6531-42bd-a3ef-c67b4d4c5734, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 23 05:06:35 localhost dnsmasq-dhcp[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/host Nov 23 05:06:35 localhost dnsmasq-dhcp[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/opts Nov 23 05:06:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:06:36.195 262301 INFO neutron.agent.dhcp.agent [None req-d3e2e995-96d7-4cd8-8f66-5e1c76e38369 - - - - - -] DHCP configuration for ports {'351e87b9-337d-4c1c-a976-602eb9c3e8a4'} is completed#033[00m Nov 23 05:06:36 localhost openstack_network_exporter[241732]: ERROR 10:06:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:06:36 localhost openstack_network_exporter[241732]: ERROR 10:06:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:06:36 localhost openstack_network_exporter[241732]: Nov 23 05:06:36 localhost openstack_network_exporter[241732]: ERROR 10:06:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:06:36 localhost openstack_network_exporter[241732]: ERROR 10:06:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:06:36 localhost openstack_network_exporter[241732]: ERROR 10:06:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:06:36 localhost openstack_network_exporter[241732]: Nov 23 05:06:36 localhost nova_compute[280939]: 2025-11-23 10:06:36.937 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:37 
localhost neutron_dhcp_agent[262297]: 2025-11-23 10:06:37.067 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:06:35Z, description=, device_id=330e0a53-cd4a-4c6e-a9ea-859155295b59, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=351e87b9-337d-4c1c-a976-602eb9c3e8a4, ip_allocation=immediate, mac_address=fa:16:3e:23:da:55, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:06:29Z, description=, dns_domain=, id=ee8c6125-6531-42bd-a3ef-c67b4d4c5734, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingNegativeTest-815466115-network, port_security_enabled=True, project_id=e23de8d423c34a75a0a458559bd01733, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30517, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3208, status=ACTIVE, subnets=['b1a2f32d-d008-4f90-986e-f5d6ccdd3e0f'], tags=[], tenant_id=e23de8d423c34a75a0a458559bd01733, updated_at=2025-11-23T10:06:30Z, vlan_transparent=None, network_id=ee8c6125-6531-42bd-a3ef-c67b4d4c5734, port_security_enabled=False, project_id=e23de8d423c34a75a0a458559bd01733, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3243, status=DOWN, tags=[], tenant_id=e23de8d423c34a75a0a458559bd01733, updated_at=2025-11-23T10:06:35Z on network ee8c6125-6531-42bd-a3ef-c67b4d4c5734#033[00m Nov 23 05:06:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e226 do_prune osdmap full prune enabled Nov 23 05:06:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e227 e227: 6 total, 6 up, 6 in Nov 23 05:06:37 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e227: 6 total, 6 up, 6 in Nov 23 05:06:37 localhost dnsmasq[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/addn_hosts - 1 addresses Nov 23 05:06:37 localhost dnsmasq-dhcp[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/host Nov 23 05:06:37 localhost podman[324905]: 2025-11-23 10:06:37.306283424 +0000 UTC m=+0.059173193 container kill 84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee8c6125-6531-42bd-a3ef-c67b4d4c5734, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 05:06:37 localhost dnsmasq-dhcp[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/opts Nov 23 05:06:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 136 KiB/s rd, 30 KiB/s wr, 185 op/s Nov 23 05:06:37 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:06:37.569 262301 INFO neutron.agent.dhcp.agent [None req-488bfff2-7a2a-44a5-a850-ce3df8d159a2 - - - - - -] DHCP configuration for ports 
{'351e87b9-337d-4c1c-a976-602eb9c3e8a4'} is completed#033[00m Nov 23 05:06:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3c893617-cb25-4de9-8f72-b90c923bf86c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:06:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3c893617-cb25-4de9-8f72-b90c923bf86c, vol_name:cephfs) < "" Nov 23 05:06:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c893617-cb25-4de9-8f72-b90c923bf86c/.meta.tmp' Nov 23 05:06:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c893617-cb25-4de9-8f72-b90c923bf86c/.meta.tmp' to config b'/volumes/_nogroup/3c893617-cb25-4de9-8f72-b90c923bf86c/.meta' Nov 23 05:06:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3c893617-cb25-4de9-8f72-b90c923bf86c, vol_name:cephfs) < "" Nov 23 05:06:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3c893617-cb25-4de9-8f72-b90c923bf86c", "format": "json"}]: dispatch Nov 23 05:06:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c893617-cb25-4de9-8f72-b90c923bf86c, vol_name:cephfs) < "" Nov 23 05:06:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c893617-cb25-4de9-8f72-b90c923bf86c, vol_name:cephfs) < "" Nov 23 05:06:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:06:37 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:06:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e227 do_prune osdmap full prune enabled Nov 23 05:06:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e228 e228: 6 total, 6 up, 6 in Nov 23 05:06:38 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e228: 6 total, 6 up, 6 in Nov 23 05:06:38 localhost nova_compute[280939]: 2025-11-23 10:06:38.472 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:38 localhost ovn_controller[153771]: 2025-11-23T10:06:38Z|00340|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 05:06:38 localhost ovn_controller[153771]: 2025-11-23T10:06:38Z|00341|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 05:06:38 localhost ovn_controller[153771]: 2025-11-23T10:06:38Z|00342|ovn_bfd|INFO|Enabled BFD on interface 
ovn-b7d5b3-0 Nov 23 05:06:38 localhost nova_compute[280939]: 2025-11-23 10:06:38.678 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:38 localhost nova_compute[280939]: 2025-11-23 10:06:38.695 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:38 localhost nova_compute[280939]: 2025-11-23 10:06:38.702 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:38 localhost nova_compute[280939]: 2025-11-23 10:06:38.715 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:38 localhost nova_compute[280939]: 2025-11-23 10:06:38.719 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:38 localhost nova_compute[280939]: 2025-11-23 10:06:38.786 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:06:38 localhost systemd[1]: tmp-crun.50R0DL.mount: Deactivated successfully. Nov 23 05:06:38 localhost podman[324928]: 2025-11-23 10:06:38.912021017 +0000 UTC m=+0.094523863 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, distribution-scope=public, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 05:06:38 localhost podman[324928]: 2025-11-23 10:06:38.9247663 +0000 UTC m=+0.107269146 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, version=9.6, managed_by=edpm_ansible) Nov 23 05:06:38 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
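The openstack_network_exporter errors logged at 10:06:36 ("no control socket files found for the ovs db server", "... for ovn-northd") indicate that the rundirs mounted into the container contain no matching control sockets; ovn-northd normally runs on the control plane rather than on a compute node, so its absence here is expected. A quick check along those lines, assuming the conventional <daemon>.<pid>.ctl naming (the paths below are typical defaults, not read from this host):

import glob

# Conventional control-socket patterns; adjust to the paths actually mounted into
# the exporter (/var/run/openvswitch and /var/lib/openvswitch/ovn on this host).
PATTERNS = {
    "ovsdb-server": "/run/openvswitch/ovsdb-server.*.ctl",
    "ovs-vswitchd": "/run/openvswitch/ovs-vswitchd.*.ctl",
    "ovn-northd": "/run/ovn/ovn-northd.*.ctl",
}

def find_control_sockets():
    """Map each daemon to the control sockets currently present on the host."""
    return {daemon: glob.glob(pattern) for daemon, pattern in PATTERNS.items()}

for daemon, sockets in find_control_sockets().items():
    print(daemon, "->", ", ".join(sockets) or "no control socket found")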
Nov 23 05:06:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e228 do_prune osdmap full prune enabled Nov 23 05:06:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e229 e229: 6 total, 6 up, 6 in Nov 23 05:06:39 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e229: 6 total, 6 up, 6 in Nov 23 05:06:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 3 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 172 active+clean; 237 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 9.3 MiB/s wr, 128 op/s Nov 23 05:06:39 localhost nova_compute[280939]: 2025-11-23 10:06:39.691 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e229 do_prune osdmap full prune enabled Nov 23 05:06:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e230 e230: 6 total, 6 up, 6 in Nov 23 05:06:40 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e230: 6 total, 6 up, 6 in Nov 23 05:06:40 localhost nova_compute[280939]: 2025-11-23 10:06:40.393 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:06:40 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3196558149' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:06:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:06:40 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3196558149' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:06:40 localhost nova_compute[280939]: 2025-11-23 10:06:40.607 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 3 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 172 active+clean; 237 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 90 KiB/s rd, 9.5 MiB/s wr, 131 op/s Nov 23 05:06:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:06:41 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/395089354' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:06:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:06:41 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/395089354' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:06:41 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8d530d74-a4c8-47dd-aed4-d2152c10ac50", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:06:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:41 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d530d74-a4c8-47dd-aed4-d2152c10ac50/.meta.tmp' Nov 23 05:06:41 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d530d74-a4c8-47dd-aed4-d2152c10ac50/.meta.tmp' to config b'/volumes/_nogroup/8d530d74-a4c8-47dd-aed4-d2152c10ac50/.meta' Nov 23 05:06:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:41 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8d530d74-a4c8-47dd-aed4-d2152c10ac50", "format": "json"}]: dispatch Nov 23 05:06:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:41 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:06:41 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:06:41 localhost nova_compute[280939]: 2025-11-23 10:06:41.982 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:42 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3c893617-cb25-4de9-8f72-b90c923bf86c", "format": "json"}]: dispatch Nov 23 05:06:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3c893617-cb25-4de9-8f72-b90c923bf86c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3c893617-cb25-4de9-8f72-b90c923bf86c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:42.059+0000 
7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c893617-cb25-4de9-8f72-b90c923bf86c' of type subvolume Nov 23 05:06:42 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c893617-cb25-4de9-8f72-b90c923bf86c' of type subvolume Nov 23 05:06:42 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3c893617-cb25-4de9-8f72-b90c923bf86c", "force": true, "format": "json"}]: dispatch Nov 23 05:06:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c893617-cb25-4de9-8f72-b90c923bf86c, vol_name:cephfs) < "" Nov 23 05:06:42 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3c893617-cb25-4de9-8f72-b90c923bf86c'' moved to trashcan Nov 23 05:06:42 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c893617-cb25-4de9-8f72-b90c923bf86c, vol_name:cephfs) < "" Nov 23 05:06:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e230 do_prune osdmap full prune enabled Nov 23 05:06:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e231 e231: 6 total, 6 up, 6 in Nov 23 05:06:43 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e231: 6 total, 6 up, 6 in Nov 23 05:06:43 localhost nova_compute[280939]: 2025-11-23 10:06:43.506 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 3 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 172 active+clean; 237 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 7.5 MiB/s wr, 103 op/s Nov 23 05:06:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:06:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
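The mgr volumes-module calls dispatched for client.openstack above (fs subvolume create, getpath, the rejected clone status, then rm) have direct ceph CLI equivalents; a minimal sketch with the names and size taken from the log, assuming a local ceph client with that keyring (illustrative only, not the code path that actually issued them):

import subprocess

def ceph(*args):
    """Run a ceph CLI command and return its stdout, stripped."""
    return subprocess.run(("ceph", *args), check=True,
                          capture_output=True, text=True).stdout.strip()

SUB = "3c893617-cb25-4de9-8f72-b90c923bf86c"   # subvolume name from the log above

ceph("fs", "subvolume", "create", "cephfs", SUB,
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755")
print(ceph("fs", "subvolume", "getpath", "cephfs", SUB))
# 'fs clone status' is only valid for subvolumes created as clones, which is why
# the mgr answered "(95) Operation not supported" for this plain subvolume.
ceph("fs", "subvolume", "rm", "cephfs", SUB, "--force")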
Nov 23 05:06:43 localhost podman[324950]: 2025-11-23 10:06:43.896058453 +0000 UTC m=+0.070641487 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 05:06:43 localhost podman[324950]: 2025-11-23 10:06:43.904465192 +0000 UTC m=+0.079048266 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:06:43 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
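podman_exporter gets its container metrics through the Podman API socket it mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in the config_data above). A minimal sketch of querying that socket directly with curl, assuming root access to it; the versioned endpoint path is copied from the API access-log entries that appear later in this window:

import json
import subprocess

SOCKET = "/run/podman/podman.sock"
# Same endpoint the API service logs at 10:06:47.
URL = "http://d/v4.9.3/libpod/containers/json?all=true"

def list_containers():
    """List containers via the libpod REST API over the Podman unix socket."""
    out = subprocess.run(["curl", "--silent", "--unix-socket", SOCKET, URL],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

for ctr in list_containers():
    print(ctr["Id"][:12], ctr["Names"][0], ctr["State"])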
Nov 23 05:06:43 localhost podman[324949]: 2025-11-23 10:06:43.956171015 +0000 UTC m=+0.130598394 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:06:43 localhost podman[324949]: 2025-11-23 10:06:43.968294508 +0000 UTC m=+0.142721877 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:06:43 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:06:45 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8d530d74-a4c8-47dd-aed4-d2152c10ac50", "snap_name": "85fdd710-c011-4c18-ae03-4b7d2fcb945e", "format": "json"}]: dispatch Nov 23 05:06:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:85fdd710-c011-4c18-ae03-4b7d2fcb945e, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:85fdd710-c011-4c18-ae03-4b7d2fcb945e, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 365 MiB data, 1.5 GiB used, 41 GiB / 42 GiB avail; 152 KiB/s rd, 27 MiB/s wr, 216 op/s Nov 23 05:06:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0) Nov 23 05:06:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0) Nov 23 05:06:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0) Nov 23 05:06:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0) Nov 23 05:06:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0) Nov 23 05:06:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0) Nov 23 05:06:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:46 localhost ovn_controller[153771]: 2025-11-23T10:06:46Z|00343|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 05:06:46 
localhost ovn_controller[153771]: 2025-11-23T10:06:46Z|00344|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 05:06:46 localhost ovn_controller[153771]: 2025-11-23T10:06:46Z|00345|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 05:06:46 localhost nova_compute[280939]: 2025-11-23 10:06:46.410 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:46 localhost nova_compute[280939]: 2025-11-23 10:06:46.432 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:46 localhost nova_compute[280939]: 2025-11-23 10:06:46.435 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:46 localhost systemd[1]: tmp-crun.TS06xJ.mount: Deactivated successfully. Nov 23 05:06:46 localhost dnsmasq[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/addn_hosts - 0 addresses Nov 23 05:06:46 localhost dnsmasq-dhcp[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/host Nov 23 05:06:46 localhost podman[325115]: 2025-11-23 10:06:46.522586319 +0000 UTC m=+0.065963653 container kill 84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee8c6125-6531-42bd-a3ef-c67b4d4c5734, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:06:46 localhost dnsmasq-dhcp[324849]: read /var/lib/neutron/dhcp/ee8c6125-6531-42bd-a3ef-c67b4d4c5734/opts Nov 23 05:06:46 localhost nova_compute[280939]: 2025-11-23 10:06:46.684 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:46 localhost kernel: device tap5483a1b9-b9 left promiscuous mode Nov 23 05:06:46 localhost ovn_controller[153771]: 2025-11-23T10:06:46Z|00346|binding|INFO|Releasing lport 5483a1b9-b964-4a16-9820-7f7e2cb77e9f from this chassis (sb_readonly=0) Nov 23 05:06:46 localhost ovn_controller[153771]: 2025-11-23T10:06:46Z|00347|binding|INFO|Setting lport 5483a1b9-b964-4a16-9820-7f7e2cb77e9f down in Southbound Nov 23 05:06:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:46.702 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', 
conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-ee8c6125-6531-42bd-a3ef-c67b4d4c5734', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee8c6125-6531-42bd-a3ef-c67b4d4c5734', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e23de8d423c34a75a0a458559bd01733', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=931134c3-6cf9-474c-aed0-451488feeec5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5483a1b9-b964-4a16-9820-7f7e2cb77e9f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:06:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:46.704 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 5483a1b9-b964-4a16-9820-7f7e2cb77e9f in datapath ee8c6125-6531-42bd-a3ef-c67b4d4c5734 unbound from our chassis#033[00m Nov 23 05:06:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:46.707 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee8c6125-6531-42bd-a3ef-c67b4d4c5734, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:06:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:06:46.708 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[67aa7224-2892-4f33-9034-cf234e5b1220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:06:46 localhost nova_compute[280939]: 2025-11-23 10:06:46.710 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:06:46 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:06:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:06:46 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:06:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:06:46 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:46 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 94487481-388f-4693-ab4b-0d7c597456a3 (Updating node-proxy deployment 
(+3 -> 3)) Nov 23 05:06:46 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 94487481-388f-4693-ab4b-0d7c597456a3 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:06:46 localhost ceph-mgr[286671]: [progress INFO root] Completed event 94487481-388f-4693-ab4b-0d7c597456a3 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:06:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:06:46 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:06:46 localhost nova_compute[280939]: 2025-11-23 10:06:46.983 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:47 localhost podman[239764]: time="2025-11-23T10:06:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:06:47 localhost podman[239764]: @ - - [23/Nov/2025:10:06:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156323 "" "Go-http-client/1.1" Nov 23 05:06:47 localhost podman[239764]: @ - - [23/Nov/2025:10:06:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19217 "" "Go-http-client/1.1" Nov 23 05:06:47 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:06:47 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 365 MiB data, 1.5 GiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 16 MiB/s wr, 98 op/s Nov 23 05:06:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e231 do_prune osdmap full prune enabled Nov 23 05:06:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e232 e232: 6 total, 6 up, 6 in Nov 23 05:06:48 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e232: 6 total, 6 up, 6 in Nov 23 05:06:48 localhost nova_compute[280939]: 2025-11-23 10:06:48.536 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:48 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:06:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:06:48 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8d530d74-a4c8-47dd-aed4-d2152c10ac50", "snap_name": "85fdd710-c011-4c18-ae03-4b7d2fcb945e_dda399c2-11f7-4703-ab1c-2bfe2cc11a4a", "force": true, "format": "json"}]: dispatch Nov 23 05:06:48 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs 
subvolume snapshot rm, snap_name:85fdd710-c011-4c18-ae03-4b7d2fcb945e_dda399c2-11f7-4703-ab1c-2bfe2cc11a4a, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:48 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:48 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d530d74-a4c8-47dd-aed4-d2152c10ac50/.meta.tmp' Nov 23 05:06:48 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d530d74-a4c8-47dd-aed4-d2152c10ac50/.meta.tmp' to config b'/volumes/_nogroup/8d530d74-a4c8-47dd-aed4-d2152c10ac50/.meta' Nov 23 05:06:48 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:85fdd710-c011-4c18-ae03-4b7d2fcb945e_dda399c2-11f7-4703-ab1c-2bfe2cc11a4a, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:48 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8d530d74-a4c8-47dd-aed4-d2152c10ac50", "snap_name": "85fdd710-c011-4c18-ae03-4b7d2fcb945e", "force": true, "format": "json"}]: dispatch Nov 23 05:06:48 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:85fdd710-c011-4c18-ae03-4b7d2fcb945e, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:48 localhost dnsmasq[324849]: exiting on receipt of SIGTERM Nov 23 05:06:48 localhost podman[325189]: 2025-11-23 10:06:48.635745001 +0000 UTC m=+0.062501776 container kill 84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee8c6125-6531-42bd-a3ef-c67b4d4c5734, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 23 05:06:48 localhost systemd[1]: libpod-84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3.scope: Deactivated successfully. 
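The metadata_manager records above and below follow a standard atomic-update pattern: the new subvolume metadata is first written to a sibling ".meta.tmp" file and then renamed over ".meta", so an interruption mid-write can never leave a half-written config behind (in ceph-mgr this happens on CephFS itself). A minimal Python sketch of the same pattern on an ordinary local filesystem; the path and payload are placeholders, not values from this log:

import os

def write_config_atomically(path: str, data: bytes) -> None:
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:          # write the full new contents to a temp file first
        f.write(data)
        f.flush()
        os.fsync(f.fileno())            # make the bytes durable before the swap
    os.replace(tmp, path)               # atomic rename: readers see old or new, never partial

write_config_atomically("/tmp/example.meta", b"[GLOBAL]\nversion = 2\n")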
Nov 23 05:06:48 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d530d74-a4c8-47dd-aed4-d2152c10ac50/.meta.tmp' Nov 23 05:06:48 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d530d74-a4c8-47dd-aed4-d2152c10ac50/.meta.tmp' to config b'/volumes/_nogroup/8d530d74-a4c8-47dd-aed4-d2152c10ac50/.meta' Nov 23 05:06:48 localhost podman[325205]: 2025-11-23 10:06:48.712098844 +0000 UTC m=+0.054553272 container died 84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee8c6125-6531-42bd-a3ef-c67b4d4c5734, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 23 05:06:48 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:85fdd710-c011-4c18-ae03-4b7d2fcb945e, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3-userdata-shm.mount: Deactivated successfully. Nov 23 05:06:48 localhost systemd[1]: var-lib-containers-storage-overlay-1e3908ca6b6b6633e3467ead413b0f559b18fe526d84dea237f0bb2ea18b0538-merged.mount: Deactivated successfully. Nov 23 05:06:48 localhost podman[325205]: 2025-11-23 10:06:48.761855106 +0000 UTC m=+0.104309514 container remove 84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee8c6125-6531-42bd-a3ef-c67b4d4c5734, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 05:06:48 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:06:48.787 262301 INFO neutron.agent.dhcp.agent [None req-6d2f0092-6aee-443f-b56b-c2c10944da91 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:06:48 localhost systemd[1]: libpod-conmon-84fe97034c36188310f69df82b83f299f1b8be67818c58492375b504b3d058d3.scope: Deactivated successfully. 
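Throughout this stretch of the log, client.openstack (the Manila CephFS driver) drives the ceph-mgr "volumes" module through the full subvolume lifecycle: create, getpath, snapshot create/rm, and finally rm, after which the subvolume is moved to the trashcan and a purge job is queued. A hedged Python sketch of the equivalent admin-side ceph CLI calls, reusing the size and mode seen in these records but with placeholder subvolume and snapshot names, and assuming the ceph CLI can reach the cluster:

import subprocess

def ceph(*args: str) -> str:
    # Thin wrapper around the ceph CLI; returns stripped stdout.
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

VOL = "cephfs"
SUB = "example-subvolume"    # placeholder, not a UUID from this log
SNAP = "example-snapshot"    # placeholder

ceph("fs", "subvolume", "create", VOL, SUB,
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755")
print(ceph("fs", "subvolume", "getpath", VOL, SUB))       # export path handed back to the caller
ceph("fs", "subvolume", "snapshot", "create", VOL, SUB, SNAP)
ceph("fs", "subvolume", "snapshot", "rm", VOL, SUB, SNAP, "--force")
ceph("fs", "subvolume", "rm", VOL, SUB, "--force")         # moves the subvolume to the trashcan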
Nov 23 05:06:48 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:06:48.878 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:06:49 localhost nova_compute[280939]: 2025-11-23 10:06:49.069 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:49 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:06:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:06:49 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4230430634' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:06:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 493 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 83 KiB/s rd, 32 MiB/s wr, 123 op/s Nov 23 05:06:49 localhost systemd[1]: run-netns-qdhcp\x2dee8c6125\x2d6531\x2d42bd\x2da3ef\x2dc67b4d4c5734.mount: Deactivated successfully. Nov 23 05:06:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:06:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:06:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/.meta.tmp' Nov 23 05:06:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/.meta.tmp' to config b'/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/.meta' Nov 23 05:06:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:06:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "format": "json"}]: dispatch Nov 23 05:06:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:06:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:06:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:06:49 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": 
"mon dump", "format": "json"} : dispatch Nov 23 05:06:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e232 do_prune osdmap full prune enabled Nov 23 05:06:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e233 e233: 6 total, 6 up, 6 in Nov 23 05:06:50 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e233: 6 total, 6 up, 6 in Nov 23 05:06:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e233 do_prune osdmap full prune enabled Nov 23 05:06:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e234 e234: 6 total, 6 up, 6 in Nov 23 05:06:51 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e234: 6 total, 6 up, 6 in Nov 23 05:06:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 493 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 18 KiB/s rd, 21 MiB/s wr, 34 op/s Nov 23 05:06:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8d530d74-a4c8-47dd-aed4-d2152c10ac50", "format": "json"}]: dispatch Nov 23 05:06:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:06:51 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:06:51.843+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d530d74-a4c8-47dd-aed4-d2152c10ac50' of type subvolume Nov 23 05:06:51 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d530d74-a4c8-47dd-aed4-d2152c10ac50' of type subvolume Nov 23 05:06:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8d530d74-a4c8-47dd-aed4-d2152c10ac50", "force": true, "format": "json"}]: dispatch Nov 23 05:06:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8d530d74-a4c8-47dd-aed4-d2152c10ac50'' moved to trashcan Nov 23 05:06:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:06:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d530d74-a4c8-47dd-aed4-d2152c10ac50, vol_name:cephfs) < "" Nov 23 05:06:51 localhost nova_compute[280939]: 2025-11-23 10:06:51.985 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": 
"c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:06:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:06:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:06:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:06:53 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:06:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:06:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:06:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:06:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:06:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e234 do_prune osdmap full prune enabled Nov 23 05:06:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e235 e235: 6 total, 6 up, 6 in Nov 23 05:06:53 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e235: 6 total, 6 up, 6 in Nov 23 05:06:53 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:06:53 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' 
entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:06:53 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:06:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:06:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:06:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:06:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:06:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:06:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:06:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 493 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 20 KiB/s rd, 24 MiB/s wr, 38 op/s Nov 23 05:06:53 localhost nova_compute[280939]: 2025-11-23 10:06:53.568 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e235 do_prune osdmap full prune enabled Nov 23 05:06:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e236 e236: 6 total, 6 up, 6 in Nov 23 05:06:54 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e236: 6 total, 6 up, 6 in Nov 23 05:06:55 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "87b8129a-7de1-44bc-958e-150b793b403a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:06:55 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:06:55 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/87b8129a-7de1-44bc-958e-150b793b403a/.meta.tmp' Nov 23 05:06:55 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/87b8129a-7de1-44bc-958e-150b793b403a/.meta.tmp' to config b'/volumes/_nogroup/87b8129a-7de1-44bc-958e-150b793b403a/.meta' Nov 23 05:06:55 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:06:55 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : 
from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "87b8129a-7de1-44bc-958e-150b793b403a", "format": "json"}]: dispatch Nov 23 05:06:55 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:06:55 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:06:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:06:55 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:06:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e236 do_prune osdmap full prune enabled Nov 23 05:06:55 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e237 e237: 6 total, 6 up, 6 in Nov 23 05:06:55 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e237: 6 total, 6 up, 6 in Nov 23 05:06:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 174 active+clean; 621 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 119 KiB/s rd, 30 MiB/s wr, 178 op/s Nov 23 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:06:55 localhost podman[325228]: 2025-11-23 10:06:55.895420117 +0000 UTC m=+0.080730758 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Nov 23 05:06:55 localhost podman[325228]: 2025-11-23 10:06:55.930494177 +0000 UTC m=+0.115804828 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:06:55 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:06:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:06:56 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/926987801' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:06:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:06:56 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/926987801' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:06:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:06:56 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1384345740' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:06:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:06:56 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1384345740' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:06:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:06:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:06:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:06:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:06:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 23 05:06:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:06:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:06:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:06:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:06:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:06:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:06:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:06:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:06:56 localhost nova_compute[280939]: 2025-11-23 10:06:56.986 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:57 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:06:57 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' 
entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:06:57 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:06:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v500: 177 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 174 active+clean; 621 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 85 KiB/s rd, 21 MiB/s wr, 127 op/s Nov 23 05:06:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:06:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e237 do_prune osdmap full prune enabled Nov 23 05:06:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e238 e238: 6 total, 6 up, 6 in Nov 23 05:06:58 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e238: 6 total, 6 up, 6 in Nov 23 05:06:58 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "87b8129a-7de1-44bc-958e-150b793b403a", "snap_name": "9a00f796-c1f9-4801-901b-d1066a695eff", "format": "json"}]: dispatch Nov 23 05:06:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9a00f796-c1f9-4801-901b-d1066a695eff, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:06:58 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9a00f796-c1f9-4801-901b-d1066a695eff, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:06:58 localhost nova_compute[280939]: 2025-11-23 10:06:58.591 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:06:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 742 MiB data, 2.6 GiB used, 39 GiB / 42 GiB avail; 151 KiB/s rd, 41 MiB/s wr, 228 op/s Nov 23 05:06:59 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:06:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:06:59 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:06:59 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID 
alice with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:07:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:07:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:07:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/451286883' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:07:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:07:00 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/451286883' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:07:00 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:00 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:00 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v503: 177 pgs: 177 active+clean; 742 MiB data, 2.6 GiB used, 39 GiB / 42 GiB avail; 103 KiB/s rd, 30 MiB/s wr, 161 op/s Nov 23 05:07:01 localhost nova_compute[280939]: 2025-11-23 10:07:01.987 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:02 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "87b8129a-7de1-44bc-958e-150b793b403a", "snap_name": "9a00f796-c1f9-4801-901b-d1066a695eff_fbd2935e-439f-4401-9aa6-78d8ed8fda87", "force": true, "format": "json"}]: dispatch Nov 23 05:07:02 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9a00f796-c1f9-4801-901b-d1066a695eff_fbd2935e-439f-4401-9aa6-78d8ed8fda87, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:07:02 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/87b8129a-7de1-44bc-958e-150b793b403a/.meta.tmp' Nov 23 05:07:02 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/87b8129a-7de1-44bc-958e-150b793b403a/.meta.tmp' to config b'/volumes/_nogroup/87b8129a-7de1-44bc-958e-150b793b403a/.meta' Nov 23 05:07:02 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9a00f796-c1f9-4801-901b-d1066a695eff_fbd2935e-439f-4401-9aa6-78d8ed8fda87, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:07:02 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "87b8129a-7de1-44bc-958e-150b793b403a", "snap_name": "9a00f796-c1f9-4801-901b-d1066a695eff", "force": true, "format": "json"}]: dispatch Nov 23 05:07:02 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9a00f796-c1f9-4801-901b-d1066a695eff, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:07:02 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/87b8129a-7de1-44bc-958e-150b793b403a/.meta.tmp' Nov 23 05:07:02 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/87b8129a-7de1-44bc-958e-150b793b403a/.meta.tmp' to config b'/volumes/_nogroup/87b8129a-7de1-44bc-958e-150b793b403a/.meta' Nov 23 05:07:02 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9a00f796-c1f9-4801-901b-d1066a695eff, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:07:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:07:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:07:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:07:02 localhost podman[325247]: 2025-11-23 10:07:02.906508023 +0000 UTC m=+0.094454369 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 23 05:07:02 localhost podman[325249]: 2025-11-23 10:07:02.921434424 +0000 UTC m=+0.102868720 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:07:03 localhost podman[325249]: 2025-11-23 10:07:03.000398126 +0000 UTC m=+0.181832452 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3) Nov 23 05:07:03 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 05:07:03 localhost podman[325248]: 2025-11-23 10:07:03.014979775 +0000 UTC m=+0.199195247 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 05:07:03 localhost podman[325247]: 2025-11-23 10:07:03.031932617 +0000 UTC m=+0.219878973 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:07:03 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
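The transient units above are the periodic EDPM health checks: systemd starts "/usr/bin/podman healthcheck run <container-id>", podman runs the configured '/openstack/healthcheck' test inside ovn_controller, multipathd and node_exporter, and an exec_died event is logged when the probe exits. The same probe can be triggered by hand; a small Python sketch, assuming podman is on PATH and using a container name taken from the records above:

import subprocess

def run_healthcheck(container: str) -> str:
    # "podman healthcheck run" exits 0 when the container's configured test passes.
    result = subprocess.run(["podman", "healthcheck", "run", container])
    return "healthy" if result.returncode == 0 else "unhealthy"

print(run_healthcheck("ovn_controller"))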
Nov 23 05:07:03 localhost podman[325248]: 2025-11-23 10:07:03.048391084 +0000 UTC m=+0.232606546 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 05:07:03 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:07:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e238 do_prune osdmap full prune enabled Nov 23 05:07:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e239 e239: 6 total, 6 up, 6 in Nov 23 05:07:03 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e239: 6 total, 6 up, 6 in Nov 23 05:07:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:07:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 742 MiB data, 2.6 GiB used, 39 GiB / 42 GiB avail; 50 KiB/s rd, 15 MiB/s wr, 75 op/s Nov 23 05:07:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:07:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 23 05:07:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 
172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:07:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:07:03 localhost nova_compute[280939]: 2025-11-23 10:07:03.593 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:07:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:07:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:07:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:07:03 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4047953706' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:07:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:07:03 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4047953706' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:07:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:07:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:07:05 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3e36dbf7-3485-45bc-a6bc-a043d0884a88", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3e36dbf7-3485-45bc-a6bc-a043d0884a88, vol_name:cephfs) < "" Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3e36dbf7-3485-45bc-a6bc-a043d0884a88/.meta.tmp' Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3e36dbf7-3485-45bc-a6bc-a043d0884a88/.meta.tmp' to config b'/volumes/_nogroup/3e36dbf7-3485-45bc-a6bc-a043d0884a88/.meta' Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3e36dbf7-3485-45bc-a6bc-a043d0884a88, vol_name:cephfs) < "" Nov 23 05:07:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v506: 177 pgs: 177 active+clean; 870 MiB data, 3.0 GiB used, 39 GiB / 42 GiB avail; 91 KiB/s rd, 31 MiB/s wr, 142 op/s Nov 23 05:07:05 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3e36dbf7-3485-45bc-a6bc-a043d0884a88", "format": "json"}]: dispatch Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3e36dbf7-3485-45bc-a6bc-a043d0884a88, vol_name:cephfs) < "" Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3e36dbf7-3485-45bc-a6bc-a043d0884a88, vol_name:cephfs) < "" Nov 23 05:07:05 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:07:05 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:07:05 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "87b8129a-7de1-44bc-958e-150b793b403a", "format": "json"}]: 
dispatch Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:87b8129a-7de1-44bc-958e-150b793b403a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:87b8129a-7de1-44bc-958e-150b793b403a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:05 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:07:05.723+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '87b8129a-7de1-44bc-958e-150b793b403a' of type subvolume Nov 23 05:07:05 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '87b8129a-7de1-44bc-958e-150b793b403a' of type subvolume Nov 23 05:07:05 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "87b8129a-7de1-44bc-958e-150b793b403a", "force": true, "format": "json"}]: dispatch Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/87b8129a-7de1-44bc-958e-150b793b403a'' moved to trashcan Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:07:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:87b8129a-7de1-44bc-958e-150b793b403a, vol_name:cephfs) < "" Nov 23 05:07:06 localhost sshd[325317]: main: sshd: ssh-rsa algorithm is disabled Nov 23 05:07:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e239 do_prune osdmap full prune enabled Nov 23 05:07:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e240 e240: 6 total, 6 up, 6 in Nov 23 05:07:06 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e240: 6 total, 6 up, 6 in Nov 23 05:07:06 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:07:06 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:07:06 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:06 localhost 
ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice_bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:07:06 localhost openstack_network_exporter[241732]: ERROR 10:07:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:07:06 localhost openstack_network_exporter[241732]: ERROR 10:07:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:07:06 localhost openstack_network_exporter[241732]: ERROR 10:07:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:07:06 localhost openstack_network_exporter[241732]: ERROR 10:07:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:07:06 localhost openstack_network_exporter[241732]: Nov 23 05:07:06 localhost openstack_network_exporter[241732]: ERROR 10:07:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:07:06 localhost openstack_network_exporter[241732]: Nov 23 05:07:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:07:06 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:06 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:06 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:06 localhost nova_compute[280939]: 2025-11-23 10:07:06.988 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 870 MiB data, 3.0 GiB used, 39 GiB / 42 GiB avail; 41 KiB/s rd, 16 MiB/s wr, 66 op/s Nov 23 05:07:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e240 do_prune osdmap full prune enabled Nov 23 05:07:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e241 e241: 6 total, 6 up, 6 in Nov 23 05:07:07 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e241: 6 
total, 6 up, 6 in Nov 23 05:07:07 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:07 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:07 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0. Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.184764) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61 Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428184801, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1586, "num_deletes": 271, "total_data_size": 1558696, "memory_usage": 1595456, "flush_reason": "Manual Compaction"} Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428194331, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 1529660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33481, "largest_seqno": 35066, "table_properties": {"data_size": 1522091, "index_size": 4398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 18108, "raw_average_key_size": 21, "raw_value_size": 1506165, "raw_average_value_size": 1821, "num_data_blocks": 185, "num_entries": 827, "num_filter_entries": 827, "num_deletions": 271, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; 
use_zstd_dict_trainer=1; ", "creation_time": 1763892363, "oldest_key_time": 1763892363, "file_creation_time": 1763892428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}} Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 9610 microseconds, and 4749 cpu microseconds. Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.194377) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 1529660 bytes OK Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.194396) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.196415) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.196437) EVENT_LOG_v1 {"time_micros": 1763892428196431, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.196458) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1551199, prev total WAL file size 1551523, number of live WAL files 2. Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.197120) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323736' seq:72057594037927935, type:22 .. 
'6C6F676D0034353330' seq:0, type:0; will stop at (end) Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(1493KB)], [60(17MB)] Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428197164, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 19406323, "oldest_snapshot_seqno": -1} Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 13510 keys, 18794353 bytes, temperature: kUnknown Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428273828, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 18794353, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18715773, "index_size": 43727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33797, "raw_key_size": 364125, "raw_average_key_size": 26, "raw_value_size": 18484122, "raw_average_value_size": 1368, "num_data_blocks": 1631, "num_entries": 13510, "num_filter_entries": 13510, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}} Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.274197) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 18794353 bytes Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.275974) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 252.7 rd, 244.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 17.0 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(25.0) write-amplify(12.3) OK, records in: 14073, records dropped: 563 output_compression: NoCompression Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.276023) EVENT_LOG_v1 {"time_micros": 1763892428276010, "job": 36, "event": "compaction_finished", "compaction_time_micros": 76787, "compaction_time_cpu_micros": 47093, "output_level": 6, "num_output_files": 1, "total_output_size": 18794353, "num_input_records": 14073, "num_output_records": 13510, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428276321, "job": 36, "event": "table_file_deletion", "file_number": 62} Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428278807, "job": 36, "event": "table_file_deletion", "file_number": 60} Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.197019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.278842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.278847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.278850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.278853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:07:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:07:08.278855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:07:08 localhost nova_compute[280939]: 2025-11-23 10:07:08.632 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:08 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", 
"sub_name": "39ff48c9-2d4a-44e8-8b2a-3315c482323f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:39ff48c9-2d4a-44e8-8b2a-3315c482323f, vol_name:cephfs) < "" Nov 23 05:07:09 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/39ff48c9-2d4a-44e8-8b2a-3315c482323f/.meta.tmp' Nov 23 05:07:09 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/39ff48c9-2d4a-44e8-8b2a-3315c482323f/.meta.tmp' to config b'/volumes/_nogroup/39ff48c9-2d4a-44e8-8b2a-3315c482323f/.meta' Nov 23 05:07:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:39ff48c9-2d4a-44e8-8b2a-3315c482323f, vol_name:cephfs) < "" Nov 23 05:07:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "39ff48c9-2d4a-44e8-8b2a-3315c482323f", "format": "json"}]: dispatch Nov 23 05:07:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:39ff48c9-2d4a-44e8-8b2a-3315c482323f, vol_name:cephfs) < "" Nov 23 05:07:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:39ff48c9-2d4a-44e8-8b2a-3315c482323f, vol_name:cephfs) < "" Nov 23 05:07:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:07:09 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:07:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 982 MiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 92 KiB/s rd, 38 MiB/s wr, 151 op/s Nov 23 05:07:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:09.747 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:07:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:09.747 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:07:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:09.748 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
Nov 23 05:07:09 localhost systemd[1]: tmp-crun.ZNARDu.mount: Deactivated successfully. Nov 23 05:07:09 localhost podman[325319]: 2025-11-23 10:07:09.888118623 +0000 UTC m=+0.076903660 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter) Nov 23 05:07:09 localhost podman[325319]: 2025-11-23 10:07:09.899524714 +0000 UTC m=+0.088309752 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, release=1755695350, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter) Nov 23 05:07:09 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
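The systemd and podman entries above show transient "podman healthcheck run" units firing for the openstack_network_exporter container, reporting health_status=healthy, and then deactivating. A small sketch of running the same check by hand and reading the result back is below; the container name is taken from the log, while the subprocess wrapper and the inspected keys are assumptions based on stock podman behaviour.

    import json
    import subprocess

    CONTAINER = "openstack_network_exporter"  # container name from the entries above

    # "podman healthcheck run" executes the container's configured healthcheck
    # once, which is what the transient systemd units in the log do on a timer.
    subprocess.run(["podman", "healthcheck", "run", CONTAINER], check=True)

    # The resulting status can be read back from the container state.
    state = json.loads(subprocess.run(
        ["podman", "inspect", CONTAINER, "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    print(state[0]["State"]["Health"]["Status"])  # e.g. "healthy"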
Nov 23 05:07:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:07:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:07:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 23 05:07:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:07:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:07:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:07:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:07:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:07:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:10 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:10 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:07:10 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:07:11 localhost 
ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6f7ac182-1c2a-4635-8587-b5ecb5678c06", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6f7ac182-1c2a-4635-8587-b5ecb5678c06, vol_name:cephfs) < "" Nov 23 05:07:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 982 MiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 73 KiB/s rd, 30 MiB/s wr, 120 op/s Nov 23 05:07:11 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6f7ac182-1c2a-4635-8587-b5ecb5678c06/.meta.tmp' Nov 23 05:07:11 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6f7ac182-1c2a-4635-8587-b5ecb5678c06/.meta.tmp' to config b'/volumes/_nogroup/6f7ac182-1c2a-4635-8587-b5ecb5678c06/.meta' Nov 23 05:07:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6f7ac182-1c2a-4635-8587-b5ecb5678c06, vol_name:cephfs) < "" Nov 23 05:07:11 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6f7ac182-1c2a-4635-8587-b5ecb5678c06", "format": "json"}]: dispatch Nov 23 05:07:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6f7ac182-1c2a-4635-8587-b5ecb5678c06, vol_name:cephfs) < "" Nov 23 05:07:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6f7ac182-1c2a-4635-8587-b5ecb5678c06, vol_name:cephfs) < "" Nov 23 05:07:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:07:11 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:07:11 localhost nova_compute[280939]: 2025-11-23 10:07:11.989 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:12 localhost nova_compute[280939]: 2025-11-23 10:07:12.316 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:07:12 localhost nova_compute[280939]: 2025-11-23 10:07:12.316 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:07:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' 
cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "39ff48c9-2d4a-44e8-8b2a-3315c482323f", "format": "json"}]: dispatch Nov 23 05:07:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:39ff48c9-2d4a-44e8-8b2a-3315c482323f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:39ff48c9-2d4a-44e8-8b2a-3315c482323f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:12 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:07:12.396+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '39ff48c9-2d4a-44e8-8b2a-3315c482323f' of type subvolume Nov 23 05:07:12 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '39ff48c9-2d4a-44e8-8b2a-3315c482323f' of type subvolume Nov 23 05:07:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "39ff48c9-2d4a-44e8-8b2a-3315c482323f", "force": true, "format": "json"}]: dispatch Nov 23 05:07:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:39ff48c9-2d4a-44e8-8b2a-3315c482323f, vol_name:cephfs) < "" Nov 23 05:07:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/39ff48c9-2d4a-44e8-8b2a-3315c482323f'' moved to trashcan Nov 23 05:07:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:07:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:39ff48c9-2d4a-44e8-8b2a-3315c482323f, vol_name:cephfs) < "" Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.579 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost 
ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 
localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:07:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:07:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e241 do_prune osdmap full prune enabled Nov 23 05:07:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e242 e242: 6 total, 6 up, 6 in Nov 23 05:07:13 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e242: 6 total, 6 up, 6 in Nov 23 05:07:13 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:07:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:07:13 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:13 localhost 
ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice_bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:07:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 982 MiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 37 KiB/s rd, 16 MiB/s wr, 62 op/s Nov 23 05:07:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:07:13 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:13 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:13 localhost nova_compute[280939]: 2025-11-23 10:07:13.669 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:14 localhost nova_compute[280939]: 2025-11-23 10:07:14.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:07:14 localhost nova_compute[280939]: 2025-11-23 10:07:14.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:07:14 localhost nova_compute[280939]: 2025-11-23 10:07:14.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:07:14 localhost nova_compute[280939]: 2025-11-23 10:07:14.149 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:07:14 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:14 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:14 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:07:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:07:14 localhost podman[325343]: 2025-11-23 10:07:14.917170716 +0000 UTC m=+0.091938194 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:07:14 localhost podman[325342]: 2025-11-23 10:07:14.982821027 +0000 UTC m=+0.161468804 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:07:15 localhost podman[325343]: 2025-11-23 10:07:15.002579046 +0000 UTC m=+0.177346514 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:07:15 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
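The mon audit entries above show client.alice_bob being (re)created via auth get-or-create with caps that pin the MDS side to one subvolume path and the OSD side to the manila_data pool and its fsvolumens_ RADOS namespace. A sketch of issuing the equivalent call from the ceph CLI follows; the caps strings are copied from the log, while the Python wrapper around subprocess is an assumption.

    import subprocess

    # Caps copied from the "auth get-or-create" entries above: read-only access
    # limited to one subvolume path (MDS) and one RADOS namespace (OSD).
    SUBVOL_PATH = ("/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/"
                   "80c7ebbb-31aa-44c5-8825-14a5c437eff6")
    caps = [
        "mds", f"allow r path={SUBVOL_PATH}",
        "osd", "allow r pool=manila_data "
               "namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4",
        "mon", "allow r",
    ]
    keyring = subprocess.run(
        ["ceph", "auth", "get-or-create", "client.alice_bob", *caps],
        check=True, capture_output=True, text=True).stdout
    print(keyring)  # prints the [client.alice_bob] keyring section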
Nov 23 05:07:15 localhost podman[325342]: 2025-11-23 10:07:15.021601812 +0000 UTC m=+0.200249599 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:07:15 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
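Several entries in this window (for 87b8129a-..., 39ff48c9-..., and, just below, 3e36dbf7-...) repeat the same teardown pattern: an fs clone status probe that the mgr rejects with errno 95 because the target is a plain subvolume rather than a clone, followed by fs subvolume rm with force and a move to the trashcan. The sketch below reproduces that probe-then-remove flow against the ceph CLI; treating the "not allowed on subvolume" reply as "not a clone" mirrors the log, but the helper itself and its error handling are assumptions, not the driver's actual code.

    import subprocess

    def remove_subvolume(vol, sub):
        # Probe "fs clone status" first; for a plain subvolume the mgr answers
        # errno 95 (EOPNOTSUPP), exactly the "clone-status is not allowed"
        # replies seen in the log, so that reply is treated as "not a clone".
        probe = subprocess.run(["ceph", "fs", "clone", "status", vol, sub],
                               capture_output=True, text=True)
        if probe.returncode != 0 and "not allowed on subvolume" not in probe.stderr:
            raise RuntimeError(probe.stderr.strip())  # some other failure
        # "--force" matches the force:True flag in the _cmd_fs_subvolume_rm
        # entries; the subvolume is trashed and purged asynchronously.
        subprocess.run(["ceph", "fs", "subvolume", "rm", vol, sub, "--force"],
                       check=True)

    remove_subvolume("cephfs", "3e36dbf7-3485-45bc-a6bc-a043d0884a88")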
Nov 23 05:07:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 1.1 GiB data, 3.7 GiB used, 38 GiB / 42 GiB avail; 83 KiB/s rd, 29 MiB/s wr, 133 op/s Nov 23 05:07:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3e36dbf7-3485-45bc-a6bc-a043d0884a88", "format": "json"}]: dispatch Nov 23 05:07:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3e36dbf7-3485-45bc-a6bc-a043d0884a88, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3e36dbf7-3485-45bc-a6bc-a043d0884a88, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:07:15.649+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3e36dbf7-3485-45bc-a6bc-a043d0884a88' of type subvolume Nov 23 05:07:15 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3e36dbf7-3485-45bc-a6bc-a043d0884a88' of type subvolume Nov 23 05:07:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3e36dbf7-3485-45bc-a6bc-a043d0884a88", "force": true, "format": "json"}]: dispatch Nov 23 05:07:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3e36dbf7-3485-45bc-a6bc-a043d0884a88, vol_name:cephfs) < "" Nov 23 05:07:15 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3e36dbf7-3485-45bc-a6bc-a043d0884a88'' moved to trashcan Nov 23 05:07:15 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:07:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3e36dbf7-3485-45bc-a6bc-a043d0884a88, vol_name:cephfs) < "" Nov 23 05:07:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6f7ac182-1c2a-4635-8587-b5ecb5678c06", "format": "json"}]: dispatch Nov 23 05:07:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6f7ac182-1c2a-4635-8587-b5ecb5678c06, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6f7ac182-1c2a-4635-8587-b5ecb5678c06, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:07:15.983+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6f7ac182-1c2a-4635-8587-b5ecb5678c06' of type subvolume Nov 23 05:07:15 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is 
not allowed on subvolume '6f7ac182-1c2a-4635-8587-b5ecb5678c06' of type subvolume Nov 23 05:07:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6f7ac182-1c2a-4635-8587-b5ecb5678c06", "force": true, "format": "json"}]: dispatch Nov 23 05:07:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6f7ac182-1c2a-4635-8587-b5ecb5678c06, vol_name:cephfs) < "" Nov 23 05:07:16 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6f7ac182-1c2a-4635-8587-b5ecb5678c06'' moved to trashcan Nov 23 05:07:16 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:07:16 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6f7ac182-1c2a-4635-8587-b5ecb5678c06, vol_name:cephfs) < "" Nov 23 05:07:16 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:07:16 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:16 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:07:16 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:16 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 23 05:07:16 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:07:16 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:07:16 localhost nova_compute[280939]: 2025-11-23 10:07:16.991 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:17 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:07:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, 
prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:17 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:07:17 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:07:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:17 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:17.069 262301 INFO neutron.agent.linux.ip_lib [None req-53f6bf48-fc23-4731-bc4a-a8c0d47acb39 - - - - - -] Device tap50620687-19 cannot be used as it has no MAC address#033[00m Nov 23 05:07:17 localhost podman[239764]: time="2025-11-23T10:07:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:07:17 localhost nova_compute[280939]: 2025-11-23 10:07:17.092 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:17 localhost kernel: device tap50620687-19 entered promiscuous mode Nov 23 05:07:17 localhost podman[239764]: @ - - [23/Nov/2025:10:07:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:07:17 localhost NetworkManager[5966]: [1763892437.1046] manager: (tap50620687-19): new Generic device (/org/freedesktop/NetworkManager/Devices/57) Nov 23 05:07:17 localhost systemd-udevd[325394]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:07:17 localhost ovn_controller[153771]: 2025-11-23T10:07:17Z|00348|binding|INFO|Claiming lport 50620687-1913-45f3-91da-4f90a2f74130 for this chassis. 
Nov 23 05:07:17 localhost ovn_controller[153771]: 2025-11-23T10:07:17Z|00349|binding|INFO|50620687-1913-45f3-91da-4f90a2f74130: Claiming unknown Nov 23 05:07:17 localhost nova_compute[280939]: 2025-11-23 10:07:17.111 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:17 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:17.125 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-e78ff008-0a31-4b50-b91d-159757c7a740', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e78ff008-0a31-4b50-b91d-159757c7a740', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '353e3a021df940c0838bf3dc0b3f265a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62148aa2-644a-479b-9738-8e67e725dbbb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=50620687-1913-45f3-91da-4f90a2f74130) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:07:17 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:17.128 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 50620687-1913-45f3-91da-4f90a2f74130 in datapath e78ff008-0a31-4b50-b91d-159757c7a740 bound to our chassis#033[00m Nov 23 05:07:17 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:17.130 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port 67cc631e-4483-4507-8a0b-c749b9e11f15 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:07:17 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:17.130 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e78ff008-0a31-4b50-b91d-159757c7a740, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:07:17 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:17.131 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[d54bbab6-af8a-44cb-a2d0-d689b383799d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:07:17 localhost nova_compute[280939]: 2025-11-23 10:07:17.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:07:17 localhost journal[229336]: ethtool ioctl error on tap50620687-19: No such device Nov 23 05:07:17 localhost ovn_controller[153771]: 2025-11-23T10:07:17Z|00350|binding|INFO|Setting lport 50620687-1913-45f3-91da-4f90a2f74130 
ovn-installed in OVS Nov 23 05:07:17 localhost ovn_controller[153771]: 2025-11-23T10:07:17Z|00351|binding|INFO|Setting lport 50620687-1913-45f3-91da-4f90a2f74130 up in Southbound Nov 23 05:07:17 localhost journal[229336]: ethtool ioctl error on tap50620687-19: No such device Nov 23 05:07:17 localhost journal[229336]: ethtool ioctl error on tap50620687-19: No such device Nov 23 05:07:17 localhost nova_compute[280939]: 2025-11-23 10:07:17.144 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:17 localhost podman[239764]: @ - - [23/Nov/2025:10:07:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18744 "" "Go-http-client/1.1" Nov 23 05:07:17 localhost journal[229336]: ethtool ioctl error on tap50620687-19: No such device Nov 23 05:07:17 localhost journal[229336]: ethtool ioctl error on tap50620687-19: No such device Nov 23 05:07:17 localhost journal[229336]: ethtool ioctl error on tap50620687-19: No such device Nov 23 05:07:17 localhost journal[229336]: ethtool ioctl error on tap50620687-19: No such device Nov 23 05:07:17 localhost journal[229336]: ethtool ioctl error on tap50620687-19: No such device Nov 23 05:07:17 localhost nova_compute[280939]: 2025-11-23 10:07:17.187 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:17 localhost nova_compute[280939]: 2025-11-23 10:07:17.214 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:17 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:17 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:07:17 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:07:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 1.1 GiB data, 3.7 GiB used, 38 GiB / 42 GiB avail; 67 KiB/s rd, 23 MiB/s wr, 107 op/s Nov 23 05:07:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:18 localhost nova_compute[280939]: 2025-11-23 10:07:18.703 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:18 localhost podman[325465]: Nov 23 05:07:18 localhost podman[325465]: 2025-11-23 10:07:18.722011338 +0000 UTC m=+0.090159668 container create f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e78ff008-0a31-4b50-b91d-159757c7a740, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3) Nov 23 05:07:18 localhost systemd[1]: Started libpod-conmon-f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124.scope. Nov 23 05:07:18 localhost podman[325465]: 2025-11-23 10:07:18.677748875 +0000 UTC m=+0.045897225 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:07:18 localhost systemd[1]: Started libcrun container. Nov 23 05:07:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c9bbead5474818129c24c620286f28511831bf83f9960d830417f795c5bea4a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:07:18 localhost podman[325465]: 2025-11-23 10:07:18.799821986 +0000 UTC m=+0.167970286 container init f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e78ff008-0a31-4b50-b91d-159757c7a740, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS) Nov 23 05:07:18 localhost podman[325465]: 2025-11-23 10:07:18.811188636 +0000 UTC m=+0.179336936 container start f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e78ff008-0a31-4b50-b91d-159757c7a740, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:07:18 localhost dnsmasq[325483]: started, version 2.85 cachesize 150 Nov 23 05:07:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "22bdd1a7-cbe9-40dd-ac66-a0446867da56", "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:22bdd1a7-cbe9-40dd-ac66-a0446867da56, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Nov 23 05:07:18 localhost dnsmasq[325483]: DNS service limited to local subnets Nov 23 05:07:18 localhost dnsmasq[325483]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:07:18 localhost dnsmasq[325483]: warning: no upstream servers configured Nov 23 05:07:18 localhost dnsmasq-dhcp[325483]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:07:18 localhost dnsmasq[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/addn_hosts - 0 addresses Nov 23 05:07:18 localhost dnsmasq-dhcp[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/host Nov 23 05:07:18 localhost dnsmasq-dhcp[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/opts Nov 23 05:07:18 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:18.928 262301 INFO neutron.agent.dhcp.agent [None 
req-ad8982e7-8a82-49f0-8d90-24ba9d946198 - - - - - -] DHCP configuration for ports {'f0eb1a89-593c-4d0c-83fd-6d72e4f45e34'} is completed#033[00m Nov 23 05:07:19 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:19.084 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:07:18Z, description=, device_id=27f60ba5-7556-40d8-99c6-791c5d4b29f0, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=da612928-ae83-4029-ba7e-d9ea0aaef679, ip_allocation=immediate, mac_address=fa:16:3e:87:da:92, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:07:14Z, description=, dns_domain=, id=e78ff008-0a31-4b50-b91d-159757c7a740, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1172186575-network, port_security_enabled=True, project_id=353e3a021df940c0838bf3dc0b3f265a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12468, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3359, status=ACTIVE, subnets=['7da32569-515c-4a27-9621-f8fc6c60b7c9'], tags=[], tenant_id=353e3a021df940c0838bf3dc0b3f265a, updated_at=2025-11-23T10:07:15Z, vlan_transparent=None, network_id=e78ff008-0a31-4b50-b91d-159757c7a740, port_security_enabled=False, project_id=353e3a021df940c0838bf3dc0b3f265a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3367, status=DOWN, tags=[], tenant_id=353e3a021df940c0838bf3dc0b3f265a, updated_at=2025-11-23T10:07:18Z on network e78ff008-0a31-4b50-b91d-159757c7a740#033[00m Nov 23 05:07:19 localhost nova_compute[280939]: 2025-11-23 10:07:19.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:07:19 localhost dnsmasq[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/addn_hosts - 1 addresses Nov 23 05:07:19 localhost dnsmasq-dhcp[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/host Nov 23 05:07:19 localhost podman[325501]: 2025-11-23 10:07:19.286378203 +0000 UTC m=+0.057111080 container kill f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e78ff008-0a31-4b50-b91d-159757c7a740, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Nov 23 05:07:19 localhost dnsmasq-dhcp[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/opts Nov 23 05:07:19 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:19.488 262301 INFO neutron.agent.dhcp.agent [None req-68b7eff7-8969-4761-aece-2ee5f65a71bc - - - - - -] DHCP configuration for ports {'da612928-ae83-4029-ba7e-d9ea0aaef679'} is completed#033[00m 
Nov 23 05:07:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 2.1 MiB/s rd, 22 MiB/s wr, 89 op/s Nov 23 05:07:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:22bdd1a7-cbe9-40dd-ac66-a0446867da56, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Nov 23 05:07:19 localhost systemd[1]: tmp-crun.ZING9O.mount: Deactivated successfully. Nov 23 05:07:19 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:19.850 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:07:18Z, description=, device_id=27f60ba5-7556-40d8-99c6-791c5d4b29f0, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=da612928-ae83-4029-ba7e-d9ea0aaef679, ip_allocation=immediate, mac_address=fa:16:3e:87:da:92, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:07:14Z, description=, dns_domain=, id=e78ff008-0a31-4b50-b91d-159757c7a740, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1172186575-network, port_security_enabled=True, project_id=353e3a021df940c0838bf3dc0b3f265a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12468, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3359, status=ACTIVE, subnets=['7da32569-515c-4a27-9621-f8fc6c60b7c9'], tags=[], tenant_id=353e3a021df940c0838bf3dc0b3f265a, updated_at=2025-11-23T10:07:15Z, vlan_transparent=None, network_id=e78ff008-0a31-4b50-b91d-159757c7a740, port_security_enabled=False, project_id=353e3a021df940c0838bf3dc0b3f265a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3367, status=DOWN, tags=[], tenant_id=353e3a021df940c0838bf3dc0b3f265a, updated_at=2025-11-23T10:07:18Z on network e78ff008-0a31-4b50-b91d-159757c7a740#033[00m Nov 23 05:07:20 localhost systemd[1]: tmp-crun.iyVU2J.mount: Deactivated successfully. 
Nov 23 05:07:20 localhost dnsmasq[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/addn_hosts - 1 addresses Nov 23 05:07:20 localhost dnsmasq-dhcp[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/host Nov 23 05:07:20 localhost dnsmasq-dhcp[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/opts Nov 23 05:07:20 localhost podman[325539]: 2025-11-23 10:07:20.060370005 +0000 UTC m=+0.062390244 container kill f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e78ff008-0a31-4b50-b91d-159757c7a740, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 23 05:07:20 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:07:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:07:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:07:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:07:20 localhost nova_compute[280939]: 2025-11-23 10:07:20.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:07:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:07:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:20 localhost 
nova_compute[280939]: 2025-11-23 10:07:20.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:07:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:20 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:20.354 262301 INFO neutron.agent.dhcp.agent [None req-50cd2f51-06d8-4003-ab5f-c195b46cadf3 - - - - - -] DHCP configuration for ports {'da612928-ae83-4029-ba7e-d9ea0aaef679'} is completed#033[00m Nov 23 05:07:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e242 do_prune osdmap full prune enabled Nov 23 05:07:20 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:07:20 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:20 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e243 e243: 6 total, 6 up, 6 in Nov 23 05:07:20 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e243: 6 total, 6 up, 6 in Nov 23 05:07:21 localhost ovn_controller[153771]: 2025-11-23T10:07:21Z|00352|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 05:07:21 localhost ovn_controller[153771]: 2025-11-23T10:07:21Z|00353|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 05:07:21 localhost ovn_controller[153771]: 2025-11-23T10:07:21Z|00354|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0 Nov 23 05:07:21 localhost nova_compute[280939]: 2025-11-23 10:07:21.253 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:21 localhost nova_compute[280939]: 2025-11-23 10:07:21.269 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:21 localhost nova_compute[280939]: 2025-11-23 10:07:21.273 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:21 localhost nova_compute[280939]: 2025-11-23 10:07:21.291 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:21 localhost nova_compute[280939]: 2025-11-23 10:07:21.299 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:21 localhost nova_compute[280939]: 2025-11-23 10:07:21.304 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 2.5 MiB/s rd, 27 MiB/s wr, 106 op/s Nov 23 05:07:21 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "22bdd1a7-cbe9-40dd-ac66-a0446867da56", "force": true, "format": "json"}]: dispatch Nov 23 05:07:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:22bdd1a7-cbe9-40dd-ac66-a0446867da56, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Nov 23 05:07:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:22bdd1a7-cbe9-40dd-ac66-a0446867da56, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Nov 23 05:07:21 localhost nova_compute[280939]: 2025-11-23 10:07:21.994 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:22 localhost nova_compute[280939]: 2025-11-23 10:07:22.235 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:22 localhost nova_compute[280939]: 2025-11-23 10:07:22.278 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e243 do_prune osdmap full prune enabled Nov 23 05:07:22 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e244 e244: 6 total, 6 up, 6 in Nov 23 05:07:22 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e244: 6 total, 6 up, 6 in Nov 23 05:07:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:23 localhost nova_compute[280939]: 2025-11-23 10:07:23.221 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:07:23 Nov 23 05:07:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:07:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:07:23 localhost ceph-mgr[286671]: [balancer 
INFO root] pools ['backups', '.mgr', 'vms', 'images', 'manila_data', 'manila_metadata', 'volumes'] Nov 23 05:07:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 05:07:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 2.6 MiB/s rd, 13 MiB/s wr, 31 op/s Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:07:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:07:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014871994521217196 of space, bias 1.0, pg target 0.2969441572736367 quantized to 32 (current 32) Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0752100345477387 of space, bias 1.0, pg target 15.016936898031828 quantized to 32 (current 32) Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.034576819281593e-05 quantized to 32 (current 32) Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002517288409640797 quantized to 32 (current 32) Nov 23 05:07:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:07:23 localhost 
ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0005403536781127861 of space, bias 4.0, pg target 0.39914125023264463 quantized to 16 (current 16) Nov 23 05:07:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:07:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:07:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:07:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:07:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:07:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:07:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:07:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:07:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:07:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:07:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 23 05:07:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:07:23 localhost nova_compute[280939]: 2025-11-23 10:07:23.705 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, 
vol_name:cephfs) < "" Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:07:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:24 localhost nova_compute[280939]: 2025-11-23 10:07:24.129 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:07:24 localhost nova_compute[280939]: 2025-11-23 10:07:24.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:07:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:07:24 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/987695658' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:07:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:07:24 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/987695658' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:07:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:07:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:07:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:07:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e50: np0005532584.naxwxy(active, since 12m), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 05:07:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:07:24 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2281623741' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:07:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:07:24 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2281623741' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.149 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.149 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.149 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.150 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.150 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:07:25 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "33decdd7-a730-4b1d-9649-96ef50bb9878", "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:33decdd7-a730-4b1d-9649-96ef50bb9878, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Nov 23 05:07:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:33decdd7-a730-4b1d-9649-96ef50bb9878, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Nov 23 05:07:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 246 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 18 MiB/s wr, 193 op/s Nov 23 05:07:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:07:25 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1381072346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.613 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.813 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.815 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11454MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.816 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.816 280943 DEBUG oslo_concurrency.lockutils 
[None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.901 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.902 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:07:25 localhost nova_compute[280939]: 2025-11-23 10:07:25.921 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:07:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:07:26 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2535752392' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:07:26 localhost nova_compute[280939]: 2025-11-23 10:07:26.373 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:07:26 localhost nova_compute[280939]: 2025-11-23 10:07:26.379 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:07:26 localhost nova_compute[280939]: 2025-11-23 10:07:26.410 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:07:26 localhost nova_compute[280939]: 2025-11-23 10:07:26.444 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:07:26 localhost 
nova_compute[280939]: 2025-11-23 10:07:26.444 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:07:26 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:07:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:07:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:07:26 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:07:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:07:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:26 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:26 localhost podman[325607]: 2025-11-23 10:07:26.911009889 +0000 UTC m=+0.088902789 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 23 05:07:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:26 localhost podman[325607]: 2025-11-23 10:07:26.945422009 +0000 UTC m=+0.123314919 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 23 05:07:26 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:07:27 localhost nova_compute[280939]: 2025-11-23 10:07:27.025 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 246 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 98 KiB/s rd, 4.7 MiB/s wr, 161 op/s Nov 23 05:07:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:07:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e244 do_prune osdmap full prune enabled Nov 23 05:07:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e245 e245: 6 total, 6 up, 6 in Nov 23 05:07:28 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e245: 6 total, 6 up, 6 in Nov 23 05:07:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "33decdd7-a730-4b1d-9649-96ef50bb9878", "force": true, "format": "json"}]: dispatch Nov 23 05:07:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:33decdd7-a730-4b1d-9649-96ef50bb9878, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Nov 23 05:07:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:33decdd7-a730-4b1d-9649-96ef50bb9878, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Nov 23 05:07:28 localhost nova_compute[280939]: 2025-11-23 10:07:28.745 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 112 KiB/s rd, 4.8 MiB/s wr, 187 op/s Nov 23 05:07:30 localhost ceph-mgr[286671]: 
log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:07:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:07:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:07:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 23 05:07:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:07:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:07:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:30 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:07:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:30 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:07:30 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:07:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:30 localhost nova_compute[280939]: 2025-11-23 10:07:30.440 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:07:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:07:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:07:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:07:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v525: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 101 KiB/s rd, 4.3 MiB/s wr, 169 op/s Nov 23 05:07:31 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9757f6f1-7e41-4cb6-a23b-fdd5de1e9889", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:31 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9757f6f1-7e41-4cb6-a23b-fdd5de1e9889, vol_name:cephfs) < "" Nov 23 05:07:31 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9757f6f1-7e41-4cb6-a23b-fdd5de1e9889/.meta.tmp' Nov 23 05:07:31 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9757f6f1-7e41-4cb6-a23b-fdd5de1e9889/.meta.tmp' to config b'/volumes/_nogroup/9757f6f1-7e41-4cb6-a23b-fdd5de1e9889/.meta' Nov 23 05:07:31 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9757f6f1-7e41-4cb6-a23b-fdd5de1e9889, vol_name:cephfs) < "" Nov 23 05:07:31 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9757f6f1-7e41-4cb6-a23b-fdd5de1e9889", "format": "json"}]: dispatch Nov 23 05:07:31 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9757f6f1-7e41-4cb6-a23b-fdd5de1e9889, vol_name:cephfs) < "" Nov 23 05:07:31 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9757f6f1-7e41-4cb6-a23b-fdd5de1e9889, vol_name:cephfs) < "" Nov 23 05:07:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:07:31 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:07:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:32.028 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 
23 05:07:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:32.029 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:07:32 localhost nova_compute[280939]: 2025-11-23 10:07:32.061 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:32 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:32.354 262301 INFO neutron.agent.linux.ip_lib [None req-ff7a262f-e474-43e9-a590-e89639be4672 - - - - - -] Device tap68ca76ce-ff cannot be used as it has no MAC address#033[00m Nov 23 05:07:32 localhost nova_compute[280939]: 2025-11-23 10:07:32.378 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:32 localhost kernel: device tap68ca76ce-ff entered promiscuous mode Nov 23 05:07:32 localhost NetworkManager[5966]: [1763892452.3921] manager: (tap68ca76ce-ff): new Generic device (/org/freedesktop/NetworkManager/Devices/58) Nov 23 05:07:32 localhost ovn_controller[153771]: 2025-11-23T10:07:32Z|00355|binding|INFO|Claiming lport 68ca76ce-ff51-47e8-a962-3d13341d1c99 for this chassis. Nov 23 05:07:32 localhost ovn_controller[153771]: 2025-11-23T10:07:32Z|00356|binding|INFO|68ca76ce-ff51-47e8-a962-3d13341d1c99: Claiming unknown Nov 23 05:07:32 localhost systemd-udevd[325635]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:07:32 localhost nova_compute[280939]: 2025-11-23 10:07:32.395 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:32.404 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/16', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-de7ea64f-b6be-4aab-b701-9361e8b84557', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de7ea64f-b6be-4aab-b701-9361e8b84557', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f48fa865c4047a080902678e51be06e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c292f20-3952-4b3e-9deb-abb97dabd42f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=68ca76ce-ff51-47e8-a962-3d13341d1c99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:07:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:32.406 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 68ca76ce-ff51-47e8-a962-3d13341d1c99 in datapath de7ea64f-b6be-4aab-b701-9361e8b84557 bound to our chassis#033[00m Nov 23 05:07:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:32.409 159415 
DEBUG neutron.agent.ovn.metadata.agent [-] Port 39c4e0fa-810d-4447-8bcd-077623cac60c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:07:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:32.409 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de7ea64f-b6be-4aab-b701-9361e8b84557, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:07:32 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:32.411 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[7e480d14-424a-4cf3-8ac1-feaa53ba3664]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:07:32 localhost journal[229336]: ethtool ioctl error on tap68ca76ce-ff: No such device Nov 23 05:07:32 localhost ovn_controller[153771]: 2025-11-23T10:07:32Z|00357|binding|INFO|Setting lport 68ca76ce-ff51-47e8-a962-3d13341d1c99 ovn-installed in OVS Nov 23 05:07:32 localhost ovn_controller[153771]: 2025-11-23T10:07:32Z|00358|binding|INFO|Setting lport 68ca76ce-ff51-47e8-a962-3d13341d1c99 up in Southbound Nov 23 05:07:32 localhost nova_compute[280939]: 2025-11-23 10:07:32.439 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:32 localhost nova_compute[280939]: 2025-11-23 10:07:32.441 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:32 localhost journal[229336]: ethtool ioctl error on tap68ca76ce-ff: No such device Nov 23 05:07:32 localhost journal[229336]: ethtool ioctl error on tap68ca76ce-ff: No such device Nov 23 05:07:32 localhost journal[229336]: ethtool ioctl error on tap68ca76ce-ff: No such device Nov 23 05:07:32 localhost journal[229336]: ethtool ioctl error on tap68ca76ce-ff: No such device Nov 23 05:07:32 localhost journal[229336]: ethtool ioctl error on tap68ca76ce-ff: No such device Nov 23 05:07:32 localhost journal[229336]: ethtool ioctl error on tap68ca76ce-ff: No such device Nov 23 05:07:32 localhost nova_compute[280939]: 2025-11-23 10:07:32.471 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:32 localhost journal[229336]: ethtool ioctl error on tap68ca76ce-ff: No such device Nov 23 05:07:32 localhost nova_compute[280939]: 2025-11-23 10:07:32.500 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e245 do_prune osdmap full prune enabled Nov 23 05:07:33 localhost systemd-journald[47422]: Data hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Nov 23 05:07:33 localhost systemd-journald[47422]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating. 
Nov 23 05:07:33 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 05:07:33 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:33 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6, vol_name:cephfs) < "" Nov 23 05:07:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e246 e246: 6 total, 6 up, 6 in Nov 23 05:07:33 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e246: 6 total, 6 up, 6 in Nov 23 05:07:33 localhost podman[325707]: Nov 23 05:07:33 localhost podman[325707]: 2025-11-23 10:07:33.326551859 +0000 UTC m=+0.092747108 container create 8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7ea64f-b6be-4aab-b701-9361e8b84557, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 23 05:07:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:07:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:07:33 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 23 05:07:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:07:33 localhost systemd[1]: Started libpod-conmon-8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e.scope. 
Nov 23 05:07:33 localhost podman[325707]: 2025-11-23 10:07:33.284864215 +0000 UTC m=+0.051059484 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:07:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6/.meta.tmp' Nov 23 05:07:33 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6/.meta.tmp' to config b'/volumes/_nogroup/8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6/.meta' Nov 23 05:07:33 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6, vol_name:cephfs) < "" Nov 23 05:07:33 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6", "format": "json"}]: dispatch Nov 23 05:07:33 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6, vol_name:cephfs) < "" Nov 23 05:07:33 localhost systemd[1]: Started libcrun container. Nov 23 05:07:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f6bb4d34b376f40e27177a827c1a858138f93f43f3777bc464fa59d8721fdf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:07:33 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6, vol_name:cephfs) < "" Nov 23 05:07:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:07:33 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:07:33 localhost podman[325707]: 2025-11-23 10:07:33.408841224 +0000 UTC m=+0.175036463 container init 8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7ea64f-b6be-4aab-b701-9361e8b84557, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 05:07:33 localhost podman[325707]: 2025-11-23 10:07:33.417537192 +0000 UTC m=+0.183732431 container start 8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7ea64f-b6be-4aab-b701-9361e8b84557, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack 
Kubernetes Operator team, tcib_managed=true) Nov 23 05:07:33 localhost dnsmasq[325752]: started, version 2.85 cachesize 150 Nov 23 05:07:33 localhost dnsmasq[325752]: DNS service limited to local subnets Nov 23 05:07:33 localhost dnsmasq[325752]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:07:33 localhost dnsmasq[325752]: warning: no upstream servers configured Nov 23 05:07:33 localhost dnsmasq-dhcp[325752]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:07:33 localhost dnsmasq[325752]: read /var/lib/neutron/dhcp/de7ea64f-b6be-4aab-b701-9361e8b84557/addn_hosts - 0 addresses Nov 23 05:07:33 localhost dnsmasq-dhcp[325752]: read /var/lib/neutron/dhcp/de7ea64f-b6be-4aab-b701-9361e8b84557/host Nov 23 05:07:33 localhost dnsmasq-dhcp[325752]: read /var/lib/neutron/dhcp/de7ea64f-b6be-4aab-b701-9361e8b84557/opts Nov 23 05:07:33 localhost podman[325720]: 2025-11-23 10:07:33.460094593 +0000 UTC m=+0.090736976 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 23 05:07:33 localhost podman[325723]: 2025-11-23 10:07:33.494325017 +0000 UTC m=+0.121966807 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Nov 23 05:07:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 36 KiB/s wr, 25 op/s Nov 23 05:07:33 localhost podman[325720]: 2025-11-23 10:07:33.553946843 +0000 UTC m=+0.184589236 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3) Nov 23 05:07:33 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
Nov 23 05:07:33 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:33.586 262301 INFO neutron.agent.dhcp.agent [None req-708afb71-e5b4-4957-b182-48993366ce84 - - - - - -] DHCP configuration for ports {'8a60c703-f92d-4465-af87-f6b11f920ec3'} is completed#033[00m Nov 23 05:07:33 localhost podman[325723]: 2025-11-23 10:07:33.596452514 +0000 UTC m=+0.224094334 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118) Nov 23 05:07:33 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 05:07:33 localhost podman[325722]: 2025-11-23 10:07:33.560616099 +0000 UTC m=+0.191430487 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:07:33 localhost podman[325722]: 2025-11-23 10:07:33.643502192 +0000 UTC m=+0.274316530 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 05:07:33 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:07:33 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, 
prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:33 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:07:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:07:33 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:33 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:07:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:07:33 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:33 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:33 localhost nova_compute[280939]: 2025-11-23 10:07:33.748 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:33 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": 
"auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e246 do_prune osdmap full prune enabled Nov 23 05:07:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e247 e247: 6 total, 6 up, 6 in Nov 23 05:07:34 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e247: 6 total, 6 up, 6 in Nov 23 05:07:35 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9757f6f1-7e41-4cb6-a23b-fdd5de1e9889", "format": "json"}]: dispatch Nov 23 05:07:35 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9757f6f1-7e41-4cb6-a23b-fdd5de1e9889, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:35 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9757f6f1-7e41-4cb6-a23b-fdd5de1e9889, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:35 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:07:35.073+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9757f6f1-7e41-4cb6-a23b-fdd5de1e9889' of type subvolume Nov 23 05:07:35 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9757f6f1-7e41-4cb6-a23b-fdd5de1e9889' of type subvolume Nov 23 05:07:35 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9757f6f1-7e41-4cb6-a23b-fdd5de1e9889", "force": true, "format": "json"}]: dispatch Nov 23 05:07:35 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9757f6f1-7e41-4cb6-a23b-fdd5de1e9889, vol_name:cephfs) < "" Nov 23 05:07:35 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9757f6f1-7e41-4cb6-a23b-fdd5de1e9889'' moved to trashcan Nov 23 05:07:35 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:07:35 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9757f6f1-7e41-4cb6-a23b-fdd5de1e9889, vol_name:cephfs) < "" Nov 23 05:07:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 221 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 2.9 MiB/s wr, 97 op/s Nov 23 05:07:35 localhost dnsmasq[325752]: exiting on receipt of SIGTERM Nov 23 05:07:35 localhost podman[325807]: 2025-11-23 10:07:35.754967013 +0000 UTC m=+0.067490330 container kill 8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7ea64f-b6be-4aab-b701-9361e8b84557, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 05:07:35 localhost systemd[1]: libpod-8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e.scope: Deactivated successfully. Nov 23 05:07:35 localhost ovn_controller[153771]: 2025-11-23T10:07:35Z|00359|binding|INFO|Removing iface tap68ca76ce-ff ovn-installed in OVS Nov 23 05:07:35 localhost ovn_controller[153771]: 2025-11-23T10:07:35Z|00360|binding|INFO|Removing lport 68ca76ce-ff51-47e8-a962-3d13341d1c99 ovn-installed in OVS Nov 23 05:07:35 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:35.808 159415 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 39c4e0fa-810d-4447-8bcd-077623cac60c with type ""#033[00m Nov 23 05:07:35 localhost nova_compute[280939]: 2025-11-23 10:07:35.809 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:35 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:35.810 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/16', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-de7ea64f-b6be-4aab-b701-9361e8b84557', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de7ea64f-b6be-4aab-b701-9361e8b84557', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f48fa865c4047a080902678e51be06e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c292f20-3952-4b3e-9deb-abb97dabd42f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=68ca76ce-ff51-47e8-a962-3d13341d1c99) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:07:35 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:35.812 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 68ca76ce-ff51-47e8-a962-3d13341d1c99 in datapath de7ea64f-b6be-4aab-b701-9361e8b84557 unbound from our chassis#033[00m Nov 23 05:07:35 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:35.815 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de7ea64f-b6be-4aab-b701-9361e8b84557, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:07:35 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:35.817 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7daa96-238f-4b60-b6b6-031cc4db5929]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:07:35 localhost nova_compute[280939]: 2025-11-23 10:07:35.818 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:35 localhost podman[325820]: 2025-11-23 10:07:35.843103518 +0000 UTC m=+0.070275396 container died 8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7ea64f-b6be-4aab-b701-9361e8b84557, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 05:07:35 localhost systemd[1]: tmp-crun.pTvc9g.mount: Deactivated successfully. Nov 23 05:07:35 localhost podman[325820]: 2025-11-23 10:07:35.879369025 +0000 UTC m=+0.106540863 container cleanup 8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7ea64f-b6be-4aab-b701-9361e8b84557, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:07:35 localhost systemd[1]: libpod-conmon-8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e.scope: Deactivated successfully. Nov 23 05:07:35 localhost podman[325821]: 2025-11-23 10:07:35.957788761 +0000 UTC m=+0.181775351 container remove 8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7ea64f-b6be-4aab-b701-9361e8b84557, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 05:07:35 localhost nova_compute[280939]: 2025-11-23 10:07:35.970 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:35 localhost kernel: device tap68ca76ce-ff left promiscuous mode Nov 23 05:07:35 localhost nova_compute[280939]: 2025-11-23 10:07:35.989 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:36.003 262301 INFO neutron.agent.dhcp.agent [None req-bbb86718-7484-462f-9b55-744325e25b4e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:07:36 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:36.027 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:07:36 localhost nova_compute[280939]: 2025-11-23 10:07:36.310 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' 
entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6", "format": "json"}]: dispatch Nov 23 05:07:36 localhost openstack_network_exporter[241732]: ERROR 10:07:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:07:36 localhost openstack_network_exporter[241732]: ERROR 10:07:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:07:36 localhost openstack_network_exporter[241732]: ERROR 10:07:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:07:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:36 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:07:36.721+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6' of type subvolume Nov 23 05:07:36 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6' of type subvolume Nov 23 05:07:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6", "force": true, "format": "json"}]: dispatch Nov 23 05:07:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6, vol_name:cephfs) < "" Nov 23 05:07:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6'' moved to trashcan Nov 23 05:07:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:07:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8dd48fc8-4cc9-428a-b704-1fe8f3b6e8e6, vol_name:cephfs) < "" Nov 23 05:07:36 localhost openstack_network_exporter[241732]: ERROR 10:07:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:07:36 localhost openstack_network_exporter[241732]: Nov 23 05:07:36 localhost openstack_network_exporter[241732]: ERROR 10:07:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:07:36 localhost openstack_network_exporter[241732]: Nov 23 05:07:36 localhost systemd[1]: var-lib-containers-storage-overlay-37f6bb4d34b376f40e27177a827c1a858138f93f43f3777bc464fa59d8721fdf-merged.mount: Deactivated successfully. Nov 23 05:07:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8551df7d86ed08ddbea343c645e6c4a5185bab74cce72a067417f8b167db7f5e-userdata-shm.mount: Deactivated successfully. 
Nov 23 05:07:36 localhost systemd[1]: run-netns-qdhcp\x2dde7ea64f\x2db6be\x2d4aab\x2db701\x2d9361e8b84557.mount: Deactivated successfully. Nov 23 05:07:36 localhost ovn_controller[153771]: 2025-11-23T10:07:36Z|00361|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 05:07:36 localhost ovn_controller[153771]: 2025-11-23T10:07:36Z|00362|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 05:07:36 localhost ovn_controller[153771]: 2025-11-23T10:07:36Z|00363|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 05:07:36 localhost nova_compute[280939]: 2025-11-23 10:07:36.850 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:36 localhost nova_compute[280939]: 2025-11-23 10:07:36.853 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:36 localhost nova_compute[280939]: 2025-11-23 10:07:36.871 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:37 localhost podman[325863]: 2025-11-23 10:07:37.003697898 +0000 UTC m=+0.059109541 container kill f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e78ff008-0a31-4b50-b91d-159757c7a740, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 23 05:07:37 localhost dnsmasq[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/addn_hosts - 0 addresses Nov 23 05:07:37 localhost dnsmasq-dhcp[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/host Nov 23 05:07:37 localhost dnsmasq-dhcp[325483]: read /var/lib/neutron/dhcp/e78ff008-0a31-4b50-b91d-159757c7a740/opts Nov 23 05:07:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:07:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:37 localhost nova_compute[280939]: 2025-11-23 10:07:37.062 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:07:37 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 23 05:07:37 localhost ceph-mon[293353]: log_channel(audit) 
log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:07:37 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:07:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:07:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:07:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:07:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:37 localhost ovn_controller[153771]: 2025-11-23T10:07:37Z|00364|binding|INFO|Releasing lport 50620687-1913-45f3-91da-4f90a2f74130 from this chassis (sb_readonly=0) Nov 23 05:07:37 localhost nova_compute[280939]: 2025-11-23 10:07:37.192 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:37 localhost ovn_controller[153771]: 2025-11-23T10:07:37Z|00365|binding|INFO|Setting lport 50620687-1913-45f3-91da-4f90a2f74130 down in Southbound Nov 23 05:07:37 localhost kernel: device tap50620687-19 left promiscuous mode Nov 23 05:07:37 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:37.204 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-e78ff008-0a31-4b50-b91d-159757c7a740', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e78ff008-0a31-4b50-b91d-159757c7a740', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '353e3a021df940c0838bf3dc0b3f265a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], 
mirror_rules=[], datapath=62148aa2-644a-479b-9738-8e67e725dbbb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=50620687-1913-45f3-91da-4f90a2f74130) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:07:37 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:37.206 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 50620687-1913-45f3-91da-4f90a2f74130 in datapath e78ff008-0a31-4b50-b91d-159757c7a740 unbound from our chassis#033[00m Nov 23 05:07:37 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:37.208 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e78ff008-0a31-4b50-b91d-159757c7a740, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:07:37 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:37.209 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[8308f324-4a65-41bc-a7fd-21616ddd78c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:07:37 localhost nova_compute[280939]: 2025-11-23 10:07:37.223 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:37 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:37 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:07:37 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:07:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 221 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 2.6 MiB/s wr, 63 op/s Nov 23 05:07:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:38 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dd86bcfb-cedc-449f-b13c-69c8336baf06", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:38 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dd86bcfb-cedc-449f-b13c-69c8336baf06, vol_name:cephfs) < "" Nov 23 05:07:38 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dd86bcfb-cedc-449f-b13c-69c8336baf06/.meta.tmp' Nov 23 05:07:38 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd86bcfb-cedc-449f-b13c-69c8336baf06/.meta.tmp' to config b'/volumes/_nogroup/dd86bcfb-cedc-449f-b13c-69c8336baf06/.meta' Nov 23 05:07:38 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, 
prefix:fs subvolume create, size:1073741824, sub_name:dd86bcfb-cedc-449f-b13c-69c8336baf06, vol_name:cephfs) < "" Nov 23 05:07:38 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dd86bcfb-cedc-449f-b13c-69c8336baf06", "format": "json"}]: dispatch Nov 23 05:07:38 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dd86bcfb-cedc-449f-b13c-69c8336baf06, vol_name:cephfs) < "" Nov 23 05:07:38 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dd86bcfb-cedc-449f-b13c-69c8336baf06, vol_name:cephfs) < "" Nov 23 05:07:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:07:38 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:07:38 localhost nova_compute[280939]: 2025-11-23 10:07:38.751 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 2.7 MiB/s wr, 131 op/s Nov 23 05:07:40 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:07:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:07:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:07:40 localhost dnsmasq[325483]: exiting on receipt of SIGTERM Nov 23 05:07:40 localhost systemd[1]: libpod-f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124.scope: Deactivated successfully. 
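
The "_cmd_fs_subvolume_create" / "_cmd_fs_subvolume_getpath" pairs above are mgr "volumes" module commands dispatched by the client; the JSON payloads in the audit lines are sent as-is. A rough sketch of issuing the same two commands from Python, assuming python-rados is available and a client.openstack keyring like the one in the log; the connection settings are assumptions, only the command payloads are copied from the entries above:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()

    def mgr(cmd):
        # mgr_command(cmd_json, inbuf) -> (retcode, output_bytes, status_string)
        ret, out, errs = cluster.mgr_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(f"{cmd['prefix']}: {errs}")
        return out.decode()

    # Same payload the audit log shows for the subvolume create ...
    mgr({"prefix": "fs subvolume create", "vol_name": "cephfs",
         "sub_name": "dd86bcfb-cedc-449f-b13c-69c8336baf06",
         "size": 1073741824, "namespace_isolated": True,
         "mode": "0755", "format": "json"})
    # ... and for getpath, which returns the "/volumes/_nogroup/<sub_name>/..." path.
    path = mgr({"prefix": "fs subvolume getpath", "vol_name": "cephfs",
                "sub_name": "dd86bcfb-cedc-449f-b13c-69c8336baf06", "format": "json"})
    cluster.shutdown()
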
Nov 23 05:07:40 localhost podman[325903]: 2025-11-23 10:07:40.31405845 +0000 UTC m=+0.063715124 container kill f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e78ff008-0a31-4b50-b91d-159757c7a740, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 05:07:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:07:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:07:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:40 localhost podman[325916]: 2025-11-23 10:07:40.383781127 +0000 UTC m=+0.059741721 container died f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e78ff008-0a31-4b50-b91d-159757c7a740, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:07:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124-userdata-shm.mount: Deactivated successfully. 
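
The "auth get-or-create client.alice" mon command above is what "fs subvolume authorize" boils down to: an mds cap pinned to the subvolume path, an osd cap pinned to the data pool plus the subvolume's RADOS namespace (it was created with namespace_isolated), and a read-only mon cap. Illustration only; the helper name is made up for this sketch, the values are copied from the log entry:

    def subvolume_caps(auth_id, path, pool, namespace, level="r"):
        """Build the cap list seen in the 'auth get-or-create' command above."""
        return {
            "prefix": "auth get-or-create",
            "entity": f"client.{auth_id}",
            "caps": [
                "mds", f"allow {level} path={path}",                         # confine to the subvolume dir
                "osd", f"allow {level} pool={pool} namespace={namespace}",   # and to its RADOS namespace
                "mon", "allow r",
            ],
            "format": "json",
        }

    cmd = subvolume_caps(
        "alice",
        "/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6",
        "manila_data",
        "fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4",
    )
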
Nov 23 05:07:40 localhost podman[325916]: 2025-11-23 10:07:40.4140623 +0000 UTC m=+0.090022854 container cleanup f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e78ff008-0a31-4b50-b91d-159757c7a740, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:07:40 localhost systemd[1]: libpod-conmon-f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124.scope: Deactivated successfully. Nov 23 05:07:40 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:40 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:40 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:40 localhost podman[325924]: 2025-11-23 10:07:40.49424752 +0000 UTC m=+0.150620051 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, 
com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Nov 23 05:07:40 localhost podman[325924]: 2025-11-23 10:07:40.537475261 +0000 UTC m=+0.193847772 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9) Nov 23 05:07:40 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 05:07:40 localhost podman[325923]: 2025-11-23 10:07:40.592467356 +0000 UTC m=+0.248490286 container remove f968c6075d0b38e72781ccba827757cf2a117918f8d219adf1daf1111bd53124 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e78ff008-0a31-4b50-b91d-159757c7a740, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:07:40 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:40.620 262301 INFO neutron.agent.dhcp.agent [None req-de18544a-1f6d-4f2e-ba05-c527d874e47e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:07:40 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:07:40.621 262301 INFO neutron.agent.dhcp.agent [None req-de18544a-1f6d-4f2e-ba05-c527d874e47e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:07:40 localhost nova_compute[280939]: 2025-11-23 10:07:40.812 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:41 localhost systemd[1]: var-lib-containers-storage-overlay-9c9bbead5474818129c24c620286f28511831bf83f9960d830417f795c5bea4a-merged.mount: Deactivated successfully. Nov 23 05:07:41 localhost systemd[1]: run-netns-qdhcp\x2de78ff008\x2d0a31\x2d4b50\x2db91d\x2d159757c7a740.mount: Deactivated successfully. 
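
The entries above record a full DHCP teardown for network e78ff008-0a31-4b50-b91d-159757c7a740: dnsmasq receives SIGTERM, its per-network container is killed, cleaned up and removed, the run-netns mount for qdhcp-<network_id> is deactivated, and the DHCP agent reports "Network not present, action: clean_devices". A small operator-side check for the same namespace, assuming iproute2 is installed; this is a manual verification sketch, not the agent's own code:

    import subprocess

    NET_ID = "e78ff008-0a31-4b50-b91d-159757c7a740"   # network id from the log
    ns = f"qdhcp-{NET_ID}"

    # After the teardown above, the qdhcp namespace should be gone from `ip netns list`.
    out = subprocess.run(["ip", "netns", "list"],
                         capture_output=True, text=True, check=True).stdout
    if any(line.split()[0] == ns for line in out.splitlines() if line.strip()):
        # Leftover namespace: remove it by hand (only if the agent really left it behind).
        subprocess.run(["ip", "netns", "delete", ns], check=True)
    else:
        print(ns, "already cleaned up")
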
Nov 23 05:07:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 2.6 MiB/s wr, 127 op/s Nov 23 05:07:41 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd86bcfb-cedc-449f-b13c-69c8336baf06", "format": "json"}]: dispatch Nov 23 05:07:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dd86bcfb-cedc-449f-b13c-69c8336baf06, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dd86bcfb-cedc-449f-b13c-69c8336baf06, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:41 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:07:41.763+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dd86bcfb-cedc-449f-b13c-69c8336baf06' of type subvolume Nov 23 05:07:41 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dd86bcfb-cedc-449f-b13c-69c8336baf06' of type subvolume Nov 23 05:07:41 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dd86bcfb-cedc-449f-b13c-69c8336baf06", "force": true, "format": "json"}]: dispatch Nov 23 05:07:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dd86bcfb-cedc-449f-b13c-69c8336baf06, vol_name:cephfs) < "" Nov 23 05:07:41 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dd86bcfb-cedc-449f-b13c-69c8336baf06'' moved to trashcan Nov 23 05:07:41 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:07:41 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dd86bcfb-cedc-449f-b13c-69c8336baf06, vol_name:cephfs) < "" Nov 23 05:07:42 localhost ovn_metadata_agent[159410]: 2025-11-23 10:07:42.031 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:07:42 localhost nova_compute[280939]: 2025-11-23 10:07:42.085 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e247 do_prune osdmap full prune enabled Nov 23 05:07:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e248 e248: 6 total, 6 up, 6 in Nov 23 05:07:43 localhost ceph-mon[293353]: 
log_channel(cluster) log [DBG] : osdmap e248: 6 total, 6 up, 6 in Nov 23 05:07:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 55 KiB/s wr, 61 op/s Nov 23 05:07:43 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:07:43 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 23 05:07:43 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:07:43 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:07:43 localhost nova_compute[280939]: 2025-11-23 10:07:43.793 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:43 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:43 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", 
"sub_name": "bcb84003-ea75-4d99-8d94-e5e6e07ace02", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta.tmp' Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta.tmp' to config b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta' Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:43 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bcb84003-ea75-4d99-8d94-e5e6e07ace02", "format": "json"}]: dispatch Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:07:43 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:07:44 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0e32d4cd-9733-47da-92f6-d332e549133b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:44 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0e32d4cd-9733-47da-92f6-d332e549133b/.meta.tmp' Nov 23 05:07:44 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0e32d4cd-9733-47da-92f6-d332e549133b/.meta.tmp' to config b'/volumes/_nogroup/0e32d4cd-9733-47da-92f6-d332e549133b/.meta' Nov 23 05:07:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:44 
localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0e32d4cd-9733-47da-92f6-d332e549133b", "format": "json"}]: dispatch Nov 23 05:07:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:07:44 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:07:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e248 do_prune osdmap full prune enabled Nov 23 05:07:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e249 e249: 6 total, 6 up, 6 in Nov 23 05:07:44 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e249: 6 total, 6 up, 6 in Nov 23 05:07:44 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:07:44 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:07:44 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:07:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 138 KiB/s wr, 87 op/s Nov 23 05:07:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:07:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:07:45 localhost systemd[1]: tmp-crun.KMUG6O.mount: Deactivated successfully. 
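
Each "fs subvolume getpath" in this stretch is followed by a client-issued "mon dump"; for a CephFS-native share that pair is typically what an export location is built from, i.e. the monitor address list joined to the subvolume path. A rough illustration under that assumption; the function name and the monitor addresses below are placeholders, only the subvolume name comes from the log:

    def export_location(mon_addrs, subvolume_path):
        # CephFS-native export string: "mon1:6789,mon2:6789:<subvolume path>"
        return ",".join(mon_addrs) + ":" + subvolume_path

    example = export_location(
        ["172.18.0.103:6789", "172.18.0.104:6789"],               # placeholder monitor addresses
        "/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02")  # subvolume path (trailing UUID dir from getpath omitted)
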
Nov 23 05:07:45 localhost podman[325965]: 2025-11-23 10:07:45.896059936 +0000 UTC m=+0.084156703 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute) Nov 23 05:07:45 localhost podman[325965]: 2025-11-23 10:07:45.909459879 +0000 UTC m=+0.097556636 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:07:45 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:07:46 localhost systemd[1]: tmp-crun.upUSuF.mount: Deactivated successfully. Nov 23 05:07:46 localhost podman[325966]: 2025-11-23 10:07:46.002363321 +0000 UTC m=+0.187402864 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:07:46 localhost podman[325966]: 2025-11-23 10:07:46.01434871 +0000 UTC m=+0.199388243 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:07:46 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
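
The health_status=healthy / exec_died / "Deactivated successfully" triplets above are one podman healthcheck cycle: a transient systemd unit runs the container's configured check (the '/openstack/healthcheck ...' test in config_data), logs the result as a container event, and exits. A minimal sketch of triggering one cycle by hand, assuming podman is on PATH; exact exit codes vary by podman version:

    import subprocess

    CONTAINER = "ceilometer_agent_compute"   # container name from the log; an ID works too

    # Run the container's configured healthcheck once, like the transient unit does.
    # Exit code 0 corresponds to the health_status=healthy event recorded above.
    res = subprocess.run(["podman", "healthcheck", "run", CONTAINER],
                         capture_output=True, text=True)
    print("healthy" if res.returncode == 0
          else f"unhealthy or error: {(res.stdout or res.stderr).strip()}")
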
Nov 23 05:07:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e249 do_prune osdmap full prune enabled Nov 23 05:07:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e250 e250: 6 total, 6 up, 6 in Nov 23 05:07:46 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e250: 6 total, 6 up, 6 in Nov 23 05:07:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:07:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:07:46 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:46 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice_bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:07:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:07:46 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:47 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:47 localhost podman[239764]: time="2025-11-23T10:07:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:07:47 localhost podman[239764]: @ - - [23/Nov/2025:10:07:47 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:07:47 localhost nova_compute[280939]: 2025-11-23 10:07:47.124 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:47 localhost podman[239764]: @ - - [23/Nov/2025:10:07:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18760 "" "Go-http-client/1.1" Nov 23 05:07:47 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bcb84003-ea75-4d99-8d94-e5e6e07ace02", "snap_name": "faa5e687-7f2b-4bd4-b60d-23c771ca94bd", "format": "json"}]: dispatch Nov 23 05:07:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:faa5e687-7f2b-4bd4-b60d-23c771ca94bd, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:faa5e687-7f2b-4bd4-b60d-23c771ca94bd, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:47 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0e32d4cd-9733-47da-92f6-d332e549133b", "snap_name": "87f14237-1545-404e-9124-daefa8c44022", "format": "json"}]: dispatch Nov 23 05:07:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:87f14237-1545-404e-9124-daefa8c44022, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:87f14237-1545-404e-9124-daefa8c44022, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:47 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:47 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:47 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 
177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 104 KiB/s wr, 26 op/s Nov 23 05:07:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:07:48 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:07:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:07:48 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:07:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:07:48 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:07:48 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 6e550e2b-6a54-4bd2-9456-7e355f0f56a0 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:07:48 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 6e550e2b-6a54-4bd2-9456-7e355f0f56a0 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:07:48 localhost ceph-mgr[286671]: [progress INFO root] Completed event 6e550e2b-6a54-4bd2-9456-7e355f0f56a0 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:07:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:07:48 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:07:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 23 05:07:48 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/41927298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 23 05:07:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 23 05:07:48 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/41927298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 23 05:07:48 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:07:48 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:07:48 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:07:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:07:48 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:07:48 localhost nova_compute[280939]: 2025-11-23 10:07:48.839 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 154 KiB/s wr, 74 op/s Nov 23 05:07:49 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:07:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:07:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:07:50 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 23 05:07:50 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:07:50 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:07:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:07:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:50 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:07:50 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:07:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:50 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:50 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:07:50 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:07:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bcb84003-ea75-4d99-8d94-e5e6e07ace02", "snap_name": "1d874a0b-7765-41f2-a385-f7185c843be2", "format": "json"}]: dispatch Nov 23 05:07:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1d874a0b-7765-41f2-a385-f7185c843be2, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1d874a0b-7765-41f2-a385-f7185c843be2, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0e32d4cd-9733-47da-92f6-d332e549133b", "snap_name": "87f14237-1545-404e-9124-daefa8c44022_0bb34153-e629-41c3-8e6d-cc12429b03ff", "force": true, "format": "json"}]: dispatch Nov 23 05:07:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:87f14237-1545-404e-9124-daefa8c44022_0bb34153-e629-41c3-8e6d-cc12429b03ff, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0e32d4cd-9733-47da-92f6-d332e549133b/.meta.tmp' Nov 23 05:07:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0e32d4cd-9733-47da-92f6-d332e549133b/.meta.tmp' to config b'/volumes/_nogroup/0e32d4cd-9733-47da-92f6-d332e549133b/.meta' Nov 23 05:07:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs 
subvolume snapshot rm, snap_name:87f14237-1545-404e-9124-daefa8c44022_0bb34153-e629-41c3-8e6d-cc12429b03ff, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0e32d4cd-9733-47da-92f6-d332e549133b", "snap_name": "87f14237-1545-404e-9124-daefa8c44022", "force": true, "format": "json"}]: dispatch Nov 23 05:07:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:87f14237-1545-404e-9124-daefa8c44022, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0e32d4cd-9733-47da-92f6-d332e549133b/.meta.tmp' Nov 23 05:07:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0e32d4cd-9733-47da-92f6-d332e549133b/.meta.tmp' to config b'/volumes/_nogroup/0e32d4cd-9733-47da-92f6-d332e549133b/.meta' Nov 23 05:07:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:87f14237-1545-404e-9124-daefa8c44022, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 122 KiB/s wr, 58 op/s Nov 23 05:07:52 localhost nova_compute[280939]: 2025-11-23 10:07:52.150 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e250 do_prune osdmap full prune enabled Nov 23 05:07:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e251 e251: 6 total, 6 up, 6 in Nov 23 05:07:53 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e251: 6 total, 6 up, 6 in Nov 23 05:07:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:07:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:07:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:07:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:07:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
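The audit trail above shows client.openstack (the Manila CephFS driver) driving the mgr volumes module through a snapshot lifecycle: an "fs subvolume snapshot create" followed by forced "fs subvolume snapshot rm" calls, each logged as a Starting/Finishing pair (the removals also rewrite the subvolume's .meta file). A minimal sketch of the same sequence driven from the ceph CLI, assuming a local `ceph` binary with suitable credentials; the volume, subvolume and snapshot names are copied from the log:

import subprocess

VOL = "cephfs"
SUB = "bcb84003-ea75-4d99-8d94-e5e6e07ace02"
SNAP = "1d874a0b-7765-41f2-a385-f7185c843be2"

def ceph(*args: str) -> None:
    # Thin wrapper: run one ceph CLI command, fail loudly on a non-zero exit.
    subprocess.run(["ceph", *args], check=True)

ceph("fs", "subvolume", "snapshot", "create", VOL, SUB, SNAP)         # "fs subvolume snapshot create"
ceph("fs", "subvolume", "snapshot", "rm", VOL, SUB, SNAP, "--force")  # "fs subvolume snapshot rm" with force:True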
Nov 23 05:07:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:07:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 44 KiB/s wr, 38 op/s Nov 23 05:07:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:07:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:07:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:53 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice_bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:07:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:07:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:07:53 localhost nova_compute[280939]: 2025-11-23 10:07:53.873 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : 
from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bcb84003-ea75-4d99-8d94-e5e6e07ace02", "snap_name": "1d874a0b-7765-41f2-a385-f7185c843be2_ee5d1a85-59c9-41ad-a62d-2d9f920dd3f4", "force": true, "format": "json"}]: dispatch Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1d874a0b-7765-41f2-a385-f7185c843be2_ee5d1a85-59c9-41ad-a62d-2d9f920dd3f4, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta.tmp' Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta.tmp' to config b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta' Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1d874a0b-7765-41f2-a385-f7185c843be2_ee5d1a85-59c9-41ad-a62d-2d9f920dd3f4, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bcb84003-ea75-4d99-8d94-e5e6e07ace02", "snap_name": "1d874a0b-7765-41f2-a385-f7185c843be2", "force": true, "format": "json"}]: dispatch Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1d874a0b-7765-41f2-a385-f7185c843be2, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta.tmp' Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta.tmp' to config b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta' Nov 23 05:07:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:07:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data 
namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1d874a0b-7765-41f2-a385-f7185c843be2, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4206c3a8-effc-409e-87ea-4b9d53523041", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4206c3a8-effc-409e-87ea-4b9d53523041/.meta.tmp' Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4206c3a8-effc-409e-87ea-4b9d53523041/.meta.tmp' to config b'/volumes/_nogroup/4206c3a8-effc-409e-87ea-4b9d53523041/.meta' Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:07:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4206c3a8-effc-409e-87ea-4b9d53523041", "format": "json"}]: dispatch Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:07:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:07:54 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:07:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0e32d4cd-9733-47da-92f6-d332e549133b", "format": "json"}]: dispatch Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0e32d4cd-9733-47da-92f6-d332e549133b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0e32d4cd-9733-47da-92f6-d332e549133b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:07:54 localhost 
ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:07:54.511+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e32d4cd-9733-47da-92f6-d332e549133b' of type subvolume Nov 23 05:07:54 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e32d4cd-9733-47da-92f6-d332e549133b' of type subvolume Nov 23 05:07:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0e32d4cd-9733-47da-92f6-d332e549133b", "force": true, "format": "json"}]: dispatch Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0e32d4cd-9733-47da-92f6-d332e549133b'' moved to trashcan Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:07:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e32d4cd-9733-47da-92f6-d332e549133b, vol_name:cephfs) < "" Nov 23 05:07:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 107 KiB/s wr, 42 op/s Nov 23 05:07:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:57 localhost nova_compute[280939]: 2025-11-23 10:07:57.186 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:07:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 23 05:07:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:07:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:07:57 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:07:57 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:07:57 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:07:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bcb84003-ea75-4d99-8d94-e5e6e07ace02", "snap_name": "faa5e687-7f2b-4bd4-b60d-23c771ca94bd_6277c15f-b0cf-47a5-ab1e-cf7a29aa510b", "force": true, "format": "json"}]: dispatch Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faa5e687-7f2b-4bd4-b60d-23c771ca94bd_6277c15f-b0cf-47a5-ab1e-cf7a29aa510b, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta.tmp' Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta.tmp' to config b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta' Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faa5e687-7f2b-4bd4-b60d-23c771ca94bd_6277c15f-b0cf-47a5-ab1e-cf7a29aa510b, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot 
rm", "vol_name": "cephfs", "sub_name": "bcb84003-ea75-4d99-8d94-e5e6e07ace02", "snap_name": "faa5e687-7f2b-4bd4-b60d-23c771ca94bd", "force": true, "format": "json"}]: dispatch Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faa5e687-7f2b-4bd4-b60d-23c771ca94bd, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta.tmp' Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta.tmp' to config b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02/.meta' Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:faa5e687-7f2b-4bd4-b60d-23c771ca94bd, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:07:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 96 KiB/s wr, 37 op/s Nov 23 05:07:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4206c3a8-effc-409e-87ea-4b9d53523041", "snap_name": "b0e1eebd-2fc1-48d8-874f-e40a8f23a682", "format": "json"}]: dispatch Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b0e1eebd-2fc1-48d8-874f-e40a8f23a682, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:07:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b0e1eebd-2fc1-48d8-874f-e40a8f23a682, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:07:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 05:07:57 localhost podman[326094]: 2025-11-23 10:07:57.907407859 +0000 UTC m=+0.090004433 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 23 05:07:57 localhost podman[326094]: 2025-11-23 10:07:57.941473679 +0000 UTC m=+0.124070233 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:07:57 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:07:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:07:58 localhost nova_compute[280939]: 2025-11-23 10:07:58.899 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:07:59 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e251 do_prune osdmap full prune enabled Nov 23 05:07:59 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e252 e252: 6 total, 6 up, 6 in Nov 23 05:07:59 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e252: 6 total, 6 up, 6 in Nov 23 05:07:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 143 KiB/s wr, 14 op/s Nov 23 05:08:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:08:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:08:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:08:00 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0. 
Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.368854) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64 Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480368892, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1271, "num_deletes": 254, "total_data_size": 1290353, "memory_usage": 1314368, "flush_reason": "Manual Compaction"} Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480379450, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 1268743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35067, "largest_seqno": 36337, "table_properties": {"data_size": 1262817, "index_size": 3076, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15367, "raw_average_key_size": 21, "raw_value_size": 1249959, "raw_average_value_size": 1775, "num_data_blocks": 133, "num_entries": 704, "num_filter_entries": 704, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892428, "oldest_key_time": 1763892428, "file_creation_time": 1763892480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}} Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 10649 microseconds, and 4644 cpu microseconds. Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
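The monitor's RocksDB store emits structured EVENT_LOG_v1 JSON for every flush and compaction, like the flush_started / table_file_creation events above and the compaction events that follow. A small sketch for pulling those payloads out of journal output, for example `journalctl --no-pager -u <mon unit> | python3 rocksdb_events.py` (the unit name and script name are placeholders):

import json
import sys

MARKER = "EVENT_LOG_v1 "

for line in sys.stdin:
    start = line.find(MARKER)
    if start == -1:
        continue
    try:
        event = json.loads(line[start + len(MARKER):])
    except json.JSONDecodeError:
        continue  # line was truncated or wrapped; skip it
    # One compact summary per event: type, job id, and file size when present.
    print(event.get("event"), "job", event.get("job"), event.get("file_size", ""))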
Nov 23 05:08:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.379501) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 1268743 bytes OK Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.379525) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.381763) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.381784) EVENT_LOG_v1 {"time_micros": 1763892480381778, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.381804) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1284106, prev total WAL file size 1284716, number of live WAL files 2. Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.383610) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. 
'7061786F73003133303533' seq:0, type:0; will stop at (end) Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1239KB)], [63(17MB)] Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480383654, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 20063096, "oldest_snapshot_seqno": -1} Nov 23 05:08:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 13679 keys, 18455266 bytes, temperature: kUnknown Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480467369, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 18455266, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18375926, "index_size": 44039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34245, "raw_key_size": 368964, "raw_average_key_size": 26, "raw_value_size": 18141761, "raw_average_value_size": 1326, "num_data_blocks": 1637, "num_entries": 13679, "num_filter_entries": 13679, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}} Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
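The "auth get-or-create" that just finished is an ordinary mon command; the entity and the three cap strings below are copied verbatim from the audit lines (note that this time the auth_id literally contains a space, "alice bob"). Reproducing it directly would look like the following, assuming an admin keyring on the node:

import subprocess

subprocess.run([
    "ceph", "auth", "get-or-create", "client.alice bob",
    "mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6",
    "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4",
    "mon", "allow r",
], check=True)  # each argv element is passed as-is, so the embedded space needs no shell quoting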
Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.467563) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 18455266 bytes Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.469633) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.5 rd, 220.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 17.9 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(30.4) write-amplify(14.5) OK, records in: 14214, records dropped: 535 output_compression: NoCompression Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.469652) EVENT_LOG_v1 {"time_micros": 1763892480469644, "job": 38, "event": "compaction_finished", "compaction_time_micros": 83772, "compaction_time_cpu_micros": 48714, "output_level": 6, "num_output_files": 1, "total_output_size": 18455266, "num_input_records": 14214, "num_output_records": 13679, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480469849, "job": 38, "event": "table_file_deletion", "file_number": 65} Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480471314, "job": 38, "event": "table_file_deletion", "file_number": 63} Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.383548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.471338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.471342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.471344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.471346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:08:00 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:08:00.471348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:08:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : 
from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bcb84003-ea75-4d99-8d94-e5e6e07ace02", "format": "json"}]: dispatch Nov 23 05:08:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:00.711+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bcb84003-ea75-4d99-8d94-e5e6e07ace02' of type subvolume Nov 23 05:08:00 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bcb84003-ea75-4d99-8d94-e5e6e07ace02' of type subvolume Nov 23 05:08:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bcb84003-ea75-4d99-8d94-e5e6e07ace02", "force": true, "format": "json"}]: dispatch Nov 23 05:08:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:08:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bcb84003-ea75-4d99-8d94-e5e6e07ace02'' moved to trashcan Nov 23 05:08:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bcb84003-ea75-4d99-8d94-e5e6e07ace02, vol_name:cephfs) < "" Nov 23 05:08:01 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:01 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 491 B/s rd, 138 KiB/s wr, 14 op/s Nov 23 05:08:01 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4206c3a8-effc-409e-87ea-4b9d53523041", "snap_name": 
"b0e1eebd-2fc1-48d8-874f-e40a8f23a682_a21d3a0e-bd0b-4f17-8e3f-05e089d29de3", "force": true, "format": "json"}]: dispatch Nov 23 05:08:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0e1eebd-2fc1-48d8-874f-e40a8f23a682_a21d3a0e-bd0b-4f17-8e3f-05e089d29de3, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:08:01 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4206c3a8-effc-409e-87ea-4b9d53523041/.meta.tmp' Nov 23 05:08:01 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4206c3a8-effc-409e-87ea-4b9d53523041/.meta.tmp' to config b'/volumes/_nogroup/4206c3a8-effc-409e-87ea-4b9d53523041/.meta' Nov 23 05:08:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0e1eebd-2fc1-48d8-874f-e40a8f23a682_a21d3a0e-bd0b-4f17-8e3f-05e089d29de3, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:08:01 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4206c3a8-effc-409e-87ea-4b9d53523041", "snap_name": "b0e1eebd-2fc1-48d8-874f-e40a8f23a682", "force": true, "format": "json"}]: dispatch Nov 23 05:08:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0e1eebd-2fc1-48d8-874f-e40a8f23a682, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:08:01 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4206c3a8-effc-409e-87ea-4b9d53523041/.meta.tmp' Nov 23 05:08:01 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4206c3a8-effc-409e-87ea-4b9d53523041/.meta.tmp' to config b'/volumes/_nogroup/4206c3a8-effc-409e-87ea-4b9d53523041/.meta' Nov 23 05:08:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b0e1eebd-2fc1-48d8-874f-e40a8f23a682, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:08:02 localhost nova_compute[280939]: 2025-11-23 10:08:02.228 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 115 KiB/s wr, 11 op/s Nov 23 05:08:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:08:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. 
Nov 23 05:08:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:08:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:08:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:08:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 23 05:08:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:08:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:08:03 localhost nova_compute[280939]: 2025-11-23 10:08:03.944 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:03 localhost podman[326114]: 2025-11-23 10:08:03.887057295 +0000 UTC m=+0.075656442 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd) Nov 23 05:08:03 localhost podman[326121]: 2025-11-23 10:08:03.952869832 +0000 UTC m=+0.121894466 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Nov 23 05:08:03 localhost podman[326114]: 2025-11-23 10:08:03.972809006 +0000 UTC m=+0.161408143 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:08:03 localhost ceph-mgr[286671]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:08:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:08:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:03 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:08:03 localhost podman[326121]: 2025-11-23 10:08:03.99725393 +0000 UTC m=+0.166278504 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118) Nov 23 05:08:04 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:08:04 localhost systemd[1]: tmp-crun.b1IlTU.mount: Deactivated successfully. 
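
The deauthorize flow logged above shows the mgr volumes module translating the "fs subvolume deauthorize" request from client.openstack into plain mon auth commands ("auth get" followed by "auth rm") before evicting the client. Below is a minimal python-rados sketch that issues the same JSON commands seen in the audit entries; the conffile path and client.admin identity are illustrative assumptions, while the command bodies (including the space-containing auth_id "alice bob") are copied verbatim from the log.

    import json
    import rados

    # The conffile and client name are assumptions for illustration,
    # not values taken from the log above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()

    def mon_cmd(cmd):
        # mon_command takes a JSON-encoded command and returns
        # (retcode, output bytes, status string).
        ret, out, status = cluster.mon_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(f"{cmd['prefix']} failed ({ret}): {status}")
        return out

    # The same commands the mgr dispatched while handling the deauthorize.
    mon_cmd({"prefix": "auth get", "entity": "client.alice bob", "format": "json"})
    mon_cmd({"prefix": "auth rm", "entity": "client.alice bob"})

    cluster.shutdown()
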
Nov 23 05:08:04 localhost podman[326115]: 2025-11-23 10:08:04.089571763 +0000 UTC m=+0.270732440 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 05:08:04 localhost podman[326115]: 2025-11-23 10:08:04.100332334 +0000 UTC m=+0.281493051 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:08:04 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
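
The node_exporter container above publishes host metrics on port 9100 (per its 'ports' entry) in addition to being health-checked like the other containers. A small Python check that fetches the metrics page and counts the non-comment samples, assuming the exporter is reachable on localhost:9100; the URL is otherwise an assumption for illustration.

    import urllib.request

    # Port 9100 comes from the container's 'ports' entry above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    # Exposition-format lines starting with '#' are HELP/TYPE comments.
    samples = [line for line in body.splitlines()
               if line and not line.startswith("#")]
    print(f"node_exporter exposed {len(samples)} samples")
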
Nov 23 05:08:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:08:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:08:05 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4206c3a8-effc-409e-87ea-4b9d53523041", "format": "json"}]: dispatch Nov 23 05:08:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4206c3a8-effc-409e-87ea-4b9d53523041, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4206c3a8-effc-409e-87ea-4b9d53523041, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:05 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:05.105+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4206c3a8-effc-409e-87ea-4b9d53523041' of type subvolume Nov 23 05:08:05 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4206c3a8-effc-409e-87ea-4b9d53523041' of type subvolume Nov 23 05:08:05 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4206c3a8-effc-409e-87ea-4b9d53523041", "force": true, "format": "json"}]: dispatch Nov 23 05:08:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:08:05 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4206c3a8-effc-409e-87ea-4b9d53523041'' moved to trashcan Nov 23 05:08:05 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4206c3a8-effc-409e-87ea-4b9d53523041, vol_name:cephfs) < "" Nov 23 05:08:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 118 KiB/s wr, 11 op/s Nov 23 05:08:06 localhost openstack_network_exporter[241732]: ERROR 10:08:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:08:06 localhost openstack_network_exporter[241732]: ERROR 10:08:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:08:06 localhost openstack_network_exporter[241732]: ERROR 10:08:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 
05:08:06 localhost openstack_network_exporter[241732]: ERROR 10:08:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:08:06 localhost openstack_network_exporter[241732]: Nov 23 05:08:06 localhost openstack_network_exporter[241732]: ERROR 10:08:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:08:06 localhost openstack_network_exporter[241732]: Nov 23 05:08:07 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:08:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:08:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:07 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:08:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:07 localhost nova_compute[280939]: 2025-11-23 10:08:07.229 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:07 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:07 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:07 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 118 KiB/s wr, 11 op/s Nov 23 05:08:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e252 do_prune osdmap full prune enabled Nov 23 05:08:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e253 e253: 6 total, 6 up, 6 in Nov 23 05:08:08 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e253: 6 total, 6 up, 6 in Nov 23 05:08:08 localhost nova_compute[280939]: 2025-11-23 10:08:08.945 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e253 do_prune osdmap full prune enabled Nov 23 05:08:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e254 e254: 6 total, 6 up, 6 in Nov 23 05:08:09 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e254: 6 total, 6 up, 6 in Nov 23 05:08:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 143 KiB/s wr, 14 op/s Nov 23 05:08:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:08:09.748 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:08:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:08:09.749 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:08:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:08:09.749 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 
0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:08:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cd85cd26-79d5-413c-8071-7faf10d1a803", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd85cd26-79d5-413c-8071-7faf10d1a803, vol_name:cephfs) < "" Nov 23 05:08:09 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cd85cd26-79d5-413c-8071-7faf10d1a803/.meta.tmp' Nov 23 05:08:09 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cd85cd26-79d5-413c-8071-7faf10d1a803/.meta.tmp' to config b'/volumes/_nogroup/cd85cd26-79d5-413c-8071-7faf10d1a803/.meta' Nov 23 05:08:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd85cd26-79d5-413c-8071-7faf10d1a803, vol_name:cephfs) < "" Nov 23 05:08:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cd85cd26-79d5-413c-8071-7faf10d1a803", "format": "json"}]: dispatch Nov 23 05:08:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd85cd26-79d5-413c-8071-7faf10d1a803, vol_name:cephfs) < "" Nov 23 05:08:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd85cd26-79d5-413c-8071-7faf10d1a803, vol_name:cephfs) < "" Nov 23 05:08:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:09 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b2b184f0-0dcc-43ed-9860-79f9073a2732", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b2b184f0-0dcc-43ed-9860-79f9073a2732/.meta.tmp' Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b2b184f0-0dcc-43ed-9860-79f9073a2732/.meta.tmp' to config b'/volumes/_nogroup/b2b184f0-0dcc-43ed-9860-79f9073a2732/.meta' Nov 23 05:08:10 
localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b2b184f0-0dcc-43ed-9860-79f9073a2732", "format": "json"}]: dispatch Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:10 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:08:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 23 05:08:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:08:10 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs 
subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:08:10 localhost podman[326179]: 2025-11-23 10:08:10.888877156 +0000 UTC m=+0.075448305 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6) Nov 23 05:08:10 localhost podman[326179]: 2025-11-23 10:08:10.904402334 +0000 UTC m=+0.090973443 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, managed_by=edpm_ansible) Nov 23 05:08:10 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
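
Unlike the auth calls, the "fs subvolume create" and "fs subvolume getpath" requests logged above are handled by the mgr volumes module rather than by the mon. The sketch below drives that interface directly, assuming the installed python3-rados exposes the mgr_command binding; the conffile and client name are illustrative, while the command arguments are the ones from the audited create/getpath requests.

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()

    def mgr_cmd(cmd):
        # mgr_command mirrors mon_command but targets the active mgr,
        # which is where the volumes-module commands are dispatched.
        ret, out, status = cluster.mgr_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(f"{cmd['prefix']} failed ({ret}): {status}")
        return out

    # Same arguments as the audited request above (names are the log's own).
    mgr_cmd({"prefix": "fs subvolume create", "vol_name": "cephfs",
             "sub_name": "cd85cd26-79d5-413c-8071-7faf10d1a803",
             "size": 1073741824, "namespace_isolated": True,
             "mode": "0755", "format": "json"})
    path = mgr_cmd({"prefix": "fs subvolume getpath", "vol_name": "cephfs",
                    "sub_name": "cd85cd26-79d5-413c-8071-7faf10d1a803",
                    "format": "json"})
    print(path.decode().strip())

    cluster.shutdown()
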
Nov 23 05:08:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 143 KiB/s wr, 14 op/s Nov 23 05:08:11 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:11 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:08:11 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:08:11 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ad2078c1-9958-4c4f-9787-6f0cb9391b42", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ad2078c1-9958-4c4f-9787-6f0cb9391b42, vol_name:cephfs) < "" Nov 23 05:08:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ad2078c1-9958-4c4f-9787-6f0cb9391b42/.meta.tmp' Nov 23 05:08:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ad2078c1-9958-4c4f-9787-6f0cb9391b42/.meta.tmp' to config b'/volumes/_nogroup/ad2078c1-9958-4c4f-9787-6f0cb9391b42/.meta' Nov 23 05:08:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ad2078c1-9958-4c4f-9787-6f0cb9391b42, vol_name:cephfs) < "" Nov 23 05:08:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ad2078c1-9958-4c4f-9787-6f0cb9391b42", "format": "json"}]: dispatch Nov 23 05:08:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ad2078c1-9958-4c4f-9787-6f0cb9391b42, vol_name:cephfs) < "" Nov 23 05:08:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ad2078c1-9958-4c4f-9787-6f0cb9391b42, vol_name:cephfs) < "" Nov 23 05:08:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:12 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:12 localhost nova_compute[280939]: 2025-11-23 10:08:12.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:12 localhost nova_compute[280939]: 2025-11-23 
10:08:12.264 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:13 localhost nova_compute[280939]: 2025-11-23 10:08:13.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 63 KiB/s wr, 5 op/s Nov 23 05:08:13 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b2b184f0-0dcc-43ed-9860-79f9073a2732", "snap_name": "28900e81-a351-45cb-9e4a-bbc830e56ec4", "format": "json"}]: dispatch Nov 23 05:08:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:28900e81-a351-45cb-9e4a-bbc830e56ec4, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:28900e81-a351-45cb-9e4a-bbc830e56ec4, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:13 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:08:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:08:13 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:13 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:08:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:13 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' 
entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:13 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:13 localhost ovn_controller[153771]: 2025-11-23T10:08:13Z|00366|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory Nov 23 05:08:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:13 localhost nova_compute[280939]: 2025-11-23 10:08:13.975 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:14 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cd85cd26-79d5-413c-8071-7faf10d1a803", "format": "json"}]: dispatch Nov 23 05:08:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cd85cd26-79d5-413c-8071-7faf10d1a803, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cd85cd26-79d5-413c-8071-7faf10d1a803, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:14.094+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd85cd26-79d5-413c-8071-7faf10d1a803' of type subvolume Nov 23 05:08:14 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd85cd26-79d5-413c-8071-7faf10d1a803' of type subvolume Nov 23 05:08:14 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cd85cd26-79d5-413c-8071-7faf10d1a803", "force": true, "format": "json"}]: dispatch Nov 23 05:08:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd85cd26-79d5-413c-8071-7faf10d1a803, vol_name:cephfs) < "" Nov 23 05:08:14 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cd85cd26-79d5-413c-8071-7faf10d1a803'' moved to trashcan Nov 23 05:08:14 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:14 localhost 
ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd85cd26-79d5-413c-8071-7faf10d1a803, vol_name:cephfs) < "" Nov 23 05:08:14 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:14 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:14 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:15 localhost nova_compute[280939]: 2025-11-23 10:08:15.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:15 localhost nova_compute[280939]: 2025-11-23 10:08:15.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:08:15 localhost nova_compute[280939]: 2025-11-23 10:08:15.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:08:15 localhost nova_compute[280939]: 2025-11-23 10:08:15.151 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:08:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0e4b9e6e-03a5-4c23-a27b-85faef83156b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e4b9e6e-03a5-4c23-a27b-85faef83156b, vol_name:cephfs) < "" Nov 23 05:08:15 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0e4b9e6e-03a5-4c23-a27b-85faef83156b/.meta.tmp' Nov 23 05:08:15 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0e4b9e6e-03a5-4c23-a27b-85faef83156b/.meta.tmp' to config b'/volumes/_nogroup/0e4b9e6e-03a5-4c23-a27b-85faef83156b/.meta' Nov 23 05:08:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e4b9e6e-03a5-4c23-a27b-85faef83156b, vol_name:cephfs) < "" Nov 23 05:08:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0e4b9e6e-03a5-4c23-a27b-85faef83156b", "format": "json"}]: dispatch Nov 23 05:08:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e4b9e6e-03a5-4c23-a27b-85faef83156b, vol_name:cephfs) < "" Nov 23 05:08:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e4b9e6e-03a5-4c23-a27b-85faef83156b, vol_name:cephfs) < "" Nov 23 05:08:15 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:15 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 163 KiB/s wr, 13 op/s Nov 23 05:08:16 localhost nova_compute[280939]: 2025-11-23 10:08:16.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:16 localhost nova_compute[280939]: 2025-11-23 10:08:16.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 23 05:08:16 localhost nova_compute[280939]: 2025-11-23 10:08:16.149 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] There are 0 instances to clean _run_pending_deletes 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 23 05:08:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:08:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:08:16 localhost podman[326198]: 2025-11-23 10:08:16.901308982 +0000 UTC m=+0.086749883 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0) Nov 23 05:08:16 localhost podman[326198]: 2025-11-23 10:08:16.914342564 +0000 UTC m=+0.099783455 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 23 05:08:16 localhost podman[326199]: 2025-11-23 10:08:16.945800723 +0000 UTC m=+0.127347195 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 05:08:16 localhost podman[326199]: 2025-11-23 10:08:16.957599836 +0000 UTC m=+0.139146308 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 05:08:16 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 05:08:16 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. 
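
The podman_exporter above reaches the podman service over the API socket at /run/podman/podman.sock (see its CONTAINER_HOST and volume entries), and the access-log lines that follow show the libpod list-containers endpoint being polled. A small sketch of the same query over the Unix socket, assuming the caller has read access to the socket; the Names/State fields printed at the end are the usual libpod listing fields rather than values taken from this log.

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that connects to a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")  # host only used for the Host: header
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint that appears in the libpod access-log entries below.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for ctr in json.loads(resp.read()):
        print(ctr.get("Names"), ctr.get("State"))
    conn.close()
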
Nov 23 05:08:17 localhost podman[239764]: time="2025-11-23T10:08:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:08:17 localhost podman[239764]: @ - - [23/Nov/2025:10:08:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:08:17 localhost podman[239764]: @ - - [23/Nov/2025:10:08:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18765 "" "Go-http-client/1.1" Nov 23 05:08:17 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:17 localhost nova_compute[280939]: 2025-11-23 10:08:17.149 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:08:17 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 23 05:08:17 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:08:17 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:17 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined 
all Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:17 localhost nova_compute[280939]: 2025-11-23 10:08:17.304 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:17 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:17 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:08:17 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:08:17 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b2b184f0-0dcc-43ed-9860-79f9073a2732", "snap_name": "28900e81-a351-45cb-9e4a-bbc830e56ec4_6f083e24-ed97-436a-a5aa-b0c285833d8a", "force": true, "format": "json"}]: dispatch Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28900e81-a351-45cb-9e4a-bbc830e56ec4_6f083e24-ed97-436a-a5aa-b0c285833d8a, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b2b184f0-0dcc-43ed-9860-79f9073a2732/.meta.tmp' Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b2b184f0-0dcc-43ed-9860-79f9073a2732/.meta.tmp' to config b'/volumes/_nogroup/b2b184f0-0dcc-43ed-9860-79f9073a2732/.meta' Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28900e81-a351-45cb-9e4a-bbc830e56ec4_6f083e24-ed97-436a-a5aa-b0c285833d8a, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:17 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b2b184f0-0dcc-43ed-9860-79f9073a2732", "snap_name": "28900e81-a351-45cb-9e4a-bbc830e56ec4", "force": true, "format": "json"}]: dispatch Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28900e81-a351-45cb-9e4a-bbc830e56ec4, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b2b184f0-0dcc-43ed-9860-79f9073a2732/.meta.tmp' Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b2b184f0-0dcc-43ed-9860-79f9073a2732/.meta.tmp' to config 
b'/volumes/_nogroup/b2b184f0-0dcc-43ed-9860-79f9073a2732/.meta' Nov 23 05:08:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28900e81-a351-45cb-9e4a-bbc830e56ec4, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 439 B/s rd, 140 KiB/s wr, 11 op/s Nov 23 05:08:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e254 do_prune osdmap full prune enabled Nov 23 05:08:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0e4b9e6e-03a5-4c23-a27b-85faef83156b", "format": "json"}]: dispatch Nov 23 05:08:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0e4b9e6e-03a5-4c23-a27b-85faef83156b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0e4b9e6e-03a5-4c23-a27b-85faef83156b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:18 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:18.933+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e4b9e6e-03a5-4c23-a27b-85faef83156b' of type subvolume Nov 23 05:08:18 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e4b9e6e-03a5-4c23-a27b-85faef83156b' of type subvolume Nov 23 05:08:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e255 e255: 6 total, 6 up, 6 in Nov 23 05:08:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0e4b9e6e-03a5-4c23-a27b-85faef83156b", "force": true, "format": "json"}]: dispatch Nov 23 05:08:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e4b9e6e-03a5-4c23-a27b-85faef83156b, vol_name:cephfs) < "" Nov 23 05:08:18 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e255: 6 total, 6 up, 6 in Nov 23 05:08:18 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0e4b9e6e-03a5-4c23-a27b-85faef83156b'' moved to trashcan Nov 23 05:08:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e4b9e6e-03a5-4c23-a27b-85faef83156b, vol_name:cephfs) < "" Nov 23 05:08:19 localhost nova_compute[280939]: 2025-11-23 10:08:19.002 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:19 localhost 
ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 142 KiB/s wr, 11 op/s Nov 23 05:08:20 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:08:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:08:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:08:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:20 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b2b184f0-0dcc-43ed-9860-79f9073a2732", "format": "json"}]: dispatch Nov 23 05:08:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, format:json, prefix:fs clone status, 
vol_name:cephfs) < "" Nov 23 05:08:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:20.767+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b2b184f0-0dcc-43ed-9860-79f9073a2732' of type subvolume Nov 23 05:08:20 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b2b184f0-0dcc-43ed-9860-79f9073a2732' of type subvolume Nov 23 05:08:20 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b2b184f0-0dcc-43ed-9860-79f9073a2732", "force": true, "format": "json"}]: dispatch Nov 23 05:08:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:20 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b2b184f0-0dcc-43ed-9860-79f9073a2732'' moved to trashcan Nov 23 05:08:20 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b2b184f0-0dcc-43ed-9860-79f9073a2732, vol_name:cephfs) < "" Nov 23 05:08:20 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:20 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:20 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:21 localhost nova_compute[280939]: 2025-11-23 10:08:21.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:21 localhost nova_compute[280939]: 2025-11-23 10:08:21.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:21 
localhost nova_compute[280939]: 2025-11-23 10:08:21.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:08:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 142 KiB/s wr, 11 op/s Nov 23 05:08:22 localhost nova_compute[280939]: 2025-11-23 10:08:22.134 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:22 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ad2078c1-9958-4c4f-9787-6f0cb9391b42", "format": "json"}]: dispatch Nov 23 05:08:22 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ad2078c1-9958-4c4f-9787-6f0cb9391b42, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:22 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ad2078c1-9958-4c4f-9787-6f0cb9391b42, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:22 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:22.156+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ad2078c1-9958-4c4f-9787-6f0cb9391b42' of type subvolume Nov 23 05:08:22 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ad2078c1-9958-4c4f-9787-6f0cb9391b42' of type subvolume Nov 23 05:08:22 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ad2078c1-9958-4c4f-9787-6f0cb9391b42", "force": true, "format": "json"}]: dispatch Nov 23 05:08:22 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ad2078c1-9958-4c4f-9787-6f0cb9391b42, vol_name:cephfs) < "" Nov 23 05:08:22 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ad2078c1-9958-4c4f-9787-6f0cb9391b42'' moved to trashcan Nov 23 05:08:22 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:22 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ad2078c1-9958-4c4f-9787-6f0cb9391b42, vol_name:cephfs) < "" Nov 23 05:08:22 localhost nova_compute[280939]: 2025-11-23 10:08:22.350 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan 
auto_2025-11-23_10:08:23 Nov 23 05:08:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:08:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:08:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['manila_data', 'volumes', 'images', 'vms', 'backups', '.mgr', 'manila_metadata'] Nov 23 05:08:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:08:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:08:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:08:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:08:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:08:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:08:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:08:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 142 KiB/s wr, 11 op/s Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.1810441094360693e-06 of space, bias 1.0, pg target 0.00043402777777777775 quantized to 32 (current 32) Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:08:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0009550246894193188 of space, bias 4.0, pg target 0.7601996527777778 quantized to 16 (current 16) Nov 23 
05:08:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:08:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:08:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:08:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:08:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:08:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:08:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:08:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:08:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:08:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:08:24 localhost nova_compute[280939]: 2025-11-23 10:08:24.027 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:24 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:08:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:08:24 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 23 05:08:24 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:08:24 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:08:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:24 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:08:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 
23 05:08:24 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:08:24 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e255 do_prune osdmap full prune enabled Nov 23 05:08:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e256 e256: 6 total, 6 up, 6 in Nov 23 05:08:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e256: 6 total, 6 up, 6 in Nov 23 05:08:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:08:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:08:25 localhost nova_compute[280939]: 2025-11-23 10:08:25.139 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:25 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/.meta.tmp' Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/.meta.tmp' to config b'/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/.meta' Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:08:25 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "format": "json"}]: dispatch Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, 
prefix:fs subvolume getpath, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:08:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:25 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:25 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7bed2221-edb9-43df-b343-216aa2aa2b37", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7bed2221-edb9-43df-b343-216aa2aa2b37, vol_name:cephfs) < "" Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7bed2221-edb9-43df-b343-216aa2aa2b37/.meta.tmp' Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7bed2221-edb9-43df-b343-216aa2aa2b37/.meta.tmp' to config b'/volumes/_nogroup/7bed2221-edb9-43df-b343-216aa2aa2b37/.meta' Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7bed2221-edb9-43df-b343-216aa2aa2b37, vol_name:cephfs) < "" Nov 23 05:08:25 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7bed2221-edb9-43df-b343-216aa2aa2b37", "format": "json"}]: dispatch Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7bed2221-edb9-43df-b343-216aa2aa2b37, vol_name:cephfs) < "" Nov 23 05:08:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7bed2221-edb9-43df-b343-216aa2aa2b37, vol_name:cephfs) < "" Nov 23 05:08:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:25 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 163 KiB/s wr, 14 op/s Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.154 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.155 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.155 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.155 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.156 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:08:26 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:08:26 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2840074883' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.611 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.802 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.804 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11438MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.804 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.805 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.866 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:08:26 localhost nova_compute[280939]: 2025-11-23 10:08:26.866 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:08:27 localhost nova_compute[280939]: 2025-11-23 10:08:27.084 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:08:27 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:08:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:08:27 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:08:27 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice_bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:08:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:08:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:27 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:27 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': 
finished Nov 23 05:08:27 localhost nova_compute[280939]: 2025-11-23 10:08:27.391 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.0 KiB/s rd, 151 KiB/s wr, 13 op/s Nov 23 05:08:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:08:27 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3185027811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:08:27 localhost nova_compute[280939]: 2025-11-23 10:08:27.588 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:08:27 localhost nova_compute[280939]: 2025-11-23 10:08:27.593 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:08:27 localhost nova_compute[280939]: 2025-11-23 10:08:27.607 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:08:27 localhost nova_compute[280939]: 2025-11-23 10:08:27.624 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:08:27 localhost nova_compute[280939]: 2025-11-23 10:08:27.624 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:08:28 localhost nova_compute[280939]: 2025-11-23 10:08:28.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:08:28 localhost nova_compute[280939]: 2025-11-23 10:08:28.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 23 05:08:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:28 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:28 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "11d20cc2-2db4-4032-9422-8ec84b489c39", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:11d20cc2-2db4-4032-9422-8ec84b489c39, vol_name:cephfs) < "" Nov 23 05:08:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
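The audit trail above repeats one pattern for every share-access change: client.openstack asks the mgr volumes plugin for `fs subvolume authorize`/`deauthorize`, and the plugin turns that into mon-level `auth get-or-create`/`auth rm` commands whose caps are scoped to the subvolume path and its `fsvolumens_*` RADOS namespace (see the entries for client.alice and client.alice_bob). Below is a minimal sketch of the same round trip through the ceph CLI; the helper function, the use of the client.openstack keyring, and the lack of error handling are illustrative assumptions, while the volume, subvolume, and auth_id names are taken from the log.

```python
import subprocess

VOL = "cephfs"
SUB = "c01bfd1f-c2a4-4505-b9fb-5784100d38f4"   # subvolume name from the log
AUTH_ID = "alice"                              # cephx id from the log

def ceph(*args):
    """Run the ceph CLI as client.openstack, as the services in this log do."""
    return subprocess.run(
        ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf", *args],
        check=True, capture_output=True, text=True,
    ).stdout

# Grant read-only access. This is what produces the mon-side
# "auth get-or-create client.alice" entry with mds/osd/mon caps restricted
# to the subvolume path and its fsvolumens_* namespace.
key = ceph("fs", "subvolume", "authorize", VOL, SUB, AUTH_ID,
           "--access_level", "r")
print("cephx key for client.%s: %s" % (AUTH_ID, key.strip()))

# Revoke access again: deauthorize drops the auth entity ("auth rm" in the
# mon audit log) and evict disconnects any client still mounted with that id.
ceph("fs", "subvolume", "deauthorize", VOL, SUB, AUTH_ID)
ceph("fs", "subvolume", "evict", VOL, SUB, AUTH_ID)
```

The evict step matters because removing the auth entity alone does not disconnect clients that already hold sessions; the log accordingly shows the plugin issuing deauthorize and evict as a pair, with the evict matching on auth_name and the client_metadata.root path of the subvolume.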
Nov 23 05:08:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/11d20cc2-2db4-4032-9422-8ec84b489c39/.meta.tmp' Nov 23 05:08:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/11d20cc2-2db4-4032-9422-8ec84b489c39/.meta.tmp' to config b'/volumes/_nogroup/11d20cc2-2db4-4032-9422-8ec84b489c39/.meta' Nov 23 05:08:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:11d20cc2-2db4-4032-9422-8ec84b489c39, vol_name:cephfs) < "" Nov 23 05:08:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "11d20cc2-2db4-4032-9422-8ec84b489c39", "format": "json"}]: dispatch Nov 23 05:08:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:11d20cc2-2db4-4032-9422-8ec84b489c39, vol_name:cephfs) < "" Nov 23 05:08:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:11d20cc2-2db4-4032-9422-8ec84b489c39, vol_name:cephfs) < "" Nov 23 05:08:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:28 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:28 localhost systemd[1]: tmp-crun.ZDlfSr.mount: Deactivated successfully. 
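Each new share in this run follows the same two commands just recorded for subvolume 11d20cc2-2db4-4032-9422-8ec84b489c39: a 1 GiB, namespace-isolated `fs subvolume create` (after which the plugin writes and renames the subvolume's `.meta` file), then `fs subvolume getpath` to obtain the export path. The sketch below reproduces that pair with the same size, mode, and isolation parameters as the audit entries; the client id, the UUID naming, and the cleanup step are assumptions for illustration, not the exact code path of the service (apparently Manila's CephFS driver) issuing these commands.

```python
import subprocess
import uuid

VOL = "cephfs"
SUB = str(uuid.uuid4())   # the subvolume names in this log are UUIDs

def ceph(*args):
    """Run the ceph CLI as client.openstack, as the services in this log do."""
    return subprocess.run(
        ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf", *args],
        check=True, capture_output=True, text=True,
    ).stdout

# Same parameters as the audit entries: 1 GiB quota, isolated RADOS
# namespace, mode 0755.
ceph("fs", "subvolume", "create", VOL, SUB,
     "--size", str(1024 ** 3), "--namespace-isolated", "--mode", "0755")

# The export path is not derivable from the name alone (the data directory
# is a UUID one level below the subvolume root), so getpath is queried next.
path = ceph("fs", "subvolume", "getpath", VOL, SUB).strip()
print("export path:", path)

# Force removal, matching the "fs subvolume rm ... force: true" entries.
ceph("fs", "subvolume", "rm", VOL, SUB, "--force")
```

The getpath call is needed because the mountable directory sits one level below the subvolume root, e.g. /volumes/_nogroup/&lt;subvolume&gt;/&lt;uuid&gt;, as the evict and "moved to trashcan" messages earlier in the log illustrate.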
Nov 23 05:08:28 localhost podman[326286]: 2025-11-23 10:08:28.907229279 +0000 UTC m=+0.087792576 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2) Nov 23 05:08:28 localhost podman[326286]: 2025-11-23 10:08:28.916448143 +0000 UTC m=+0.097011480 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 05:08:28 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:08:29 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7bed2221-edb9-43df-b343-216aa2aa2b37", "format": "json"}]: dispatch Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7bed2221-edb9-43df-b343-216aa2aa2b37, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7bed2221-edb9-43df-b343-216aa2aa2b37, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:29 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7bed2221-edb9-43df-b343-216aa2aa2b37' of type subvolume Nov 23 05:08:29 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:29.019+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7bed2221-edb9-43df-b343-216aa2aa2b37' of type subvolume Nov 23 05:08:29 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7bed2221-edb9-43df-b343-216aa2aa2b37", "force": true, "format": "json"}]: dispatch Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7bed2221-edb9-43df-b343-216aa2aa2b37, vol_name:cephfs) < "" Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7bed2221-edb9-43df-b343-216aa2aa2b37'' moved to trashcan Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7bed2221-edb9-43df-b343-216aa2aa2b37, vol_name:cephfs) < "" Nov 23 05:08:29 localhost nova_compute[280939]: 2025-11-23 10:08:29.069 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:29 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f99c71c2-29eb-4c61-ab24-07800f6e4b24", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, vol_name:cephfs) < "" Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 
bytes to config b'/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/.meta.tmp' Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/.meta.tmp' to config b'/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/.meta' Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, vol_name:cephfs) < "" Nov 23 05:08:29 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f99c71c2-29eb-4c61-ab24-07800f6e4b24", "format": "json"}]: dispatch Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, vol_name:cephfs) < "" Nov 23 05:08:29 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, vol_name:cephfs) < "" Nov 23 05:08:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:29 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 146 KiB/s wr, 13 op/s Nov 23 05:08:30 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:08:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:08:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:08:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 23 05:08:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:08:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:08:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:30 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:08:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:30 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:08:30 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:08:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:08:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:08:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 146 KiB/s wr, 13 op/s Nov 23 05:08:32 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f99c71c2-29eb-4c61-ab24-07800f6e4b24", "auth_id": "tempest-cephx-id-1431575460", "tenant_id": "4d4586d45fe545c4bf78ea7720359b10", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:08:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:08:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:08:32 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:32 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID tempest-cephx-id-1431575460 with tenant 4d4586d45fe545c4bf78ea7720359b10 Nov 23 05:08:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth 
get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/458725ea-f0f9-482a-b03b-0e03f2cd7019", "osd", "allow rw pool=manila_data namespace=fsvolumens_f99c71c2-29eb-4c61-ab24-07800f6e4b24", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:32 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/458725ea-f0f9-482a-b03b-0e03f2cd7019", "osd", "allow rw pool=manila_data namespace=fsvolumens_f99c71c2-29eb-4c61-ab24-07800f6e4b24", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:32 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/458725ea-f0f9-482a-b03b-0e03f2cd7019", "osd", "allow rw pool=manila_data namespace=fsvolumens_f99c71c2-29eb-4c61-ab24-07800f6e4b24", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:32 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:32 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/458725ea-f0f9-482a-b03b-0e03f2cd7019", "osd", "allow rw pool=manila_data namespace=fsvolumens_f99c71c2-29eb-4c61-ab24-07800f6e4b24", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:32 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/458725ea-f0f9-482a-b03b-0e03f2cd7019", "osd", "allow rw pool=manila_data namespace=fsvolumens_f99c71c2-29eb-4c61-ab24-07800f6e4b24", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:32 localhost nova_compute[280939]: 2025-11-23 10:08:32.427 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:08:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e256 do_prune osdmap full prune enabled Nov 23 05:08:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 e257: 6 total, 6 up, 6 in Nov 23 05:08:33 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e257: 6 total, 6 up, 6 
in Nov 23 05:08:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 989 B/s rd, 157 KiB/s wr, 14 op/s Nov 23 05:08:34 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:08:34 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:08:34 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:08:34 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice_bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:08:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:34 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:34 localhost nova_compute[280939]: 2025-11-23 10:08:34.121 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:34 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:34 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", 
"format": "json"} : dispatch Nov 23 05:08:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:08:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:08:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:08:34 localhost systemd[1]: tmp-crun.wi7jqr.mount: Deactivated successfully. Nov 23 05:08:34 localhost podman[326308]: 2025-11-23 10:08:34.899335937 +0000 UTC m=+0.083699639 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:08:34 localhost podman[326308]: 2025-11-23 10:08:34.910370357 +0000 UTC m=+0.094734069 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:08:34 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:08:34 localhost systemd[1]: tmp-crun.nzUQw5.mount: Deactivated successfully. Nov 23 05:08:34 localhost podman[326307]: 2025-11-23 10:08:34.954090853 +0000 UTC m=+0.140263791 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:08:34 localhost podman[326307]: 2025-11-23 10:08:34.969321043 +0000 UTC m=+0.155494051 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:08:34 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:08:35 localhost podman[326309]: 2025-11-23 10:08:35.054926519 +0000 UTC m=+0.234680990 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller) Nov 23 05:08:35 localhost podman[326309]: 2025-11-23 10:08:35.091598729 +0000 UTC m=+0.271353210 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller) Nov 23 05:08:35 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:08:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 141 KiB/s wr, 11 op/s Nov 23 05:08:35 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f99c71c2-29eb-4c61-ab24-07800f6e4b24", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:08:35 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, vol_name:cephfs) < "" Nov 23 05:08:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:08:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} v 0) Nov 23 05:08:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:08:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f99c71c2-29eb-4c61-ab24-07800f6e4b24", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, 
format:json, prefix:fs subvolume evict, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1431575460, client_metadata.root=/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/458725ea-f0f9-482a-b03b-0e03f2cd7019 Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f99c71c2-29eb-4c61-ab24-07800f6e4b24", "format": "json"}]: dispatch Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:36.166+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f99c71c2-29eb-4c61-ab24-07800f6e4b24' of type subvolume Nov 23 05:08:36 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f99c71c2-29eb-4c61-ab24-07800f6e4b24' of type subvolume Nov 23 05:08:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f99c71c2-29eb-4c61-ab24-07800f6e4b24", "force": true, "format": "json"}]: dispatch Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24'' moved to trashcan Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f99c71c2-29eb-4c61-ab24-07800f6e4b24, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "11d20cc2-2db4-4032-9422-8ec84b489c39", "format": "json"}]: dispatch Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:11d20cc2-2db4-4032-9422-8ec84b489c39, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:11d20cc2-2db4-4032-9422-8ec84b489c39, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:36.279+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '11d20cc2-2db4-4032-9422-8ec84b489c39' of type subvolume Nov 23 05:08:36 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '11d20cc2-2db4-4032-9422-8ec84b489c39' of type subvolume Nov 23 05:08:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "11d20cc2-2db4-4032-9422-8ec84b489c39", "force": true, "format": "json"}]: dispatch Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:11d20cc2-2db4-4032-9422-8ec84b489c39, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/11d20cc2-2db4-4032-9422-8ec84b489c39'' moved to trashcan Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:11d20cc2-2db4-4032-9422-8ec84b489c39, vol_name:cephfs) < "" Nov 23 05:08:36 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:36 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:08:36 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:08:36 localhost openstack_network_exporter[241732]: ERROR 10:08:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:08:36 localhost openstack_network_exporter[241732]: ERROR 10:08:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:08:36 localhost openstack_network_exporter[241732]: ERROR 10:08:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:08:36 localhost openstack_network_exporter[241732]: ERROR 10:08:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:08:36 localhost openstack_network_exporter[241732]: Nov 23 05:08:36 localhost openstack_network_exporter[241732]: ERROR 10:08:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:08:36 localhost openstack_network_exporter[241732]: Nov 23 05:08:37 localhost nova_compute[280939]: 2025-11-23 10:08:37.472 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:37 localhost ceph-mgr[286671]: 
log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:08:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:08:37 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:08:37 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 23 05:08:37 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:08:37 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:08:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 141 KiB/s wr, 11 op/s Nov 23 05:08:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:08:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:08:37 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:38 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:08:38 localhost ceph-mon[293353]: from='mgr.44369 
172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:08:38 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:08:39 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0066f586-ad80-42ee-9cb5-57bd65fc15e2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, vol_name:cephfs) < "" Nov 23 05:08:39 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/.meta.tmp' Nov 23 05:08:39 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/.meta.tmp' to config b'/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/.meta' Nov 23 05:08:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, vol_name:cephfs) < "" Nov 23 05:08:39 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0066f586-ad80-42ee-9cb5-57bd65fc15e2", "format": "json"}]: dispatch Nov 23 05:08:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, vol_name:cephfs) < "" Nov 23 05:08:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, vol_name:cephfs) < "" Nov 23 05:08:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:39 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:39 localhost nova_compute[280939]: 2025-11-23 10:08:39.171 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 144 KiB/s wr, 12 op/s Nov 23 05:08:40 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:08:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:08:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:40 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:08:40 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:40 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:40 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:41 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:41 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:41 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 144 KiB/s wr, 12 op/s Nov 23 05:08:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:08:41 localhost systemd[1]: tmp-crun.MBfM7O.mount: Deactivated successfully. Nov 23 05:08:41 localhost podman[326374]: 2025-11-23 10:08:41.9037648 +0000 UTC m=+0.088429486 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7) Nov 23 05:08:41 localhost podman[326374]: 2025-11-23 10:08:41.921406123 +0000 UTC m=+0.106070809 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, version=9.6) Nov 23 05:08:41 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
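
The audit entries immediately before and after this point record the same CephFS share lifecycle repeatedly: the ceph-mgr "volumes" module dispatches "fs subvolume create", "fs subvolume getpath" and "fs subvolume authorize" commands from entity client.openstack, and each authorize is translated on the mon into an "auth get-or-create" for client.<auth_id> with mds/osd/mon caps scoped to the subvolume path and its fsvolumens_* RADOS namespace. The following is a minimal sketch, assuming python3-rados is installed and the named client has a keyring with sufficient mon/mgr caps, of how a client could dispatch the same JSON payloads that appear in these log lines; the config path, subvolume name, auth_id and tenant_id below are illustrative placeholders, not values taken from this host, and the exact accepted fields may vary by Ceph release.

    import json
    import rados

    CLIENT_NAME = "client.openstack"     # assumed: same entity seen in the audit entries
    CONF_PATH = "/etc/ceph/ceph.conf"    # assumed config path

    def mgr_call(cluster, payload):
        # Dispatch one JSON command to the active ceph-mgr; this mirrors the
        # payload shape the volumes module logs as ": dispatch" above.
        ret, outbuf, outs = cluster.mgr_command(json.dumps(payload), b"")
        if ret != 0:
            # e.g. a negative EOPNOTSUPP (95), as in the 'fs clone status'
            # replies on plain subvolumes seen earlier in this log
            raise RuntimeError(f"mgr command failed ({ret}): {outs}")
        return outbuf.decode() if outbuf else ""

    cluster = rados.Rados(conffile=CONF_PATH, name=CLIENT_NAME)
    cluster.connect()
    try:
        # 1 GiB namespace-isolated subvolume, mirroring 'fs subvolume create'.
        mgr_call(cluster, {
            "prefix": "fs subvolume create",
            "vol_name": "cephfs",
            "sub_name": "example-share",          # illustrative name
            "size": 1073741824,
            "namespace_isolated": True,
            "mode": "0755",
            "format": "json",
        })
        # 'fs subvolume getpath' returns the /volumes/_nogroup/<name>/<uuid>
        # path that later appears inside the mds 'allow rw path=...' cap.
        share_path = mgr_call(cluster, {
            "prefix": "fs subvolume getpath",
            "vol_name": "cephfs",
            "sub_name": "example-share",
            "format": "json",
        })
        # 'fs subvolume authorize' makes the mgr run the mon-side
        # 'auth get-or-create client.<auth_id>' with scoped caps.
        keyring = mgr_call(cluster, {
            "prefix": "fs subvolume authorize",
            "vol_name": "cephfs",
            "sub_name": "example-share",
            "auth_id": "example-client",          # illustrative cephx id
            "tenant_id": "0123456789abcdef0123456789abcdef",  # illustrative
            "access_level": "rw",
            "format": "json",
        })
        print(share_path)
        print(keyring)
    finally:
        cluster.shutdown()

On the mon side each such authorize shows up exactly as in the surrounding entries: an "auth get-or-create" for client.<auth_id> carrying "mds allow rw path=<subvolume path>", "osd allow rw pool=manila_data namespace=fsvolumens_<subvolume>" and "mon allow r"; the later deauthorize/evict pair removes that key with "auth rm" and evicts any clients still mounted under the subvolume path.
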
Nov 23 05:08:42 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0066f586-ad80-42ee-9cb5-57bd65fc15e2", "auth_id": "tempest-cephx-id-1431575460", "tenant_id": "4d4586d45fe545c4bf78ea7720359b10", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:08:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:08:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:08:42 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID tempest-cephx-id-1431575460 with tenant 4d4586d45fe545c4bf78ea7720359b10 Nov 23 05:08:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/7acd4543-b091-4fe9-abe4-01b783b9e751", "osd", "allow rw pool=manila_data namespace=fsvolumens_0066f586-ad80-42ee-9cb5-57bd65fc15e2", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:42 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/7acd4543-b091-4fe9-abe4-01b783b9e751", "osd", "allow rw pool=manila_data namespace=fsvolumens_0066f586-ad80-42ee-9cb5-57bd65fc15e2", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:42 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/7acd4543-b091-4fe9-abe4-01b783b9e751", "osd", "allow rw pool=manila_data namespace=fsvolumens_0066f586-ad80-42ee-9cb5-57bd65fc15e2", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:42 localhost nova_compute[280939]: 2025-11-23 10:08:42.508 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:08:42 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : 
dispatch Nov 23 05:08:42 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/7acd4543-b091-4fe9-abe4-01b783b9e751", "osd", "allow rw pool=manila_data namespace=fsvolumens_0066f586-ad80-42ee-9cb5-57bd65fc15e2", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:42 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/7acd4543-b091-4fe9-abe4-01b783b9e751", "osd", "allow rw pool=manila_data namespace=fsvolumens_0066f586-ad80-42ee-9cb5-57bd65fc15e2", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 695 B/s rd, 140 KiB/s wr, 12 op/s Nov 23 05:08:44 localhost nova_compute[280939]: 2025-11-23 10:08:44.195 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:44 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:08:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:08:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:44 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 23 05:08:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:08:44 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:08:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:44 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": 
"c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:08:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:44 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:08:44 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:08:45 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:08:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 169 KiB/s wr, 15 op/s Nov 23 05:08:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0066f586-ad80-42ee-9cb5-57bd65fc15e2", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, vol_name:cephfs) < "" Nov 23 05:08:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:08:46 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:46 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} v 0) Nov 23 05:08:46 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:08:46 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, vol_name:cephfs) < "" Nov 23 05:08:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0066f586-ad80-42ee-9cb5-57bd65fc15e2", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, vol_name:cephfs) < "" Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1431575460, client_metadata.root=/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/7acd4543-b091-4fe9-abe4-01b783b9e751 Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, vol_name:cephfs) < "" Nov 23 05:08:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:08:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:08:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0066f586-ad80-42ee-9cb5-57bd65fc15e2", "format": "json"}]: dispatch Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:46 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0066f586-ad80-42ee-9cb5-57bd65fc15e2' of type subvolume Nov 23 05:08:46 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:46.396+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0066f586-ad80-42ee-9cb5-57bd65fc15e2' of type subvolume Nov 23 05:08:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0066f586-ad80-42ee-9cb5-57bd65fc15e2", "force": true, "format": "json"}]: dispatch Nov 23 05:08:46 localhost ceph-mgr[286671]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, vol_name:cephfs) < "" Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2'' moved to trashcan Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0066f586-ad80-42ee-9cb5-57bd65fc15e2, vol_name:cephfs) < "" Nov 23 05:08:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:08:46.894 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:08:46 localhost nova_compute[280939]: 2025-11-23 10:08:46.895 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:46 localhost ovn_metadata_agent[159410]: 2025-11-23 10:08:46.896 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:08:47 localhost podman[239764]: time="2025-11-23T10:08:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:08:47 localhost podman[239764]: @ - - [23/Nov/2025:10:08:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:08:47 localhost podman[239764]: @ - - [23/Nov/2025:10:08:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18769 "" "Go-http-client/1.1" Nov 23 05:08:47 localhost nova_compute[280939]: 2025-11-23 10:08:47.548 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 117 KiB/s wr, 10 op/s Nov 23 05:08:47 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:08:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:47 
localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:08:47 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:47 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:08:47 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:47 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:47 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:08:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
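
The audit trail above pairs each "fs subvolume authorize" request from client.openstack with an "auth get-or-create" on the mon that carries exactly three caps: an mds path cap on the subvolume mount point, an osd cap scoped to pool=manila_data and the fsvolumens_<subvolume> namespace, and "mon allow r", with the access level ("r" for the "alice bob" id above, "rw" elsewhere) substituted into the mds and osd grants. A minimal sketch of deriving the same cap set from an access level and applying it with the stock `ceph` CLI; this is an illustration, not the ceph-mgr volumes code itself, and assumes the CLI and an admin keyring are available on the host:

    # Sketch only: builds mds/osd/mon caps like the "auth get-or-create" audit
    # lines above and hands them to the ceph CLI. The path, pool and namespace
    # values in the commented call are copied from the log entries above.
    import json
    import subprocess

    def authorize_subvolume(auth_id: str, sub_path: str, pool: str,
                            namespace: str, access_level: str = "rw") -> dict:
        caps = [
            "mds", f"allow {access_level} path={sub_path}",
            "osd", f"allow {access_level} pool={pool} namespace={namespace}",
            "mon", "allow r",
        ]
        out = subprocess.run(
            ["ceph", "auth", "get-or-create", f"client.{auth_id}", *caps,
             "-f", "json"],
            check=True, capture_output=True, text=True,
        )
        return json.loads(out.stdout)

    # Values taken from the audit entries above (auth_id may contain a space):
    # authorize_subvolume(
    #     "alice bob",
    #     "/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6",
    #     pool="manila_data",
    #     namespace="fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4",
    #     access_level="r",
    # )
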
Nov 23 05:08:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:47 localhost podman[326396]: 2025-11-23 10:08:47.906055281 +0000 UTC m=+0.089469587 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute) Nov 23 05:08:47 localhost podman[326396]: 2025-11-23 10:08:47.947704253 +0000 UTC m=+0.131118539 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3) Nov 23 05:08:47 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:08:47 localhost podman[326397]: 2025-11-23 10:08:47.967244035 +0000 UTC m=+0.149092273 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:08:47 localhost podman[326397]: 2025-11-23 10:08:47.976516061 +0000 UTC m=+0.158364289 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 05:08:47 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
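
The health_status/exec_died pairs above come from the transient "podman healthcheck run <id>" units that systemd starts and tears down around them, and the earlier "GET /v4.9.3/libpod/containers/json?all=true ..." lines show the podman service answering libpod REST calls on the socket named in the exporter's CONTAINER_HOST (unix:///run/podman/podman.sock). A small sketch, reusing that socket path and API version and assuming nothing beyond the standard library, of issuing the same container-list query:

    # Sketch: raw HTTP over the podman UNIX socket, mirroring the GET line
    # seen above. Socket path and /v4.9.3 prefix are taken from the log.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket; enough for the libpod API."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    def list_containers(sock_path: str = "/run/podman/podman.sock"):
        conn = UnixHTTPConnection(sock_path)
        conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
        return json.loads(conn.getresponse().read())

    # for c in list_containers():
    #     print(c.get("Names"), c.get("State"))
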
Nov 23 05:08:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:48 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:48 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:48 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:49 localhost nova_compute[280939]: 2025-11-23 10:08:49.237 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "01ed1422-dec9-4d44-991d-9583c95296ac", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, vol_name:cephfs) < "" Nov 23 05:08:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:08:49 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:08:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:08:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:08:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:08:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:08:49 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev d6cb9c17-3ea0-47d5-8f3b-ad1699f2eb2b (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:08:49 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev d6cb9c17-3ea0-47d5-8f3b-ad1699f2eb2b (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:08:49 localhost ceph-mgr[286671]: [progress INFO root] 
Completed event d6cb9c17-3ea0-47d5-8f3b-ad1699f2eb2b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:08:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:08:49 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:08:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/.meta.tmp' Nov 23 05:08:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/.meta.tmp' to config b'/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/.meta' Nov 23 05:08:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, vol_name:cephfs) < "" Nov 23 05:08:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "01ed1422-dec9-4d44-991d-9583c95296ac", "format": "json"}]: dispatch Nov 23 05:08:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, vol_name:cephfs) < "" Nov 23 05:08:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, vol_name:cephfs) < "" Nov 23 05:08:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:49 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 178 KiB/s wr, 15 op/s Nov 23 05:08:50 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:08:50 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:08:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:08:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:51 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. 
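
Read together, the volumes-module entries trace the cycle the tempest run keeps repeating through client.openstack: "fs subvolume create" (1 GiB, namespace-isolated, mode 0755), "getpath" for the export location, "authorize", then "deauthorize", "evict" and a forced "rm" that moves the subvolume path to the trashcan for the async purge job. A compact sketch of the same sequence driven through the `ceph fs subvolume` CLI; the subvolume name below is a placeholder, and the flag spellings follow the mgr volumes plugin but should be checked against the release in use:

    # Sketch of the subvolume lifecycle visible in the audit log above.
    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout.strip()

    def exercise_subvolume(vol: str, sub: str, auth_id: str) -> None:
        ceph("fs", "subvolume", "create", vol, sub,
             "--size", str(1 << 30), "--namespace-isolated", "--mode", "0755")
        path = ceph("fs", "subvolume", "getpath", vol, sub)   # export location
        print("export path:", path)
        ceph("fs", "subvolume", "authorize", vol, sub, auth_id,
             "--access_level", "rw")
        ceph("fs", "subvolume", "deauthorize", vol, sub, auth_id)
        ceph("fs", "subvolume", "evict", vol, sub, auth_id)
        ceph("fs", "subvolume", "rm", vol, sub, "--force")

    # Placeholder subvolume name; the auth id matches the one in the log:
    # exercise_subvolume("cephfs", "smoke-test-subvol",
    #                    "tempest-cephx-id-1431575460")
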
Nov 23 05:08:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:08:51 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 23 05:08:51 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:08:51 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:08:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:08:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:08:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:51 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:08:51 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:08:51 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:08:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 111 KiB/s wr, 9 op/s Nov 23 05:08:51 localhost ovn_metadata_agent[159410]: 2025-11-23 10:08:51.897 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:08:52 localhost nova_compute[280939]: 2025-11-23 10:08:52.578 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:52 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "01ed1422-dec9-4d44-991d-9583c95296ac", "auth_id": "tempest-cephx-id-1431575460", "tenant_id": "4d4586d45fe545c4bf78ea7720359b10", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:08:52 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:08:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:08:52 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:52 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID tempest-cephx-id-1431575460 with tenant 4d4586d45fe545c4bf78ea7720359b10 Nov 23 05:08:52 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/7b7be716-7e6a-49c4-a977-3707f1cc08b5", "osd", "allow rw pool=manila_data namespace=fsvolumens_01ed1422-dec9-4d44-991d-9583c95296ac", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:52 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/7b7be716-7e6a-49c4-a977-3707f1cc08b5", "osd", "allow rw pool=manila_data namespace=fsvolumens_01ed1422-dec9-4d44-991d-9583c95296ac", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:52 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/7b7be716-7e6a-49c4-a977-3707f1cc08b5", "osd", "allow rw pool=manila_data namespace=fsvolumens_01ed1422-dec9-4d44-991d-9583c95296ac", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:52 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:08:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 
343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:08:53 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:53 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/7b7be716-7e6a-49c4-a977-3707f1cc08b5", "osd", "allow rw pool=manila_data namespace=fsvolumens_01ed1422-dec9-4d44-991d-9583c95296ac", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:53 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/7b7be716-7e6a-49c4-a977-3707f1cc08b5", "osd", "allow rw pool=manila_data namespace=fsvolumens_01ed1422-dec9-4d44-991d-9583c95296ac", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 111 KiB/s wr, 9 op/s Nov 23 05:08:53 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:08:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:08:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:08:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "185a1676-923a-42e3-bf21-4993a35dc3a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:185a1676-923a-42e3-bf21-4993a35dc3a8, vol_name:cephfs) < "" Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/185a1676-923a-42e3-bf21-4993a35dc3a8/.meta.tmp' Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/185a1676-923a-42e3-bf21-4993a35dc3a8/.meta.tmp' to config b'/volumes/_nogroup/185a1676-923a-42e3-bf21-4993a35dc3a8/.meta' Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:185a1676-923a-42e3-bf21-4993a35dc3a8, vol_name:cephfs) < "" Nov 23 05:08:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "185a1676-923a-42e3-bf21-4993a35dc3a8", "format": "json"}]: dispatch Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:185a1676-923a-42e3-bf21-4993a35dc3a8, vol_name:cephfs) < "" Nov 23 05:08:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:185a1676-923a-42e3-bf21-4993a35dc3a8, vol_name:cephfs) < "" Nov 23 05:08:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:53 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:08:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:54 localhost nova_compute[280939]: 2025-11-23 10:08:54.274 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:08:54 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:54 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:08:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:54 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:54 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:08:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:08:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 166 KiB/s wr, 14 op/s Nov 23 05:08:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "01ed1422-dec9-4d44-991d-9583c95296ac", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, vol_name:cephfs) < "" Nov 23 05:08:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:08:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} v 0) Nov 23 05:08:56 localhost 
ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:08:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, vol_name:cephfs) < "" Nov 23 05:08:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "01ed1422-dec9-4d44-991d-9583c95296ac", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, vol_name:cephfs) < "" Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1431575460, client_metadata.root=/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/7b7be716-7e6a-49c4-a977-3707f1cc08b5 Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, vol_name:cephfs) < "" Nov 23 05:08:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "01ed1422-dec9-4d44-991d-9583c95296ac", "format": "json"}]: dispatch Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:01ed1422-dec9-4d44-991d-9583c95296ac, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:01ed1422-dec9-4d44-991d-9583c95296ac, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:08:56 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:08:56.462+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '01ed1422-dec9-4d44-991d-9583c95296ac' of type subvolume Nov 23 05:08:56 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '01ed1422-dec9-4d44-991d-9583c95296ac' of type subvolume Nov 23 05:08:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "01ed1422-dec9-4d44-991d-9583c95296ac", "force": true, "format": "json"}]: dispatch Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, vol_name:cephfs) < "" Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac'' moved to trashcan Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:08:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:01ed1422-dec9-4d44-991d-9583c95296ac, vol_name:cephfs) < "" Nov 23 05:08:56 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:56 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:08:56 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:08:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "30bb7bc1-b3ad-4313-b9fb-0702c023b9cf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:30bb7bc1-b3ad-4313-b9fb-0702c023b9cf, vol_name:cephfs) < "" Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/30bb7bc1-b3ad-4313-b9fb-0702c023b9cf/.meta.tmp' Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/30bb7bc1-b3ad-4313-b9fb-0702c023b9cf/.meta.tmp' to config b'/volumes/_nogroup/30bb7bc1-b3ad-4313-b9fb-0702c023b9cf/.meta' Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:30bb7bc1-b3ad-4313-b9fb-0702c023b9cf, vol_name:cephfs) < "" Nov 23 05:08:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "30bb7bc1-b3ad-4313-b9fb-0702c023b9cf", "format": "json"}]: dispatch Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:30bb7bc1-b3ad-4313-b9fb-0702c023b9cf, vol_name:cephfs) < "" Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:30bb7bc1-b3ad-4313-b9fb-0702c023b9cf, vol_name:cephfs) < "" Nov 23 05:08:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:08:57 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' 
entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:08:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 117 KiB/s wr, 9 op/s Nov 23 05:08:57 localhost nova_compute[280939]: 2025-11-23 10:08:57.624 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:08:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 23 05:08:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:08:57 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:08:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:08:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 
kv_alloc: 318767104 Nov 23 05:08:58 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:08:58 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:08:58 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:08:59 localhost nova_compute[280939]: 2025-11-23 10:08:59.316 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:08:59 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "tenant_id": "4d4586d45fe545c4bf78ea7720359b10", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:08:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:08:59 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:08:59 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID tempest-cephx-id-1431575460 with tenant 4d4586d45fe545c4bf78ea7720359b10 Nov 23 05:08:59 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:08:59 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:59 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", 
"mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:08:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 190 KiB/s wr, 16 op/s Nov 23 05:08:59 localhost sshd[326528]: main: sshd: ssh-rsa algorithm is disabled Nov 23 05:08:59 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:08:59 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:08:59 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:08:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:08:59 localhost systemd[1]: tmp-crun.RtUhNG.mount: Deactivated successfully. 
Nov 23 05:08:59 localhost podman[326530]: 2025-11-23 10:08:59.910049957 +0000 UTC m=+0.096225156 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2) Nov 23 05:08:59 localhost podman[326530]: 2025-11-23 10:08:59.944447516 +0000 UTC m=+0.130622675 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:08:59 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:09:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:09:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:09:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:09:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:09:00 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:09:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:00 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:01 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": 
"30bb7bc1-b3ad-4313-b9fb-0702c023b9cf", "format": "json"}]: dispatch Nov 23 05:09:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:30bb7bc1-b3ad-4313-b9fb-0702c023b9cf, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:30bb7bc1-b3ad-4313-b9fb-0702c023b9cf, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:01 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:01.446+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '30bb7bc1-b3ad-4313-b9fb-0702c023b9cf' of type subvolume Nov 23 05:09:01 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '30bb7bc1-b3ad-4313-b9fb-0702c023b9cf' of type subvolume Nov 23 05:09:01 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "30bb7bc1-b3ad-4313-b9fb-0702c023b9cf", "force": true, "format": "json"}]: dispatch Nov 23 05:09:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:30bb7bc1-b3ad-4313-b9fb-0702c023b9cf, vol_name:cephfs) < "" Nov 23 05:09:01 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/30bb7bc1-b3ad-4313-b9fb-0702c023b9cf'' moved to trashcan Nov 23 05:09:01 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:09:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:30bb7bc1-b3ad-4313-b9fb-0702c023b9cf, vol_name:cephfs) < "" Nov 23 05:09:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 128 KiB/s wr, 10 op/s Nov 23 05:09:01 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:09:01 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:01 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:02 localhost nova_compute[280939]: 2025-11-23 10:09:02.670 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Nov 23 05:09:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:09:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} v 0) Nov 23 05:09:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:09:03 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1431575460, client_metadata.root=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997 Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": 
"d74ea3e8-4393-4bff-9c03-d13b44062e75", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d74ea3e8-4393-4bff-9c03-d13b44062e75, vol_name:cephfs) < "" Nov 23 05:09:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 128 KiB/s wr, 10 op/s Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d74ea3e8-4393-4bff-9c03-d13b44062e75/.meta.tmp' Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d74ea3e8-4393-4bff-9c03-d13b44062e75/.meta.tmp' to config b'/volumes/_nogroup/d74ea3e8-4393-4bff-9c03-d13b44062e75/.meta' Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d74ea3e8-4393-4bff-9c03-d13b44062e75, vol_name:cephfs) < "" Nov 23 05:09:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d74ea3e8-4393-4bff-9c03-d13b44062e75", "format": "json"}]: dispatch Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d74ea3e8-4393-4bff-9c03-d13b44062e75, vol_name:cephfs) < "" Nov 23 05:09:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d74ea3e8-4393-4bff-9c03-d13b44062e75, vol_name:cephfs) < "" Nov 23 05:09:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:03 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:03 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:03 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:09:03 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:09:04 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, 
vol_name:cephfs) < "" Nov 23 05:09:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 23 05:09:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:09:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 23 05:09:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:09:04 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:04 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice", "format": "json"}]: dispatch Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:04 localhost nova_compute[280939]: 2025-11-23 10:09:04.357 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:04 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dce3a629-c5fc-45c8-88f2-5a0994d73c89", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dce3a629-c5fc-45c8-88f2-5a0994d73c89, vol_name:cephfs) < "" Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dce3a629-c5fc-45c8-88f2-5a0994d73c89/.meta.tmp' Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/dce3a629-c5fc-45c8-88f2-5a0994d73c89/.meta.tmp' to config b'/volumes/_nogroup/dce3a629-c5fc-45c8-88f2-5a0994d73c89/.meta' Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dce3a629-c5fc-45c8-88f2-5a0994d73c89, vol_name:cephfs) < "" Nov 23 05:09:04 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dce3a629-c5fc-45c8-88f2-5a0994d73c89", "format": "json"}]: dispatch Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dce3a629-c5fc-45c8-88f2-5a0994d73c89, vol_name:cephfs) < "" Nov 23 05:09:04 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dce3a629-c5fc-45c8-88f2-5a0994d73c89, vol_name:cephfs) < "" Nov 23 05:09:04 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:04 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 23 05:09:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 23 05:09:04 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 23 05:09:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 198 KiB/s wr, 17 op/s Nov 23 05:09:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:09:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:09:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:09:05 localhost systemd[1]: tmp-crun.LW4ezp.mount: Deactivated successfully. 
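
The preceding entries show the mgr creating subvolume dce3a629-c5fc-45c8-88f2-5a0994d73c89 with a 1 GiB quota, an isolated RADOS namespace and mode 0755 (writing .meta.tmp and renaming it into place), then resolving its path via "fs subvolume getpath". A minimal sketch of the equivalent CLI calls, assuming ceph CLI access, is given below; the parameters mirror the audit entries.

    # Sketch only: CLI equivalents of the "fs subvolume create" / "getpath"
    # mgr commands dispatched above.
    import subprocess

    sub = "dce3a629-c5fc-45c8-88f2-5a0994d73c89"
    subprocess.run(["ceph", "fs", "subvolume", "create", "cephfs", sub,
                    "--size", "1073741824", "--namespace-isolated",
                    "--mode", "0755"], check=True)

    path = subprocess.run(["ceph", "fs", "subvolume", "getpath", "cephfs", sub],
                          capture_output=True, text=True, check=True).stdout.strip()
    # Prints something of the form /volumes/_nogroup/<sub_name>/<uuid>, which is
    # the path later embedded in the mds caps.
    print(path)
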
Nov 23 05:09:05 localhost podman[326549]: 2025-11-23 10:09:05.910414949 +0000 UTC m=+0.094335076 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:09:05 localhost podman[326550]: 2025-11-23 10:09:05.955777567 +0000 UTC m=+0.137998962 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:09:05 localhost podman[326550]: 2025-11-23 10:09:05.968411066 +0000 UTC m=+0.150632411 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 05:09:05 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:09:06 localhost podman[326549]: 2025-11-23 10:09:06.071294576 +0000 UTC m=+0.255214663 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 05:09:06 localhost podman[326551]: 2025-11-23 10:09:06.091198209 +0000 UTC m=+0.270816504 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:09:06 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:09:06 localhost podman[326551]: 2025-11-23 10:09:06.159472611 +0000 UTC m=+0.339090906 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 05:09:06 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
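
The systemd and podman entries above are the per-container healthcheck cycle: a transient unit runs "/usr/bin/podman healthcheck run <id>", the container executes the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/<name>, the health_status/exec_died events are logged, and the unit deactivates. A small illustrative sketch of that probe, assuming podman is available on the host, is shown below; the container ID is copied from the ovn_controller record.

    # Sketch only: what each transient healthcheck unit effectively executes.
    # Exit code 0 means the container's configured healthcheck reported healthy.
    import subprocess

    cid = "900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291"  # ovn_controller
    result = subprocess.run(["podman", "healthcheck", "run", cid])
    print("healthy" if result.returncode == 0 else "unhealthy")
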
Nov 23 05:09:06 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "tenant_id": "4d4586d45fe545c4bf78ea7720359b10", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:09:06 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:09:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:09:06 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:06 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID tempest-cephx-id-1431575460 with tenant 4d4586d45fe545c4bf78ea7720359b10 Nov 23 05:09:06 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:09:06 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:06 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:06 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:09:06 localhost openstack_network_exporter[241732]: ERROR 10:09:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:09:06 localhost openstack_network_exporter[241732]: ERROR 10:09:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:09:06 localhost openstack_network_exporter[241732]: ERROR 
10:09:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:09:06 localhost openstack_network_exporter[241732]: ERROR 10:09:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:09:06 localhost openstack_network_exporter[241732]: Nov 23 05:09:06 localhost openstack_network_exporter[241732]: ERROR 10:09:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:09:06 localhost openstack_network_exporter[241732]: Nov 23 05:09:07 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:07 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:07 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:07 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:09:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:09:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:09:07 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice_bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:09:07 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:09:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:07 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:07 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d74ea3e8-4393-4bff-9c03-d13b44062e75", "format": "json"}]: dispatch Nov 23 05:09:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d74ea3e8-4393-4bff-9c03-d13b44062e75, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d74ea3e8-4393-4bff-9c03-d13b44062e75, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:07 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd74ea3e8-4393-4bff-9c03-d13b44062e75' of type subvolume Nov 23 05:09:07 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:07.587+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd74ea3e8-4393-4bff-9c03-d13b44062e75' of type subvolume Nov 23 05:09:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 142 KiB/s wr, 12 op/s Nov 23 05:09:07 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d74ea3e8-4393-4bff-9c03-d13b44062e75", "force": true, "format": "json"}]: dispatch Nov 23 05:09:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d74ea3e8-4393-4bff-9c03-d13b44062e75, vol_name:cephfs) < "" Nov 23 05:09:07 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d74ea3e8-4393-4bff-9c03-d13b44062e75'' moved to trashcan Nov 23 05:09:07 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:09:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d74ea3e8-4393-4bff-9c03-d13b44062e75, 
vol_name:cephfs) < "" Nov 23 05:09:07 localhost nova_compute[280939]: 2025-11-23 10:09:07.703 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:08 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:09:08 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:08 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dce3a629-c5fc-45c8-88f2-5a0994d73c89", "format": "json"}]: dispatch Nov 23 05:09:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dce3a629-c5fc-45c8-88f2-5a0994d73c89, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dce3a629-c5fc-45c8-88f2-5a0994d73c89, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:09 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:09.230+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dce3a629-c5fc-45c8-88f2-5a0994d73c89' of type subvolume Nov 23 05:09:09 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dce3a629-c5fc-45c8-88f2-5a0994d73c89' of type subvolume Nov 23 05:09:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dce3a629-c5fc-45c8-88f2-5a0994d73c89", "force": true, "format": "json"}]: dispatch Nov 23 05:09:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dce3a629-c5fc-45c8-88f2-5a0994d73c89, vol_name:cephfs) < "" Nov 23 05:09:09 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dce3a629-c5fc-45c8-88f2-5a0994d73c89'' moved to trashcan Nov 23 05:09:09 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 
05:09:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dce3a629-c5fc-45c8-88f2-5a0994d73c89, vol_name:cephfs) < "" Nov 23 05:09:09 localhost nova_compute[280939]: 2025-11-23 10:09:09.387 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 210 KiB/s wr, 18 op/s Nov 23 05:09:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:09:09.749 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:09:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:09:09.749 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:09:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:09:09.750 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:09:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:09:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:09:09 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:09 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} v 0) Nov 23 05:09:09 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:09:09 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:09:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, 
vol_name:cephfs) < "" Nov 23 05:09:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:09:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1431575460, client_metadata.root=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997 Nov 23 05:09:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:10 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:10 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:09:10 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:09:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "13173eeb-f983-4f00-b7e6-2bd2cef8c0e1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:13173eeb-f983-4f00-b7e6-2bd2cef8c0e1, vol_name:cephfs) < "" Nov 23 05:09:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/13173eeb-f983-4f00-b7e6-2bd2cef8c0e1/.meta.tmp' Nov 23 05:09:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/13173eeb-f983-4f00-b7e6-2bd2cef8c0e1/.meta.tmp' to config b'/volumes/_nogroup/13173eeb-f983-4f00-b7e6-2bd2cef8c0e1/.meta' Nov 23 05:09:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:13173eeb-f983-4f00-b7e6-2bd2cef8c0e1, vol_name:cephfs) < "" Nov 23 05:09:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "13173eeb-f983-4f00-b7e6-2bd2cef8c0e1", "format": "json"}]: dispatch Nov 23 05:09:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:13173eeb-f983-4f00-b7e6-2bd2cef8c0e1, vol_name:cephfs) < "" Nov 23 05:09:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:13173eeb-f983-4f00-b7e6-2bd2cef8c0e1, vol_name:cephfs) < "" Nov 23 05:09:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:10 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:11 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:09:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:09:11 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:09:11 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 23 05:09:11 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:09:11 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:09:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:11 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:09:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:11 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:09:11 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, 
prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 137 KiB/s wr, 11 op/s Nov 23 05:09:11 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:09:11 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:09:11 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:09:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6f0e6dc5-2184-4beb-90f6-26a08efe35ab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6f0e6dc5-2184-4beb-90f6-26a08efe35ab, vol_name:cephfs) < "" Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.582 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.584 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:09:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:09:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6f0e6dc5-2184-4beb-90f6-26a08efe35ab/.meta.tmp' Nov 23 05:09:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6f0e6dc5-2184-4beb-90f6-26a08efe35ab/.meta.tmp' to config b'/volumes/_nogroup/6f0e6dc5-2184-4beb-90f6-26a08efe35ab/.meta' Nov 23 05:09:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6f0e6dc5-2184-4beb-90f6-26a08efe35ab, vol_name:cephfs) < "" Nov 23 05:09:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6f0e6dc5-2184-4beb-90f6-26a08efe35ab", "format": "json"}]: dispatch Nov 23 05:09:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6f0e6dc5-2184-4beb-90f6-26a08efe35ab, vol_name:cephfs) < "" Nov 23 05:09:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6f0e6dc5-2184-4beb-90f6-26a08efe35ab, vol_name:cephfs) < "" Nov 23 05:09:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:12 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:12 localhost nova_compute[280939]: 2025-11-23 10:09:12.736 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
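
The audit entries above trace the usual share-provisioning pair driven by client.openstack: "fs subvolume create" (1 GiB quota, namespace_isolated, mode 0755) dispatched to the mgr volumes module, followed by "fs subvolume getpath" to learn the export path under /volumes/_nogroup/. Below is a minimal sketch of issuing the same two mgr commands through the python3-rados bindings; the conffile/keyring paths are placeholders and the sub_name is just the example value reused from the log, not configuration taken from this deployment.

# Sketch: replay the "fs subvolume create" + "fs subvolume getpath" mgr commands
# recorded in the audit entries above via librados. Paths, client name and the
# sub_name are illustrative placeholders.
import json
import rados

SUB_NAME = "6f0e6dc5-2184-4beb-90f6-26a08efe35ab"   # example value from the audit log

def mgr_cmd(cluster, payload):
    # mgr_command takes a JSON-encoded command string plus an (empty) input buffer
    # and returns (retcode, output bytes, error string).
    return cluster.mgr_command(json.dumps(payload), b'')

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                      name="client.openstack",
                      conf={"keyring": "/etc/ceph/ceph.client.openstack.keyring"})  # placeholder path
cluster.connect()
try:
    ret, _, errs = mgr_cmd(cluster, {
        "prefix": "fs subvolume create", "vol_name": "cephfs",
        "sub_name": SUB_NAME, "size": 1073741824,
        "namespace_isolated": True, "mode": "0755", "format": "json"})
    if ret != 0:
        raise SystemExit(f"subvolume create failed ({ret}): {errs}")

    ret, out, errs = mgr_cmd(cluster, {
        "prefix": "fs subvolume getpath", "vol_name": "cephfs",
        "sub_name": SUB_NAME, "format": "json"})
    print("export path:", out.decode().strip())   # e.g. /volumes/_nogroup/<sub_name>/<uuid>
finally:
    cluster.shutdown()
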
Nov 23 05:09:12 localhost podman[326614]: 2025-11-23 10:09:12.898557462 +0000 UTC m=+0.085051331 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vendor=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 23 05:09:12 localhost podman[326614]: 2025-11-23 10:09:12.91345563 +0000 UTC m=+0.099949489 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible) Nov 23 05:09:12 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
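
The systemd/podman pair above is the transient health-probe pattern used throughout this node: systemd starts "/usr/bin/podman healthcheck run <container_id>", podman records a health_status container event, and the throwaway unit deactivates. A small sketch of invoking the same probe by hand is below; the container name is taken from the event above, and the exit-code handling (0 meaning healthy) is the usual podman convention rather than something this log asserts.

# Sketch: run the same health probe systemd launches above and report the result.
import subprocess

CONTAINER = "openstack_network_exporter"   # container_name shown in the podman event

proc = subprocess.run(["podman", "healthcheck", "run", CONTAINER],
                      capture_output=True, text=True)
if proc.returncode == 0:
    print(f"{CONTAINER}: healthy")
else:
    # non-zero generally means the healthcheck failed or could not run
    print(f"{CONTAINER}: unhealthy/error (rc={proc.returncode})",
          proc.stdout.strip(), proc.stderr.strip())
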
Nov 23 05:09:13 localhost nova_compute[280939]: 2025-11-23 10:09:13.162 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:09:13 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "tenant_id": "4d4586d45fe545c4bf78ea7720359b10", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:09:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:09:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:09:13 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:13 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID tempest-cephx-id-1431575460 with tenant 4d4586d45fe545c4bf78ea7720359b10 Nov 23 05:09:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:09:13 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:13 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, 
sub_name:561ab685-42f2-4920-b91d-2420296f93f0, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:09:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 137 KiB/s wr, 11 op/s Nov 23 05:09:13 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:13 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:13 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:14 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:09:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:14 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:09:14 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:09:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice_bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:09:14 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:09:14 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data 
namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:14 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:14 localhost nova_compute[280939]: 2025-11-23 10:09:14.428 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:14 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "13173eeb-f983-4f00-b7e6-2bd2cef8c0e1", "format": "json"}]: dispatch Nov 23 05:09:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:13173eeb-f983-4f00-b7e6-2bd2cef8c0e1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:13173eeb-f983-4f00-b7e6-2bd2cef8c0e1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:14.532+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '13173eeb-f983-4f00-b7e6-2bd2cef8c0e1' of type subvolume Nov 23 05:09:14 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '13173eeb-f983-4f00-b7e6-2bd2cef8c0e1' of type subvolume Nov 23 05:09:14 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "13173eeb-f983-4f00-b7e6-2bd2cef8c0e1", "force": true, "format": "json"}]: dispatch Nov 23 05:09:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:13173eeb-f983-4f00-b7e6-2bd2cef8c0e1, vol_name:cephfs) < "" Nov 23 05:09:14 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/13173eeb-f983-4f00-b7e6-2bd2cef8c0e1'' moved to trashcan Nov 23 05:09:14 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:09:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:13173eeb-f983-4f00-b7e6-2bd2cef8c0e1, vol_name:cephfs) < "" Nov 23 05:09:14 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 
05:09:14 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:14 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:15 localhost nova_compute[280939]: 2025-11-23 10:09:15.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:09:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 228 KiB/s wr, 20 op/s Nov 23 05:09:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6f0e6dc5-2184-4beb-90f6-26a08efe35ab", "format": "json"}]: dispatch Nov 23 05:09:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6f0e6dc5-2184-4beb-90f6-26a08efe35ab, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6f0e6dc5-2184-4beb-90f6-26a08efe35ab, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:15 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6f0e6dc5-2184-4beb-90f6-26a08efe35ab' of type subvolume Nov 23 05:09:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:15.981+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6f0e6dc5-2184-4beb-90f6-26a08efe35ab' of type subvolume Nov 23 05:09:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6f0e6dc5-2184-4beb-90f6-26a08efe35ab", "force": true, "format": "json"}]: dispatch Nov 23 05:09:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6f0e6dc5-2184-4beb-90f6-26a08efe35ab, vol_name:cephfs) < "" Nov 23 05:09:15 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6f0e6dc5-2184-4beb-90f6-26a08efe35ab'' moved to trashcan Nov 23 05:09:15 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:09:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6f0e6dc5-2184-4beb-90f6-26a08efe35ab, vol_name:cephfs) < "" Nov 23 05:09:16 localhost nova_compute[280939]: 2025-11-23 10:09:16.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:09:16 localhost nova_compute[280939]: 2025-11-23 10:09:16.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:09:16 localhost nova_compute[280939]: 2025-11-23 10:09:16.134 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:09:16 localhost nova_compute[280939]: 2025-11-23 10:09:16.147 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:09:16 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:09:16 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:16 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:09:16 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:16 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} v 0) Nov 23 05:09:16 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:09:16 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:09:16 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:16 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": 
"561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:09:16 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:16 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1431575460, client_metadata.root=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997 Nov 23 05:09:16 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:16 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:16 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:16 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:09:16 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:09:17 localhost podman[239764]: time="2025-11-23T10:09:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:09:17 localhost podman[239764]: @ - - [23/Nov/2025:10:09:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:09:17 localhost podman[239764]: @ - - [23/Nov/2025:10:09:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18770 "" "Go-http-client/1.1" Nov 23 05:09:17 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:09:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 23 05:09:17 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:09:17 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 23 05:09:17 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch 
Nov 23 05:09:17 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:09:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:17 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 23 05:09:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:17 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:09:17 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:17 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 158 KiB/s wr, 13 op/s Nov 23 05:09:17 localhost nova_compute[280939]: 2025-11-23 10:09:17.772 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:17 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 23 05:09:17 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 23 05:09:17 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 23 05:09:18 localhost nova_compute[280939]: 2025-11-23 10:09:18.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:09:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:09:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
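
The sequences above are the deny-access path: "fs subvolume deauthorize" makes the mgr look up and then remove the cephx entity on the mon ("auth get" followed by "auth rm"), and "fs subvolume evict" kicks any CephFS clients still mounted with that auth_id off the subvolume path. A sketch of the same two mgr commands via python3-rados follows; the auth_id and sub_name are the example values from this capture and the connection details are placeholders, as in the earlier sketch.

# Sketch: the deny-access pair recorded above ("fs subvolume deauthorize" followed by
# "fs subvolume evict"), issued through librados. Values are examples from the audit
# entries; conffile/keyring are placeholders.
import json
import rados

SUB_NAME = "c01bfd1f-c2a4-4505-b9fb-5784100d38f4"
AUTH_ID = "alice_bob"

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                      name="client.openstack",
                      conf={"keyring": "/etc/ceph/ceph.client.openstack.keyring"})
cluster.connect()
try:
    for prefix in ("fs subvolume deauthorize", "fs subvolume evict"):
        ret, out, errs = cluster.mgr_command(json.dumps({
            "prefix": prefix, "vol_name": "cephfs",
            "sub_name": SUB_NAME, "auth_id": AUTH_ID,
            "format": "json"}), b'')
        if ret != 0:
            raise SystemExit(f"{prefix} failed ({ret}): {errs}")
finally:
    cluster.shutdown()
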
Nov 23 05:09:18 localhost podman[326636]: 2025-11-23 10:09:18.904672191 +0000 UTC m=+0.089612382 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:09:18 localhost podman[326636]: 2025-11-23 10:09:18.919796256 +0000 UTC m=+0.104736437 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, 
org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm) Nov 23 05:09:18 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:09:19 localhost podman[326637]: 2025-11-23 10:09:19.007800848 +0000 UTC m=+0.189269342 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:09:19 localhost podman[326637]: 2025-11-23 10:09:19.041944089 +0000 UTC m=+0.223412573 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:09:19 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
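
The ceilometer_agent_compute container whose health probe just completed is the same process that emitted the long run of "Skip pollster <name>, no resources found this cycle" DEBUG lines earlier in this capture (no instances exist on this compute yet, so every libvirt-backed meter is skipped). A small, illustrative parser for tallying those skips from journal text is below; the regex simply matches the message format shown above, and the script expects the relevant journal slice on stdin (for example, piped from journalctl or a grep for ceilometer_agent_compute).

# Sketch: count the "Skip pollster <name>, no resources found this cycle" DEBUG lines
# emitted by ceilometer.polling.manager, reading journal text from stdin.
import re
import sys
from collections import Counter

SKIP_RE = re.compile(r"Skip pollster (\S+), no resources found this cycle")

skipped = Counter(m.group(1) for m in map(SKIP_RE.search, sys.stdin) if m)
for pollster, count in skipped.most_common():
    print(f"{pollster}\tskipped {count}x")
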
Nov 23 05:09:19 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "317c3401-a8ad-479f-9a0e-c3503597e474", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:317c3401-a8ad-479f-9a0e-c3503597e474, vol_name:cephfs) < "" Nov 23 05:09:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/317c3401-a8ad-479f-9a0e-c3503597e474/.meta.tmp' Nov 23 05:09:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/317c3401-a8ad-479f-9a0e-c3503597e474/.meta.tmp' to config b'/volumes/_nogroup/317c3401-a8ad-479f-9a0e-c3503597e474/.meta' Nov 23 05:09:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:317c3401-a8ad-479f-9a0e-c3503597e474, vol_name:cephfs) < "" Nov 23 05:09:19 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "317c3401-a8ad-479f-9a0e-c3503597e474", "format": "json"}]: dispatch Nov 23 05:09:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:317c3401-a8ad-479f-9a0e-c3503597e474, vol_name:cephfs) < "" Nov 23 05:09:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:317c3401-a8ad-479f-9a0e-c3503597e474, vol_name:cephfs) < "" Nov 23 05:09:19 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:19 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:19 localhost nova_compute[280939]: 2025-11-23 10:09:19.466 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 219 KiB/s wr, 19 op/s Nov 23 05:09:20 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "tenant_id": "4d4586d45fe545c4bf78ea7720359b10", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:09:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:09:20 localhost ceph-mon[293353]: 
mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:09:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID tempest-cephx-id-1431575460 with tenant 4d4586d45fe545c4bf78ea7720359b10 Nov 23 05:09:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:09:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume authorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, tenant_id:4d4586d45fe545c4bf78ea7720359b10, vol_name:cephfs) < "" Nov 23 05:09:20 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:20 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:20 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished Nov 
23 05:09:20 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:09:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:09:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:09:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:09:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:09:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:20 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:21 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:09:21 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:21 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 152 KiB/s wr, 14 op/s Nov 23 05:09:22 localhost nova_compute[280939]: 2025-11-23 10:09:22.808 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:23 localhost nova_compute[280939]: 2025-11-23 10:09:23.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:09:23 localhost nova_compute[280939]: 2025-11-23 10:09:23.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:09:23 localhost nova_compute[280939]: 2025-11-23 10:09:23.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:09:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:09:23 Nov 23 05:09:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:09:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:09:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['images', 'volumes', 'manila_data', 'vms', 'backups', 'manila_metadata', '.mgr'] Nov 23 05:09:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
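
The audit records above show the manila client driving the ceph-mgr "volumes" module through "fs subvolume create", "fs subvolume getpath" and "fs subvolume authorize", with the mgr translating the authorize call into the "auth get" / "auth get-or-create" mon commands logged by ceph-mon. The following is only an illustrative sketch of that client-side sequence using the plain `ceph` CLI: the subvolume and cephx IDs are copied from the log, the "--id openstack" keyring matches the `ceph df` invocations nova logs further down, and the flag spellings are assumed to follow the upstream `fs subvolume` command documentation for this release.

import subprocess

BASE = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

def ceph(*args: str) -> str:
    # Run one ceph CLI command and return its stripped stdout.
    return subprocess.check_output([*BASE, *args], text=True).strip()

# 1 GiB, namespace-isolated subvolume, mirroring the dispatched mgr command.
ceph("fs", "subvolume", "create", "cephfs",
     "317c3401-a8ad-479f-9a0e-c3503597e474",
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755")

# Resolve the export path (/volumes/_nogroup/<subvolume>/<uuid>).
path = ceph("fs", "subvolume", "getpath", "cephfs",
            "317c3401-a8ad-479f-9a0e-c3503597e474")

# Grant a cephx identity rw access; the mgr turns this into the
# "auth get" / "auth get-or-create" mon commands seen in the audit channel.
key = ceph("fs", "subvolume", "authorize", "cephfs",
           "561ab685-42f2-4920-b91d-2420296f93f0",
           "tempest-cephx-id-1431575460", "--access_level", "rw")
print(path, key)
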
Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:09:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:09:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} v 0) Nov 23 05:09:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} v 0) Nov 23 05:09:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:09:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume deauthorize, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "auth_id": "tempest-cephx-id-1431575460", "format": "json"}]: dispatch Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1431575460, client_metadata.root=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997 Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1431575460, format:json, prefix:fs subvolume evict, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 
active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 152 KiB/s wr, 14 op/s Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.1810441094360693e-06 of space, bias 1.0, pg target 0.00043402777777777775 quantized to 32 (current 32) Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:09:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0015589012772194305 of space, bias 4.0, pg target 1.2408854166666667 quantized to 16 (current 16) Nov 23 05:09:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:09:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:09:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:09:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:09:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:09:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:09:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:09:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:09:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:09:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:09:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : 
from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "317c3401-a8ad-479f-9a0e-c3503597e474", "format": "json"}]: dispatch Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:317c3401-a8ad-479f-9a0e-c3503597e474, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:317c3401-a8ad-479f-9a0e-c3503597e474, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:23 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '317c3401-a8ad-479f-9a0e-c3503597e474' of type subvolume Nov 23 05:09:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:23.720+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '317c3401-a8ad-479f-9a0e-c3503597e474' of type subvolume Nov 23 05:09:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "317c3401-a8ad-479f-9a0e-c3503597e474", "force": true, "format": "json"}]: dispatch Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:317c3401-a8ad-479f-9a0e-c3503597e474, vol_name:cephfs) < "" Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/317c3401-a8ad-479f-9a0e-c3503597e474'' moved to trashcan Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:09:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:317c3401-a8ad-479f-9a0e-c3503597e474, vol_name:cephfs) < "" Nov 23 05:09:24 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:09:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:09:24 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:09:24 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 23 05:09:24 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:09:24 localhost ceph-mon[293353]: log_channel(audit) 
log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:09:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch Nov 23 05:09:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch Nov 23 05:09:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished Nov 23 05:09:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:09:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:09:24 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:09:24 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:09:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:24 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:09:24 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:24 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:24 localhost nova_compute[280939]: 2025-11-23 10:09:24.509 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 229 KiB/s wr, 21 op/s Nov 23 05:09:25 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:25 localhost ceph-mgr[286671]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:25 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:09:25 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:09:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:25 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "format": "json"}]: dispatch Nov 23 05:09:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:25 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:26 localhost nova_compute[280939]: 2025-11-23 10:09:26.129 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.150 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.150 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock 
"compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.151 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.151 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.151 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:09:27 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "561ab685-42f2-4920-b91d-2420296f93f0", "format": "json"}]: dispatch Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:561ab685-42f2-4920-b91d-2420296f93f0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:561ab685-42f2-4920-b91d-2420296f93f0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:27 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:27.185+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '561ab685-42f2-4920-b91d-2420296f93f0' of type subvolume Nov 23 05:09:27 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '561ab685-42f2-4920-b91d-2420296f93f0' of type subvolume Nov 23 05:09:27 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "561ab685-42f2-4920-b91d-2420296f93f0", "force": true, "format": "json"}]: dispatch Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0'' moved to trashcan Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:561ab685-42f2-4920-b91d-2420296f93f0, vol_name:cephfs) < "" 
Nov 23 05:09:27 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "r", "format": "json"}]: dispatch Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:09:27 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:09:27 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID alice bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:09:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:09:27 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:27 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:09:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:27 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 139 KiB/s wr, 12 op/s Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:09:27 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1279803891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.618 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:09:27 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "68ff3efc-3173-4e54-ae87-43e935ed3d0f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:68ff3efc-3173-4e54-ae87-43e935ed3d0f, vol_name:cephfs) < "" Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.843 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.856 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.858 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11408MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.858 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:09:27 localhost nova_compute[280939]: 2025-11-23 10:09:27.859 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/68ff3efc-3173-4e54-ae87-43e935ed3d0f/.meta.tmp' Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/68ff3efc-3173-4e54-ae87-43e935ed3d0f/.meta.tmp' to config 
b'/volumes/_nogroup/68ff3efc-3173-4e54-ae87-43e935ed3d0f/.meta' Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:68ff3efc-3173-4e54-ae87-43e935ed3d0f, vol_name:cephfs) < "" Nov 23 05:09:27 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "68ff3efc-3173-4e54-ae87-43e935ed3d0f", "format": "json"}]: dispatch Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:68ff3efc-3173-4e54-ae87-43e935ed3d0f, vol_name:cephfs) < "" Nov 23 05:09:27 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:68ff3efc-3173-4e54-ae87-43e935ed3d0f, vol_name:cephfs) < "" Nov 23 05:09:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:27 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.095 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.096 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.126 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing inventories for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 23 05:09:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.311 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Updating ProviderTree inventory for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.313 280943 DEBUG 
nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Updating inventory in ProviderTree for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.339 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing aggregate associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.381 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Refreshing trait associations for resource provider c90c5769-42ab-40e9-92fc-3d82b4e96052, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.404 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:09:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:09:28 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2081433797' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.857 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.864 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.883 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.885 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:09:28 localhost nova_compute[280939]: 2025-11-23 10:09:28.885 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:09:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "05a84a88-73a6-4e8e-8d02-fae54fb3ac71", "format": "json"}]: dispatch Nov 23 05:09:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:05a84a88-73a6-4e8e-8d02-fae54fb3ac71, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:05a84a88-73a6-4e8e-8d02-fae54fb3ac71, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:29 localhost nova_compute[280939]: 2025-11-23 10:09:29.553 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB 
avail; 852 B/s rd, 196 KiB/s wr, 17 op/s Nov 23 05:09:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:09:30 localhost podman[326723]: 2025-11-23 10:09:30.309614186 +0000 UTC m=+0.086588038 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 23 05:09:30 localhost podman[326723]: 2025-11-23 10:09:30.319395378 +0000 UTC m=+0.096369230 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:09:30 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:09:30 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:09:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 23 05:09:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:09:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 23 05:09:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:09:30 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:09:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:30 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 23 05:09:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:30 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:09:30 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:30 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, 
sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 135 KiB/s wr, 11 op/s Nov 23 05:09:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 23 05:09:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 23 05:09:31 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 23 05:09:32 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "36ea254b-85e6-42e7-9ff4-edd1917fb520", "format": "json"}]: dispatch Nov 23 05:09:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:36ea254b-85e6-42e7-9ff4-edd1917fb520, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:36ea254b-85e6-42e7-9ff4-edd1917fb520, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:32 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "68ff3efc-3173-4e54-ae87-43e935ed3d0f", "format": "json"}]: dispatch Nov 23 05:09:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:68ff3efc-3173-4e54-ae87-43e935ed3d0f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:68ff3efc-3173-4e54-ae87-43e935ed3d0f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:32 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:32.422+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '68ff3efc-3173-4e54-ae87-43e935ed3d0f' of type subvolume Nov 23 05:09:32 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '68ff3efc-3173-4e54-ae87-43e935ed3d0f' of type subvolume Nov 23 05:09:32 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "68ff3efc-3173-4e54-ae87-43e935ed3d0f", "force": true, "format": "json"}]: dispatch Nov 23 05:09:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:68ff3efc-3173-4e54-ae87-43e935ed3d0f, vol_name:cephfs) < "" Nov 23 05:09:32 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] 
subvolume path 'b'/volumes/_nogroup/68ff3efc-3173-4e54-ae87-43e935ed3d0f'' moved to trashcan Nov 23 05:09:32 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:09:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:68ff3efc-3173-4e54-ae87-43e935ed3d0f, vol_name:cephfs) < "" Nov 23 05:09:32 localhost nova_compute[280939]: 2025-11-23 10:09:32.866 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 135 KiB/s wr, 11 op/s Nov 23 05:09:33 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:09:33 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Nov 23 05:09:33 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Nov 23 05:09:33 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID bob with tenant 836ae7fa48674096a03624b407ebbbc9 Nov 23 05:09:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:09:34 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:34 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:34 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Nov 23 05:09:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:09:34 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:09:34 localhost nova_compute[280939]: 2025-11-23 10:09:34.583 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 188 KiB/s wr, 15 op/s Nov 23 05:09:35 localhost nova_compute[280939]: 2025-11-23 10:09:35.881 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:09:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "36ea254b-85e6-42e7-9ff4-edd1917fb520_39664d2d-8ed1-4c9f-9b99-02052c6cf2a5", "force": true, "format": "json"}]: dispatch Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:36ea254b-85e6-42e7-9ff4-edd1917fb520_39664d2d-8ed1-4c9f-9b99-02052c6cf2a5, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, 
prefix:fs subvolume snapshot rm, snap_name:36ea254b-85e6-42e7-9ff4-edd1917fb520_39664d2d-8ed1-4c9f-9b99-02052c6cf2a5, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "36ea254b-85e6-42e7-9ff4-edd1917fb520", "force": true, "format": "json"}]: dispatch Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:36ea254b-85e6-42e7-9ff4-edd1917fb520, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:36ea254b-85e6-42e7-9ff4-edd1917fb520, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:36 localhost openstack_network_exporter[241732]: ERROR 10:09:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:09:36 localhost openstack_network_exporter[241732]: ERROR 10:09:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:09:36 localhost openstack_network_exporter[241732]: ERROR 10:09:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:09:36 localhost openstack_network_exporter[241732]: ERROR 10:09:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:09:36 localhost openstack_network_exporter[241732]: Nov 23 05:09:36 localhost openstack_network_exporter[241732]: ERROR 10:09:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:09:36 localhost openstack_network_exporter[241732]: Nov 23 05:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
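
The 05:09:33-05:09:34 mgr and mon entries above show how an access grant is realized: "fs subvolume authorize" for auth_id bob makes the volumes module issue "auth get-or-create client.bob" with an mds cap pinned to the subvolume path, an osd cap pinned to the subvolume's RADOS namespace in the manila_data pool, and "mon allow r". The snippet below is only an illustrative sketch of the same grant driven from the ceph CLI; it assumes admin credentials on the host, the volume/subvolume/auth names are copied from the log, and the small run() helper is invented for brevity.

    #!/usr/bin/env python3
    # Sketch: grant an auth ID rw access to a CephFS subvolume, mirroring the
    # "fs subvolume authorize" -> "auth get-or-create" flow recorded above.
    import subprocess

    VOL = "cephfs"
    SUB = "c01bfd1f-c2a4-4505-b9fb-5784100d38f4"   # subvolume name from the log
    AUTH_ID = "bob"

    def run(*args):
        """Run one ceph CLI command and return its stdout (raises on failure)."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # One call does what the driver triggers here: the mgr creates or updates
    # client.bob with caps scoped to this subvolume's path and namespace.
    key = run("fs", "subvolume", "authorize", VOL, SUB, AUTH_ID,
              "--access_level=rw")
    print("cephx key for client.%s: %s" % (AUTH_ID, key.strip()))

    # The resulting entity should match the auth get-or-create entry logged
    # by ceph-mon above (mds path cap, osd namespace cap, mon allow r).
    print(run("auth", "get", "client." + AUTH_ID))
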
Nov 23 05:09:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "185a1676-923a-42e3-bf21-4993a35dc3a8", "format": "json"}]: dispatch Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:185a1676-923a-42e3-bf21-4993a35dc3a8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:185a1676-923a-42e3-bf21-4993a35dc3a8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:36 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:36.916+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '185a1676-923a-42e3-bf21-4993a35dc3a8' of type subvolume Nov 23 05:09:36 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '185a1676-923a-42e3-bf21-4993a35dc3a8' of type subvolume Nov 23 05:09:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "185a1676-923a-42e3-bf21-4993a35dc3a8", "force": true, "format": "json"}]: dispatch Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:185a1676-923a-42e3-bf21-4993a35dc3a8, vol_name:cephfs) < "" Nov 23 05:09:36 localhost podman[326744]: 2025-11-23 10:09:36.921115523 +0000 UTC m=+0.099946739 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/185a1676-923a-42e3-bf21-4993a35dc3a8'' moved to trashcan Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:09:36 localhost ceph-mgr[286671]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:185a1676-923a-42e3-bf21-4993a35dc3a8, vol_name:cephfs) < "" Nov 23 05:09:36 localhost podman[326744]: 2025-11-23 10:09:36.968246835 +0000 UTC m=+0.147078091 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 05:09:36 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:09:36 localhost podman[326742]: 2025-11-23 10:09:36.897516507 +0000 UTC m=+0.082682459 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 23 05:09:37 localhost podman[326743]: 2025-11-23 10:09:36.970431603 +0000 UTC m=+0.153279103 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 05:09:37 localhost podman[326742]: 2025-11-23 10:09:37.031529355 +0000 UTC m=+0.216695297 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 23 05:09:37 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
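
At 05:09:36 the mgr rejects "fs clone status" with (95) Operation not supported because the target is a plain subvolume rather than a clone, and the caller simply proceeds to "fs subvolume rm --force", which moves the subvolume directory to the trashcan and queues an async purge job. A minimal sketch of that check-then-delete pattern follows; it assumes the ceph CLI, reuses the subvolume name from the log, and keeps return-code handling deliberately simple.

    #!/usr/bin/env python3
    # Sketch: "clone status, then forced removal", as in the 05:09:36 entries.
    import subprocess

    VOL = "cephfs"
    SUB = "185a1676-923a-42e3-bf21-4993a35dc3a8"   # subvolume name from the log

    def ceph(*args):
        # check=False so the clone-status errno can be inspected by hand.
        return subprocess.run(["ceph", *args], capture_output=True, text=True)

    status = ceph("fs", "clone", "status", VOL, SUB)
    if status.returncode != 0:
        # Expected for subvolumes that were not created as clones:
        # "(95) Operation not supported ... of type subvolume".
        print("not a clone, skipping clone handling:", status.stderr.strip())

    # Forced removal: the mgr moves the subvolume to the trashcan and a
    # background job purges it later ("queuing job for volume 'cephfs'").
    ceph("fs", "subvolume", "rm", VOL, SUB, "--force").check_returncode()
    print("removal of %s requested" % SUB)
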
Nov 23 05:09:37 localhost podman[326743]: 2025-11-23 10:09:37.050769668 +0000 UTC m=+0.233617148 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:09:37 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:09:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 111 KiB/s wr, 9 op/s Nov 23 05:09:37 localhost nova_compute[280939]: 2025-11-23 10:09:37.890 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e257 do_prune osdmap full prune enabled Nov 23 05:09:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e258 e258: 6 total, 6 up, 6 in Nov 23 05:09:38 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e258: 6 total, 6 up, 6 in Nov 23 05:09:39 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6734401e-5573-4709-85e4-c69140f6c86e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, vol_name:cephfs) < "" Nov 23 05:09:39 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/.meta.tmp' Nov 23 05:09:39 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/.meta.tmp' to 
config b'/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/.meta' Nov 23 05:09:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, vol_name:cephfs) < "" Nov 23 05:09:39 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6734401e-5573-4709-85e4-c69140f6c86e", "format": "json"}]: dispatch Nov 23 05:09:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, vol_name:cephfs) < "" Nov 23 05:09:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, vol_name:cephfs) < "" Nov 23 05:09:39 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:39 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 613 B/s rd, 112 KiB/s wr, 9 op/s Nov 23 05:09:39 localhost nova_compute[280939]: 2025-11-23 10:09:39.612 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:39 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "d5e337f3-9c6d-416c-9ea4-ca3cff9c20ec", "format": "json"}]: dispatch Nov 23 05:09:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d5e337f3-9c6d-416c-9ea4-ca3cff9c20ec, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d5e337f3-9c6d-416c-9ea4-ca3cff9c20ec, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 613 B/s rd, 112 KiB/s wr, 9 op/s Nov 23 05:09:42 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6734401e-5573-4709-85e4-c69140f6c86e", "auth_id": "bob", "tenant_id": "836ae7fa48674096a03624b407ebbbc9", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:09:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, tenant_id:836ae7fa48674096a03624b407ebbbc9, 
vol_name:cephfs) < "" Nov 23 05:09:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Nov 23 05:09:42 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Nov 23 05:09:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6,allow rw path=/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/7aacd64e-4c3a-45c6-ac2d-a36c3c0cca66", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4,allow rw pool=manila_data namespace=fsvolumens_6734401e-5573-4709-85e4-c69140f6c86e"]} v 0) Nov 23 05:09:42 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6,allow rw path=/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/7aacd64e-4c3a-45c6-ac2d-a36c3c0cca66", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4,allow rw pool=manila_data namespace=fsvolumens_6734401e-5573-4709-85e4-c69140f6c86e"]} : dispatch Nov 23 05:09:42 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6,allow rw path=/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/7aacd64e-4c3a-45c6-ac2d-a36c3c0cca66", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4,allow rw pool=manila_data namespace=fsvolumens_6734401e-5573-4709-85e4-c69140f6c86e"]}]': finished Nov 23 05:09:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Nov 23 05:09:42 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Nov 23 05:09:42 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Nov 23 05:09:42 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6,allow rw path=/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/7aacd64e-4c3a-45c6-ac2d-a36c3c0cca66", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4,allow rw pool=manila_data namespace=fsvolumens_6734401e-5573-4709-85e4-c69140f6c86e"]} : dispatch Nov 23 05:09:42 localhost ceph-mgr[286671]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, tenant_id:836ae7fa48674096a03624b407ebbbc9, vol_name:cephfs) < "" Nov 23 05:09:42 localhost nova_compute[280939]: 2025-11-23 10:09:42.925 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e258 do_prune osdmap full prune enabled Nov 23 05:09:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e259 e259: 6 total, 6 up, 6 in Nov 23 05:09:43 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e259: 6 total, 6 up, 6 in Nov 23 05:09:43 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6,allow rw path=/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/7aacd64e-4c3a-45c6-ac2d-a36c3c0cca66", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4,allow rw pool=manila_data namespace=fsvolumens_6734401e-5573-4709-85e4-c69140f6c86e"]}]': finished Nov 23 05:09:43 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Nov 23 05:09:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 61 KiB/s wr, 4 op/s Nov 23 05:09:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:09:43 localhost podman[326805]: 2025-11-23 10:09:43.894045095 +0000 UTC m=+0.077148397 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm) Nov 23 05:09:43 localhost podman[326805]: 2025-11-23 10:09:43.932036346 +0000 UTC m=+0.115139648 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=) Nov 23 05:09:43 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
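
The create/getpath pairs at 05:09:39 and 05:09:49-05:09:50 show the other half of the share lifecycle: a 1 GiB subvolume is created with namespace isolation and mode 0755, and "fs subvolume getpath" returns the /volumes/_nogroup/<name>/<uuid> path that becomes the export location. The sketch below reproduces that pair with the ceph CLI; the flags follow the documented subvolume interface, and the size and names are taken from the log.

    #!/usr/bin/env python3
    # Sketch: create a namespace-isolated CephFS subvolume and resolve its path,
    # mirroring the 05:09:39 / 05:09:49 "fs subvolume create" + "getpath" calls.
    import subprocess

    VOL = "cephfs"
    SUB = "6734401e-5573-4709-85e4-c69140f6c86e"   # subvolume name from the log
    SIZE_BYTES = 1073741824                        # 1 GiB quota, as in the log

    def ceph(*args):
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # --namespace-isolated puts the subvolume's objects in their own RADOS
    # namespace, which is what lets the per-share osd caps seen earlier be
    # scoped to "namespace=fsvolumens_<sub_name>".
    ceph("fs", "subvolume", "create", VOL, SUB,
         "--size", str(SIZE_BYTES), "--namespace-isolated", "--mode", "0755")

    # The returned path (/volumes/_nogroup/<sub_name>/<uuid>) is what the
    # driver hands out as the share's export path.
    print("share path:", ceph("fs", "subvolume", "getpath", VOL, SUB).strip())
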
Nov 23 05:09:43 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "d5e337f3-9c6d-416c-9ea4-ca3cff9c20ec_3554bede-d6bc-42a0-880f-3aeb45030174", "force": true, "format": "json"}]: dispatch Nov 23 05:09:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d5e337f3-9c6d-416c-9ea4-ca3cff9c20ec_3554bede-d6bc-42a0-880f-3aeb45030174, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:43 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:09:43 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:09:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d5e337f3-9c6d-416c-9ea4-ca3cff9c20ec_3554bede-d6bc-42a0-880f-3aeb45030174, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:44 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "d5e337f3-9c6d-416c-9ea4-ca3cff9c20ec", "force": true, "format": "json"}]: dispatch Nov 23 05:09:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d5e337f3-9c6d-416c-9ea4-ca3cff9c20ec, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:44 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:09:44 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:09:44 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d5e337f3-9c6d-416c-9ea4-ca3cff9c20ec, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:44 localhost nova_compute[280939]: 2025-11-23 10:09:44.650 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 113 KiB/s wr, 9 op/s Nov 23 05:09:45 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6734401e-5573-4709-85e4-c69140f6c86e", "auth_id": "bob", "format": "json"}]: 
dispatch Nov 23 05:09:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, vol_name:cephfs) < "" Nov 23 05:09:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Nov 23 05:09:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Nov 23 05:09:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4"]} v 0) Nov 23 05:09:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4"]} : dispatch Nov 23 05:09:45 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4"]}]': finished Nov 23 05:09:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, vol_name:cephfs) < "" Nov 23 05:09:45 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6734401e-5573-4709-85e4-c69140f6c86e", "auth_id": "bob", "format": "json"}]: dispatch Nov 23 05:09:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, vol_name:cephfs) < "" Nov 23 05:09:45 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/7aacd64e-4c3a-45c6-ac2d-a36c3c0cca66 Nov 23 05:09:45 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, vol_name:cephfs) < "" Nov 23 05:09:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Nov 23 05:09:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' 
entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4"]} : dispatch Nov 23 05:09:46 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4"]}]': finished Nov 23 05:09:47 localhost podman[239764]: time="2025-11-23T10:09:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:09:47 localhost podman[239764]: @ - - [23/Nov/2025:10:09:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:09:47 localhost podman[239764]: @ - - [23/Nov/2025:10:09:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18771 "" "Go-http-client/1.1" Nov 23 05:09:47 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "44e59f1f-b497-47bb-85b6-28f51fe17e15", "format": "json"}]: dispatch Nov 23 05:09:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:44e59f1f-b497-47bb-85b6-28f51fe17e15, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:47 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:44e59f1f-b497-47bb-85b6-28f51fe17e15, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 555 B/s rd, 99 KiB/s wr, 7 op/s Nov 23 05:09:47 localhost nova_compute[280939]: 2025-11-23 10:09:47.964 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e259 do_prune osdmap full prune enabled Nov 23 05:09:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e260 e260: 6 total, 6 up, 6 in Nov 23 05:09:48 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e260: 6 total, 6 up, 6 in Nov 23 05:09:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "bob", "format": "json"}]: dispatch Nov 23 05:09:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs 
subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Nov 23 05:09:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Nov 23 05:09:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) Nov 23 05:09:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Nov 23 05:09:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Nov 23 05:09:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "auth_id": "bob", "format": "json"}]: dispatch Nov 23 05:09:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6 Nov 23 05:09:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:09:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 121 KiB/s wr, 9 op/s Nov 23 05:09:49 localhost nova_compute[280939]: 2025-11-23 10:09:49.679 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:09:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
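
The 05:09:45-05:09:49 entries are the teardown mirror of the grant above: "fs subvolume deauthorize" rewrites client.bob's caps with the remaining path/namespace pairs, or removes the entity entirely once the last grant is gone (the "auth rm client.bob" at 05:09:49), and "fs subvolume evict" then disconnects any CephFS clients still mounted with that auth ID under the subvolume's path. A short sketch of that revoke-and-evict pair, again assuming the ceph CLI and reusing names from the log:

    #!/usr/bin/env python3
    # Sketch: revoke a subvolume grant and evict live clients, mirroring the
    # "fs subvolume deauthorize" / "fs subvolume evict" pair recorded above.
    import subprocess

    VOL = "cephfs"
    SUB = "6734401e-5573-4709-85e4-c69140f6c86e"   # subvolume name from the log
    AUTH_ID = "bob"

    def ceph(*args):
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # Drop bob's mds/osd caps for this subvolume; the mgr either trims the
    # caps string or removes client.bob outright if nothing else remains.
    ceph("fs", "subvolume", "deauthorize", VOL, SUB, AUTH_ID)

    # Kick any sessions still using that auth ID under the subvolume path,
    # so the revocation takes effect without waiting for clients to unmount.
    ceph("fs", "subvolume", "evict", VOL, SUB, AUTH_ID)
    print("client.%s deauthorized and evicted from %s" % (AUTH_ID, SUB))
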
Nov 23 05:09:49 localhost podman[326845]: 2025-11-23 10:09:49.867233113 +0000 UTC m=+0.093465120 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm) Nov 23 05:09:49 localhost podman[326845]: 2025-11-23 10:09:49.906336287 +0000 UTC m=+0.132568274 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Nov 23 05:09:49 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:09:49 localhost podman[326847]: 2025-11-23 10:09:49.925833157 +0000 UTC m=+0.149982260 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 05:09:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "60134a7e-4ba8-46c5-95a6-e7b9f66ce444", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:60134a7e-4ba8-46c5-95a6-e7b9f66ce444, vol_name:cephfs) < "" Nov 23 05:09:49 localhost podman[326847]: 2025-11-23 10:09:49.938485928 +0000 UTC m=+0.162635061 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:09:49 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
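
The interleaved systemd/podman lines ("Started /usr/bin/podman healthcheck run <id>" ... "exec_died" ... "Deactivated successfully") are transient units that periodically run each container's configured healthcheck; health_status=healthy in the podman event means the probe exited 0. The snippet below is only an illustrative way to run one such probe by hand; the container name is an assumption (any container from the log that defines a healthcheck would do), and exit-code interpretation is kept coarse.

    #!/usr/bin/env python3
    # Sketch: run a container healthcheck by hand, the same command the
    # transient systemd units above execute on a timer.
    import subprocess
    import sys

    CONTAINER = "ovn_controller"   # assumption: any container with a healthcheck

    probe = subprocess.run(["podman", "healthcheck", "run", CONTAINER],
                           capture_output=True, text=True)
    if probe.returncode == 0:
        # Corresponds to the health_status=healthy events podman journals above.
        print(CONTAINER + ": healthy")
    else:
        # Non-zero covers both a failing healthcheck and podman-level errors.
        print("%s: unhealthy or probe error (rc=%d): %s"
              % (CONTAINER, probe.returncode, probe.stderr.strip()),
              file=sys.stderr)
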
Nov 23 05:09:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/60134a7e-4ba8-46c5-95a6-e7b9f66ce444/.meta.tmp' Nov 23 05:09:50 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/60134a7e-4ba8-46c5-95a6-e7b9f66ce444/.meta.tmp' to config b'/volumes/_nogroup/60134a7e-4ba8-46c5-95a6-e7b9f66ce444/.meta' Nov 23 05:09:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:60134a7e-4ba8-46c5-95a6-e7b9f66ce444, vol_name:cephfs) < "" Nov 23 05:09:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "60134a7e-4ba8-46c5-95a6-e7b9f66ce444", "format": "json"}]: dispatch Nov 23 05:09:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:60134a7e-4ba8-46c5-95a6-e7b9f66ce444, vol_name:cephfs) < "" Nov 23 05:09:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:60134a7e-4ba8-46c5-95a6-e7b9f66ce444, vol_name:cephfs) < "" Nov 23 05:09:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:50 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:50 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Nov 23 05:09:50 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Nov 23 05:09:50 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Nov 23 05:09:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:09:50 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:09:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:09:50 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:09:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:09:50 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:09:50 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev 72da135c-5c9d-4118-86f1-2393a96b44a6 (Updating node-proxy deployment 
(+3 -> 3)) Nov 23 05:09:50 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev 72da135c-5c9d-4118-86f1-2393a96b44a6 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:09:50 localhost ceph-mgr[286671]: [progress INFO root] Completed event 72da135c-5c9d-4118-86f1-2393a96b44a6 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:09:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:09:50 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:09:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "44e59f1f-b497-47bb-85b6-28f51fe17e15_6955baeb-e48a-4415-962c-c531e9f0e365", "force": true, "format": "json"}]: dispatch Nov 23 05:09:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44e59f1f-b497-47bb-85b6-28f51fe17e15_6955baeb-e48a-4415-962c-c531e9f0e365, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:09:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:09:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44e59f1f-b497-47bb-85b6-28f51fe17e15_6955baeb-e48a-4415-962c-c531e9f0e365, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:51 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "44e59f1f-b497-47bb-85b6-28f51fe17e15", "force": true, "format": "json"}]: dispatch Nov 23 05:09:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44e59f1f-b497-47bb-85b6-28f51fe17e15, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:09:51 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:09:51 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44e59f1f-b497-47bb-85b6-28f51fe17e15, 
sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:51 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:09:51 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:09:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 370 B/s rd, 117 KiB/s wr, 9 op/s Nov 23 05:09:53 localhost nova_compute[280939]: 2025-11-23 10:09:53.006 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6734401e-5573-4709-85e4-c69140f6c86e", "format": "json"}]: dispatch Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6734401e-5573-4709-85e4-c69140f6c86e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6734401e-5573-4709-85e4-c69140f6c86e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:53 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:53.097+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6734401e-5573-4709-85e4-c69140f6c86e' of type subvolume Nov 23 05:09:53 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6734401e-5573-4709-85e4-c69140f6c86e' of type subvolume Nov 23 05:09:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6734401e-5573-4709-85e4-c69140f6c86e", "force": true, "format": "json"}]: dispatch Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, vol_name:cephfs) < "" Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e'' moved to trashcan Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6734401e-5573-4709-85e4-c69140f6c86e, vol_name:cephfs) < "" Nov 23 05:09:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e260 do_prune osdmap full prune enabled Nov 23 05:09:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e261 e261: 6 total, 6 up, 6 in Nov 23 05:09:53 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e261: 6 total, 6 up, 6 in Nov 23 05:09:53 
localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:09:53 localhost nova_compute[280939]: 2025-11-23 10:09:53.514 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:53 localhost ovn_metadata_agent[159410]: 2025-11-23 10:09:53.513 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:09:53 localhost ovn_metadata_agent[159410]: 2025-11-23 10:09:53.514 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:09:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 68 KiB/s wr, 5 op/s Nov 23 05:09:53 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:09:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:09:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:09:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "50ef3b94-fa58-4aa7-8f5f-5b4e06f993be", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:50ef3b94-fa58-4aa7-8f5f-5b4e06f993be, vol_name:cephfs) < "" Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/50ef3b94-fa58-4aa7-8f5f-5b4e06f993be/.meta.tmp' Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/50ef3b94-fa58-4aa7-8f5f-5b4e06f993be/.meta.tmp' to config b'/volumes/_nogroup/50ef3b94-fa58-4aa7-8f5f-5b4e06f993be/.meta' Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:50ef3b94-fa58-4aa7-8f5f-5b4e06f993be, vol_name:cephfs) < "" Nov 23 05:09:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "50ef3b94-fa58-4aa7-8f5f-5b4e06f993be", "format": "json"}]: dispatch Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:50ef3b94-fa58-4aa7-8f5f-5b4e06f993be, vol_name:cephfs) < "" Nov 23 05:09:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:50ef3b94-fa58-4aa7-8f5f-5b4e06f993be, vol_name:cephfs) < "" Nov 23 05:09:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:53 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:54 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "3d69f3d9-93bd-4c9d-9fd9-2da996580ba0", "format": "json"}]: dispatch Nov 23 05:09:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3d69f3d9-93bd-4c9d-9fd9-2da996580ba0, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:54 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3d69f3d9-93bd-4c9d-9fd9-2da996580ba0, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:09:54 localhost nova_compute[280939]: 2025-11-23 10:09:54.714 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 149 KiB/s wr, 11 op/s Nov 23 05:09:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "format": "json"}]: dispatch Nov 23 05:09:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:09:56 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c01bfd1f-c2a4-4505-b9fb-5784100d38f4' of type subvolume Nov 23 05:09:56 localhost 
ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "force": true, "format": "json"}]: dispatch Nov 23 05:09:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4'' moved to trashcan Nov 23 05:09:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:09:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c01bfd1f-c2a4-4505-b9fb-5784100d38f4, vol_name:cephfs) < "" Nov 23 05:09:56 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:09:56.444+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c01bfd1f-c2a4-4505-b9fb-5784100d38f4' of type subvolume Nov 23 05:09:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "128bea3f-de1e-4023-85ec-3b82c2e536a6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:09:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:128bea3f-de1e-4023-85ec-3b82c2e536a6, vol_name:cephfs) < "" Nov 23 05:09:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/128bea3f-de1e-4023-85ec-3b82c2e536a6/.meta.tmp' Nov 23 05:09:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/128bea3f-de1e-4023-85ec-3b82c2e536a6/.meta.tmp' to config b'/volumes/_nogroup/128bea3f-de1e-4023-85ec-3b82c2e536a6/.meta' Nov 23 05:09:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:128bea3f-de1e-4023-85ec-3b82c2e536a6, vol_name:cephfs) < "" Nov 23 05:09:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "128bea3f-de1e-4023-85ec-3b82c2e536a6", "format": "json"}]: dispatch Nov 23 05:09:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:128bea3f-de1e-4023-85ec-3b82c2e536a6, vol_name:cephfs) < "" Nov 23 05:09:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:128bea3f-de1e-4023-85ec-3b82c2e536a6, vol_name:cephfs) < "" Nov 23 05:09:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:09:57 localhost ceph-mon[293353]: 
log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:09:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 567 B/s rd, 133 KiB/s wr, 10 op/s Nov 23 05:09:58 localhost nova_compute[280939]: 2025-11-23 10:09:58.030 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:09:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:09:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e261 do_prune osdmap full prune enabled Nov 23 05:09:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e262 e262: 6 total, 6 up, 6 in Nov 23 05:09:58 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e262: 6 total, 6 up, 6 in Nov 23 05:09:59 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "3d69f3d9-93bd-4c9d-9fd9-2da996580ba0_3667dc95-9604-47c1-b81e-3a48d1dffa79", "force": true, "format": "json"}]: dispatch Nov 23 05:09:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3d69f3d9-93bd-4c9d-9fd9-2da996580ba0_3667dc95-9604-47c1-b81e-3a48d1dffa79, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:59 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:09:59 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:09:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3d69f3d9-93bd-4c9d-9fd9-2da996580ba0_3667dc95-9604-47c1-b81e-3a48d1dffa79, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:59 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "3d69f3d9-93bd-4c9d-9fd9-2da996580ba0", "force": true, "format": "json"}]: dispatch Nov 23 05:09:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3d69f3d9-93bd-4c9d-9fd9-2da996580ba0, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:59 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:09:59 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:09:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3d69f3d9-93bd-4c9d-9fd9-2da996580ba0, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:09:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 155 KiB/s wr, 11 op/s Nov 23 05:09:59 localhost nova_compute[280939]: 2025-11-23 10:09:59.750 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:00 localhost ceph-mon[293353]: log_channel(cluster) log [INF] : overall HEALTH_OK Nov 23 05:10:00 localhost ceph-mon[293353]: overall HEALTH_OK Nov 23 05:10:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:10:00 localhost podman[326957]: 2025-11-23 10:10:00.890711746 +0000 UTC m=+0.081127340 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Nov 23 05:10:00 localhost podman[326957]: 2025-11-23 10:10:00.924360753 +0000 UTC m=+0.114776327 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 23 05:10:00 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:10:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0f05cf61-7702-4ec2-9212-d2c2293260bd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:10:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0f05cf61-7702-4ec2-9212-d2c2293260bd, vol_name:cephfs) < "" Nov 23 05:10:01 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0f05cf61-7702-4ec2-9212-d2c2293260bd/.meta.tmp' Nov 23 05:10:01 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0f05cf61-7702-4ec2-9212-d2c2293260bd/.meta.tmp' to config b'/volumes/_nogroup/0f05cf61-7702-4ec2-9212-d2c2293260bd/.meta' Nov 23 05:10:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0f05cf61-7702-4ec2-9212-d2c2293260bd, vol_name:cephfs) < "" Nov 23 05:10:01 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0f05cf61-7702-4ec2-9212-d2c2293260bd", "format": "json"}]: dispatch Nov 23 05:10:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0f05cf61-7702-4ec2-9212-d2c2293260bd, vol_name:cephfs) < "" Nov 23 05:10:01 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:0f05cf61-7702-4ec2-9212-d2c2293260bd, vol_name:cephfs) < "" Nov 23 05:10:01 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:10:01 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:10:01 localhost ovn_metadata_agent[159410]: 2025-11-23 10:10:01.516 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:10:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 865 B/s rd, 150 KiB/s wr, 10 op/s Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0. Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.433644) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67 Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602433752, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2519, "num_deletes": 256, "total_data_size": 2057175, "memory_usage": 2102064, "flush_reason": "Manual Compaction"} Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602445514, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 1997751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36338, "largest_seqno": 38856, "table_properties": {"data_size": 1987130, "index_size": 6490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27052, "raw_average_key_size": 22, "raw_value_size": 1963988, "raw_average_value_size": 1608, "num_data_blocks": 281, "num_entries": 1221, "num_filter_entries": 1221, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892480, "oldest_key_time": 1763892480, "file_creation_time": 1763892602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", 
"db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}} Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 11887 microseconds, and 5833 cpu microseconds. Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.445566) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 1997751 bytes OK Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.445589) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.448750) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.448770) EVENT_LOG_v1 {"time_micros": 1763892602448764, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.448792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2045718, prev total WAL file size 2045718, number of live WAL files 2. Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.449561) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. 
'7061786F73003133333034' seq:0, type:0; will stop at (end) Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(1950KB)], [66(17MB)] Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602449638, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 20453017, "oldest_snapshot_seqno": -1} Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 14367 keys, 18624647 bytes, temperature: kUnknown Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602543064, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 18624647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18540026, "index_size": 47592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35973, "raw_key_size": 385516, "raw_average_key_size": 26, "raw_value_size": 18293179, "raw_average_value_size": 1273, "num_data_blocks": 1778, "num_entries": 14367, "num_filter_entries": 14367, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}} Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.543606) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 18624647 bytes Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.545565) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.7 rd, 199.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 17.6 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(19.6) write-amplify(9.3) OK, records in: 14900, records dropped: 533 output_compression: NoCompression Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.545595) EVENT_LOG_v1 {"time_micros": 1763892602545582, "job": 40, "event": "compaction_finished", "compaction_time_micros": 93539, "compaction_time_cpu_micros": 55772, "output_level": 6, "num_output_files": 1, "total_output_size": 18624647, "num_input_records": 14900, "num_output_records": 14367, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602546048, "job": 40, "event": "table_file_deletion", "file_number": 68} Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602548699, "job": 40, "event": "table_file_deletion", "file_number": 66} Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.449465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.548796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.548804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.548808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.548811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:10:02 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:10:02.548814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:10:03 localhost nova_compute[280939]: 2025-11-23 10:10:03.061 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 
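The RocksDB lines above are the monitor's store compaction: job 39 flushes a ~2 MB memtable into L0 table #68, job 40 compacts it with L6 table #66 into table #69, and the summary reports write-amplify(9.3). That figure can be recomputed from the EVENT_LOG_v1 JSON records themselves; a small sketch follows, assuming the journal text is fed in one entry per line (as journalctl prints it) and using only the field names visible above.

    # Sketch: extract RocksDB EVENT_LOG_v1 records from ceph-mon journal text
    # on stdin and recompute each compaction's write amplification.
    import json
    import re
    import sys

    MARKER = re.compile(r"EVENT_LOG_v1 (\{.*\})")  # JSON payload after the marker

    events = []
    for line in sys.stdin:           # assumes one journal entry per line
        m = MARKER.search(line)
        if m:
            events.append(json.loads(m.group(1)))

    # Size of every table written, keyed by file number (flush and compaction).
    sizes = {e["file_number"]: e["file_size"] for e in events
             if e.get("event") == "table_file_creation"}
    starts = {e["job"]: e for e in events if e.get("event") == "compaction_started"}

    for e in events:
        if e.get("event") != "compaction_finished":
            continue
        started = starts.get(e["job"])
        if not started:
            continue
        # Bytes ingested from L0 vs. bytes written out -- matches the
        # write-amplify(9.3) figure RocksDB logs for job 40 above.
        l0_in = sum(sizes.get(n, 0) for n in started.get("files_L0", []))
        if l0_in:
            print("job", e["job"], "write amplification ~",
                  round(e["total_output_size"] / l0_in, 1))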
Nov 23 05:10:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e262 do_prune osdmap full prune enabled Nov 23 05:10:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e263 e263: 6 total, 6 up, 6 in Nov 23 05:10:03 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e263: 6 total, 6 up, 6 in Nov 23 05:10:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "ff791c0e-f88c-4daa-a7d3-3098e61f94ca", "format": "json"}]: dispatch Nov 23 05:10:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ff791c0e-f88c-4daa-a7d3-3098e61f94ca, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 74 KiB/s wr, 5 op/s Nov 23 05:10:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ff791c0e-f88c-4daa-a7d3-3098e61f94ca, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:04 localhost nova_compute[280939]: 2025-11-23 10:10:04.792 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:05 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0f05cf61-7702-4ec2-9212-d2c2293260bd", "format": "json"}]: dispatch Nov 23 05:10:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0f05cf61-7702-4ec2-9212-d2c2293260bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0f05cf61-7702-4ec2-9212-d2c2293260bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:05 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:05.128+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0f05cf61-7702-4ec2-9212-d2c2293260bd' of type subvolume Nov 23 05:10:05 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0f05cf61-7702-4ec2-9212-d2c2293260bd' of type subvolume Nov 23 05:10:05 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0f05cf61-7702-4ec2-9212-d2c2293260bd", "force": true, "format": "json"}]: dispatch Nov 23 05:10:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0f05cf61-7702-4ec2-9212-d2c2293260bd, vol_name:cephfs) < "" Nov 23 05:10:05 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0f05cf61-7702-4ec2-9212-d2c2293260bd'' moved to trashcan Nov 23 05:10:05 
localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:10:05 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0f05cf61-7702-4ec2-9212-d2c2293260bd, vol_name:cephfs) < "" Nov 23 05:10:05 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. Nov 23 05:10:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 112 KiB/s wr, 7 op/s Nov 23 05:10:06 localhost openstack_network_exporter[241732]: ERROR 10:10:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:10:06 localhost openstack_network_exporter[241732]: ERROR 10:10:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:10:06 localhost openstack_network_exporter[241732]: ERROR 10:10:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:10:06 localhost openstack_network_exporter[241732]: ERROR 10:10:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:10:06 localhost openstack_network_exporter[241732]: Nov 23 05:10:06 localhost openstack_network_exporter[241732]: ERROR 10:10:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:10:06 localhost openstack_network_exporter[241732]: Nov 23 05:10:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 662 B/s rd, 97 KiB/s wr, 6 op/s Nov 23 05:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
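Each "Started /usr/bin/podman healthcheck run <container id>" unit above is a transient systemd service spawned by podman's healthcheck timer to execute the container's configured test (the /openstack/healthcheck scripts listed in config_data); the matching "Deactivated successfully." entries mark the probe finishing. The same probe can be run by hand; a sketch follows, where the container names are taken from this log and the zero/non-zero exit convention is the standard behaviour of `podman healthcheck run`.

    # Sketch: run the same probes the transient healthcheck units above run.
    # `podman healthcheck run NAME` exits 0 when the container's configured
    # healthcheck (here /openstack/healthcheck) reports healthy, non-zero otherwise.
    import subprocess

    CONTAINERS = ["multipathd", "ovn_controller", "node_exporter"]  # names from the log

    for name in CONTAINERS:
        probe = subprocess.run(["podman", "healthcheck", "run", name],
                               capture_output=True, text=True)
        state = "healthy" if probe.returncode == 0 else "unhealthy"
        print(f"{name}: exit={probe.returncode} -> {state}")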
Nov 23 05:10:07 localhost podman[326975]: 2025-11-23 10:10:07.900403252 +0000 UTC m=+0.089597004 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:10:07 localhost podman[326975]: 2025-11-23 10:10:07.938347851 +0000 UTC m=+0.127541573 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd) Nov 23 05:10:07 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:10:07 localhost podman[326977]: 2025-11-23 10:10:07.955619307 +0000 UTC m=+0.138677565 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 05:10:08 localhost podman[326976]: 2025-11-23 10:10:08.029238966 +0000 UTC m=+0.214607328 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:10:08 localhost podman[326976]: 2025-11-23 10:10:08.042328809 +0000 UTC m=+0.227697211 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, 
name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:10:08 localhost podman[326977]: 2025-11-23 10:10:08.048374571 +0000 UTC m=+0.231432829 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:10:08 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:10:08 localhost nova_compute[280939]: 2025-11-23 10:10:08.062 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:08 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
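The recurring "reply (95) Operation not supported ... 'clone-status' is not allowed on subvolume ... of type subvolume" messages are expected rather than failures: the client probes `fs clone status` on every subvolume before removing it, a plain (non-clone) subvolume answers EOPNOTSUPP, and the removal then proceeds with `fs subvolume rm --force`, as the surrounding audit entries show. A hedged sketch of that probe-then-delete pattern follows; the assumption that the ceph CLI surfaces the mgr's -EOPNOTSUPP reply as exit status 95 should be verified against the local tooling.

    # Sketch of the probe-then-delete pattern visible in the audit log: ask for
    # clone status first; errno 95 from a plain subvolume simply means "not a
    # clone", after which the removal goes ahead with --force.
    import errno
    import subprocess

    def run_ceph(*args):
        # Thin wrapper over the ceph CLI; caller inspects exit status and stderr.
        return subprocess.run(["ceph", *args], capture_output=True, text=True)

    def delete_subvolume(vol, sub):
        status = run_ceph("fs", "clone", "status", vol, sub, "--format", "json")
        if status.returncode == 0:
            # Subvolume really is a clone: a real driver would wait or cancel here.
            print(f"{sub}: clone state {status.stdout.strip()}")
            return
        # Assumption: the CLI maps the mgr's -EOPNOTSUPP reply to exit status 95.
        if status.returncode != errno.EOPNOTSUPP:
            raise RuntimeError(status.stderr.strip())
        run_ceph("fs", "subvolume", "rm", vol, sub, "--force")

    delete_subvolume("cephfs", "128bea3f-de1e-4023-85ec-3b82c2e536a6")  # UUID from the log above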
Nov 23 05:10:08 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "73463559-de38-4e65-91fe-e256e1993ef1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/.meta.tmp' Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/.meta.tmp' to config b'/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/.meta' Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "73463559-de38-4e65-91fe-e256e1993ef1", "format": "json"}]: dispatch Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:10:08 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:10:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:10:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e263 do_prune osdmap full prune enabled Nov 23 05:10:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e264 e264: 6 total, 6 up, 6 in Nov 23 05:10:08 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e264: 6 total, 6 up, 6 in Nov 23 05:10:08 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "128bea3f-de1e-4023-85ec-3b82c2e536a6", "format": "json"}]: dispatch Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:128bea3f-de1e-4023-85ec-3b82c2e536a6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:128bea3f-de1e-4023-85ec-3b82c2e536a6, format:json, 
prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:08.565+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '128bea3f-de1e-4023-85ec-3b82c2e536a6' of type subvolume Nov 23 05:10:08 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '128bea3f-de1e-4023-85ec-3b82c2e536a6' of type subvolume Nov 23 05:10:08 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "128bea3f-de1e-4023-85ec-3b82c2e536a6", "force": true, "format": "json"}]: dispatch Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:128bea3f-de1e-4023-85ec-3b82c2e536a6, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/128bea3f-de1e-4023-85ec-3b82c2e536a6'' moved to trashcan Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:128bea3f-de1e-4023-85ec-3b82c2e536a6, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "ff791c0e-f88c-4daa-a7d3-3098e61f94ca_f78b03f5-7a9b-4515-9034-ad766e103e0a", "force": true, "format": "json"}]: dispatch Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ff791c0e-f88c-4daa-a7d3-3098e61f94ca_f78b03f5-7a9b-4515-9034-ad766e103e0a, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ff791c0e-f88c-4daa-a7d3-3098e61f94ca_f78b03f5-7a9b-4515-9034-ad766e103e0a, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "ff791c0e-f88c-4daa-a7d3-3098e61f94ca", "force": true, "format": "json"}]: dispatch Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, 
prefix:fs subvolume snapshot rm, snap_name:ff791c0e-f88c-4daa-a7d3-3098e61f94ca, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:10:08 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ff791c0e-f88c-4daa-a7d3-3098e61f94ca, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 107 KiB/s wr, 6 op/s Nov 23 05:10:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:10:09.750 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:10:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:10:09.751 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:10:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:10:09.751 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:10:09 localhost nova_compute[280939]: 2025-11-23 10:10:09.794 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:11 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "346654a1-8043-457a-94b6-3b076c21a1d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:10:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, vol_name:cephfs) < "" Nov 23 05:10:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 500 B/s rd, 105 KiB/s wr, 6 op/s Nov 23 05:10:11 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/.meta.tmp' Nov 23 05:10:11 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/.meta.tmp' to config 
b'/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/.meta' Nov 23 05:10:11 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "346654a1-8043-457a-94b6-3b076c21a1d5", "format": "json"}]: dispatch Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:10:12 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:10:12 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 05:10:12 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5762 writes, 38K keys, 5762 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.05 MB/s#012Cumulative WAL: 5762 writes, 5762 syncs, 1.00 writes per sync, written: 0.06 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2514 writes, 12K keys, 2514 commit groups, 1.0 writes per commit group, ingest: 12.02 MB, 0.02 MB/s#012Interval WAL: 2514 writes, 2514 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 180.6 0.26 0.12 20 0.013 0 0 0.0 0.0#012 L6 1/0 17.76 MB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 6.6 221.2 202.0 1.53 0.86 19 0.081 239K 9858 0.0 0.0#012 Sum 1/0 17.76 MB 0.0 0.3 0.0 0.3 0.3 0.1 0.0 7.6 189.0 198.9 1.79 0.98 39 0.046 239K 9858 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 11.1 190.3 192.5 0.74 0.42 16 0.046 109K 4305 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 0.0 221.2 202.0 1.53 0.86 19 0.081 239K 9858 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 182.7 0.26 0.12 19 0.014 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.046, interval 0.013#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.35 GB write, 0.30 MB/s write, 0.33 GB read, 0.28 MB/s read, 1.8 seconds#012Interval compaction: 0.14 GB write, 0.24 MB/s write, 0.14 GB read, 0.24 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556054b3b350#2 capacity: 304.00 MB usage: 61.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.00044 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4250,59.66 MB,19.6246%) FilterBlock(39,686.80 KB,0.220625%) IndexBlock(39,897.95 KB,0.288456%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Nov 23 05:10:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "50ef3b94-fa58-4aa7-8f5f-5b4e06f993be", "format": "json"}]: dispatch Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:50ef3b94-fa58-4aa7-8f5f-5b4e06f993be, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:50ef3b94-fa58-4aa7-8f5f-5b4e06f993be, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:12.085+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '50ef3b94-fa58-4aa7-8f5f-5b4e06f993be' of type subvolume Nov 23 05:10:12 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '50ef3b94-fa58-4aa7-8f5f-5b4e06f993be' of type subvolume Nov 23 05:10:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "50ef3b94-fa58-4aa7-8f5f-5b4e06f993be", "force": true, "format": "json"}]: dispatch Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:50ef3b94-fa58-4aa7-8f5f-5b4e06f993be, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/50ef3b94-fa58-4aa7-8f5f-5b4e06f993be'' moved to trashcan Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, 
format:json, prefix:fs subvolume rm, sub_name:50ef3b94-fa58-4aa7-8f5f-5b4e06f993be, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "97e78d12-95be-4f55-b739-9cd5b5c20fd4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta.tmp' Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta.tmp' to config b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta' Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "97e78d12-95be-4f55-b739-9cd5b5c20fd4", "format": "json"}]: dispatch Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:10:12 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:10:12 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:10:13 localhost nova_compute[280939]: 2025-11-23 10:10:13.105 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:10:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 86 KiB/s wr, 5 op/s Nov 23 05:10:13 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "05a84a88-73a6-4e8e-8d02-fae54fb3ac71_e44acd9b-b075-41e3-8a7b-1ec7c3a6ac20", "force": true, "format": "json"}]: dispatch Nov 23 05:10:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05a84a88-73a6-4e8e-8d02-fae54fb3ac71_e44acd9b-b075-41e3-8a7b-1ec7c3a6ac20, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:13 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:10:13 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:10:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05a84a88-73a6-4e8e-8d02-fae54fb3ac71_e44acd9b-b075-41e3-8a7b-1ec7c3a6ac20, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:13 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "snap_name": "05a84a88-73a6-4e8e-8d02-fae54fb3ac71", "force": true, "format": "json"}]: dispatch Nov 23 05:10:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05a84a88-73a6-4e8e-8d02-fae54fb3ac71, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:13 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' Nov 23 05:10:13 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta.tmp' to config b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf/.meta' Nov 23 05:10:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05a84a88-73a6-4e8e-8d02-fae54fb3ac71, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e264 do_prune osdmap full prune enabled Nov 23 05:10:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e265 e265: 6 total, 6 up, 6 in Nov 23 05:10:13 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e265: 6 total, 6 up, 6 in Nov 23 05:10:14 localhost nova_compute[280939]: 2025-11-23 10:10:14.795 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
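The "Started /usr/bin/podman healthcheck run <id>" lines are transient systemd units driving each container's healthcheck; podman healthcheck run executes the check configured for the container (the /openstack/healthcheck scripts mounted from /var/lib/openstack/healthchecks) and exits 0 when it passes, which systemd then reports as "Deactivated successfully". A minimal Python sketch of the same probe, run outside systemd, might look like this; only the container name is taken from the log.

import subprocess

def probe(container: str) -> bool:
    """Run the container's configured healthcheck once.

    `podman healthcheck run` executes the check defined for the
    container and exits 0 when it passes, non-zero when it fails.
    """
    result = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # Container name taken from the health_status entries above.
    print("ovn_controller healthy:", probe("ovn_controller"))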
Nov 23 05:10:14 localhost podman[327040]: 2025-11-23 10:10:14.894632769 +0000 UTC m=+0.080945510 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64) Nov 23 05:10:14 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "60134a7e-4ba8-46c5-95a6-e7b9f66ce444", "format": "json"}]: dispatch Nov 23 05:10:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:60134a7e-4ba8-46c5-95a6-e7b9f66ce444, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:14 localhost podman[327040]: 2025-11-23 10:10:14.93261099 +0000 UTC m=+0.118923721 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Nov 23 05:10:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:60134a7e-4ba8-46c5-95a6-e7b9f66ce444, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:14 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:14.935+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '60134a7e-4ba8-46c5-95a6-e7b9f66ce444' of type subvolume Nov 23 05:10:14 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '60134a7e-4ba8-46c5-95a6-e7b9f66ce444' of type subvolume Nov 23 05:10:14 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "60134a7e-4ba8-46c5-95a6-e7b9f66ce444", "force": true, "format": "json"}]: dispatch Nov 23 05:10:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:60134a7e-4ba8-46c5-95a6-e7b9f66ce444, vol_name:cephfs) < "" Nov 23 05:10:14 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 05:10:14 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/60134a7e-4ba8-46c5-95a6-e7b9f66ce444'' moved to trashcan Nov 23 05:10:14 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:10:14 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:60134a7e-4ba8-46c5-95a6-e7b9f66ce444, vol_name:cephfs) < "" Nov 23 05:10:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "346654a1-8043-457a-94b6-3b076c21a1d5", "auth_id": "Joe", "tenant_id": "00b5d8dd3ef84698af3b5d7d8243bc4a", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:10:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, tenant_id:00b5d8dd3ef84698af3b5d7d8243bc4a, vol_name:cephfs) < "" Nov 23 05:10:15 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Nov 23 05:10:15 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 23 05:10:15 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID Joe with tenant 00b5d8dd3ef84698af3b5d7d8243bc4a Nov 23 05:10:15 localhost nova_compute[280939]: 2025-11-23 10:10:15.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:10:15 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/cb129bf7-168e-4c36-9f32-82b35d885736", "osd", "allow rw pool=manila_data namespace=fsvolumens_346654a1-8043-457a-94b6-3b076c21a1d5", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:10:15 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/cb129bf7-168e-4c36-9f32-82b35d885736", "osd", "allow rw pool=manila_data namespace=fsvolumens_346654a1-8043-457a-94b6-3b076c21a1d5", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:10:15 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/cb129bf7-168e-4c36-9f32-82b35d885736", "osd", "allow rw pool=manila_data namespace=fsvolumens_346654a1-8043-457a-94b6-3b076c21a1d5", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:10:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, tenant_id:00b5d8dd3ef84698af3b5d7d8243bc4a, vol_name:cephfs) < "" Nov 23 05:10:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "97e78d12-95be-4f55-b739-9cd5b5c20fd4", "snap_name": "4827a0db-6fdf-413e-96d9-4e5308156408", "format": "json"}]: dispatch Nov 23 05:10:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4827a0db-6fdf-413e-96d9-4e5308156408, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:10:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4827a0db-6fdf-413e-96d9-4e5308156408, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:10:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 150 KiB/s wr, 9 op/s Nov 23 05:10:15 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 23 05:10:15 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/cb129bf7-168e-4c36-9f32-82b35d885736", "osd", "allow rw pool=manila_data namespace=fsvolumens_346654a1-8043-457a-94b6-3b076c21a1d5", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:10:15 localhost ceph-mon[293353]: 
from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/cb129bf7-168e-4c36-9f32-82b35d885736", "osd", "allow rw pool=manila_data namespace=fsvolumens_346654a1-8043-457a-94b6-3b076c21a1d5", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:10:16 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "format": "json"}]: dispatch Nov 23 05:10:16 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:16 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:16 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:16.878+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57912552-9fe8-4f36-8630-fac59a7e7ddf' of type subvolume Nov 23 05:10:16 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57912552-9fe8-4f36-8630-fac59a7e7ddf' of type subvolume Nov 23 05:10:16 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57912552-9fe8-4f36-8630-fac59a7e7ddf", "force": true, "format": "json"}]: dispatch Nov 23 05:10:16 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:16 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/57912552-9fe8-4f36-8630-fac59a7e7ddf'' moved to trashcan Nov 23 05:10:16 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:10:16 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57912552-9fe8-4f36-8630-fac59a7e7ddf, vol_name:cephfs) < "" Nov 23 05:10:17 localhost podman[239764]: time="2025-11-23T10:10:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:10:17 localhost podman[239764]: @ - - [23/Nov/2025:10:10:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:10:17 localhost nova_compute[280939]: 2025-11-23 10:10:17.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:10:17 localhost podman[239764]: @ - - [23/Nov/2025:10:10:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18765 "" "Go-http-client/1.1" Nov 23 
05:10:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 662 B/s rd, 130 KiB/s wr, 8 op/s Nov 23 05:10:18 localhost nova_compute[280939]: 2025-11-23 10:10:18.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:10:18 localhost nova_compute[280939]: 2025-11-23 10:10:18.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:10:18 localhost nova_compute[280939]: 2025-11-23 10:10:18.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:10:18 localhost nova_compute[280939]: 2025-11-23 10:10:18.147 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:18 localhost nova_compute[280939]: 2025-11-23 10:10:18.153 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:10:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e265 do_prune osdmap full prune enabled Nov 23 05:10:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b9ef5055-ce0d-4f29-b449-58f39c1f00af", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:10:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e266 e266: 6 total, 6 up, 6 in Nov 23 05:10:18 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e266: 6 total, 6 up, 6 in Nov 23 05:10:18 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/.meta.tmp' Nov 23 05:10:18 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/.meta.tmp' to config b'/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/.meta' Nov 23 05:10:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b9ef5055-ce0d-4f29-b449-58f39c1f00af", 
"format": "json"}]: dispatch Nov 23 05:10:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:10:18 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:10:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "97e78d12-95be-4f55-b739-9cd5b5c20fd4", "snap_name": "4827a0db-6fdf-413e-96d9-4e5308156408", "target_sub_name": "7d4a8f6e-3803-4134-a981-5fbbe44d23bd", "format": "json"}]: dispatch Nov 23 05:10:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:4827a0db-6fdf-413e-96d9-4e5308156408, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, target_sub_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, vol_name:cephfs) < "" Nov 23 05:10:19 localhost nova_compute[280939]: 2025-11-23 10:10:19.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta.tmp' Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta.tmp' to config b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta' Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 8b8768df-60bf-4dac-a2fd-1035521020d9 for path b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd' Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta.tmp' Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta.tmp' to config b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta' Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:4827a0db-6fdf-413e-96d9-4e5308156408, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, target_sub_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, vol_name:cephfs) < "" Nov 23 05:10:19 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' 
entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7d4a8f6e-3803-4134-a981-5fbbe44d23bd", "format": "json"}]: dispatch Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:19.263+0000 7f9ccdb14640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:19.263+0000 7f9ccdb14640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:19.263+0000 7f9ccdb14640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:19.263+0000 7f9ccdb14640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:19.263+0000 7f9ccdb14640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 7d4a8f6e-3803-4134-a981-5fbbe44d23bd) Nov 23 05:10:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:19.297+0000 7f9ccc311640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:19.297+0000 7f9ccc311640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:19.297+0000 7f9ccc311640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost 
ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:19.297+0000 7f9ccc311640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:19.297+0000 7f9ccc311640 -1 client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: client.0 error registering admin socket command: (17) File exists Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 7d4a8f6e-3803-4134-a981-5fbbe44d23bd) -- by 0 seconds Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta.tmp' Nov 23 05:10:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta.tmp' to config b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta' Nov 23 05:10:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 195 KiB/s wr, 13 op/s Nov 23 05:10:19 localhost nova_compute[280939]: 2025-11-23 10:10:19.798 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:20 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e51: np0005532584.naxwxy(active, since 14m), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 05:10:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:10:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
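The ceph-mgr entries above trace the start of a CephFS snapshot clone: a snapshot of subvolume 97e78d12-95be-4f55-b739-9cd5b5c20fd4 is cloned into 7d4a8f6e-3803-4134-a981-5fbbe44d23bd, the async cloner picks it up ("starting clone"; the data copy and "finished clone" messages appear a few entries below), and the client keeps polling fs clone status until it completes. The repeated client.0 "error registering admin socket command: (17) File exists" lines appear to come from the mgr opening additional cephfs client instances in the same process for the clone job. The earlier "Operation not supported ... of type subvolume" replies are what the same status poll returns for a volume that is a plain subvolume rather than an in-progress clone. A minimal polling sketch using the ceph CLI follows; the volume and clone names come from the log, and the JSON layout assumed in the comment matches recent Ceph releases.

import json
import subprocess
import time

def clone_state(volume: str, clone: str) -> str:
    """Return the state reported by `ceph fs clone status`."""
    out = subprocess.run(
        ["ceph", "fs", "clone", "status", volume, clone, "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Assumed JSON shape: {"status": {"state": "..."}} as in recent Ceph.
    return json.loads(out)["status"]["state"]

def wait_for_clone(volume: str, clone: str, timeout: float = 300.0) -> None:
    """Poll until the async cloner reports the clone as complete."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = clone_state(volume, clone)
        if state == "complete":
            return
        if state in ("failed", "canceled"):
            raise RuntimeError(f"clone {clone} ended in state {state}")
        time.sleep(2)
    raise TimeoutError(f"clone {clone} not complete after {timeout}s")

# Names taken from the log entries above:
# wait_for_clone("cephfs", "7d4a8f6e-3803-4134-a981-5fbbe44d23bd")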
Nov 23 05:10:20 localhost podman[327084]: 2025-11-23 10:10:20.90195327 +0000 UTC m=+0.088255173 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute) Nov 23 05:10:20 localhost podman[327085]: 2025-11-23 10:10:20.872942092 +0000 UTC m=+0.059169752 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:10:20 localhost podman[327084]: 2025-11-23 10:10:20.941413117 +0000 UTC m=+0.127714980 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Nov 23 05:10:20 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:10:20 localhost podman[327085]: 2025-11-23 10:10:20.955315657 +0000 UTC m=+0.141543287 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:10:20 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
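A few entries below, a second fs subvolume authorize asks for auth_id "Joe" on subvolume b9ef5055-ce0d-4f29-b449-58f39c1f00af under tenant 7380a21b16c54c28b8907c4cc3ce9099 and is rejected with "auth ID: Joe is already in use": the earlier authorize (subvolume 346654a1..., tenant 00b5d8dd...) already created client.Joe, and the volumes module will not reuse an auth_id registered for a different tenant. On the successful path the mgr ends up dispatching the auth get-or-create shown earlier, with caps scoped to the subvolume path and its RADOS namespace; the sketch below reproduces that grant with the plain ceph CLI, using the caps strings copied from the log.

import subprocess

# Caps copied verbatim from the successful `auth get-or-create` above;
# the subvolume path and pool namespace are specific to this deployment.
SUBVOL_PATH = (
    "/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/"
    "cb129bf7-168e-4c36-9f32-82b35d885736"
)
NAMESPACE = "fsvolumens_346654a1-8043-457a-94b6-3b076c21a1d5"

subprocess.run(
    [
        "ceph", "auth", "get-or-create", "client.Joe",
        "mds", f"allow rw path={SUBVOL_PATH}",
        "osd", f"allow rw pool=manila_data namespace={NAMESPACE}",
        "mon", "allow r",
    ],
    check=True,
)

Scoping the osd cap to a per-subvolume namespace (the namespace_isolated:True flag on the creates above) is what lets many shares coexist in the single manila_data pool while each client only reaches its own subvolume's objects.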
Nov 23 05:10:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 195 KiB/s wr, 13 op/s Nov 23 05:10:22 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b9ef5055-ce0d-4f29-b449-58f39c1f00af", "auth_id": "Joe", "tenant_id": "7380a21b16c54c28b8907c4cc3ce9099", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:10:22 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, tenant_id:7380a21b16c54c28b8907c4cc3ce9099, vol_name:cephfs) < "" Nov 23 05:10:23 localhost nova_compute[280939]: 2025-11-23 10:10:23.177 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:10:23 Nov 23 05:10:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:10:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:10:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['images', 'manila_data', 'backups', 'manila_metadata', 'volumes', '.mgr', 'vms'] Nov 23 05:10:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:10:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Nov 23 05:10:23 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, tenant_id:7380a21b16c54c28b8907c4cc3ce9099, vol_name:cephfs) < "" Nov 23 05:10:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:23.342+0000 7f9cc7b08640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Nov 23 05:10:23 localhost ceph-mgr[286671]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.snap/4827a0db-6fdf-413e-96d9-4e5308156408/4623893d-2a42-40bb-b5dd-a55a0d05098f' to b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/7dc40fc1-dc61-4052-83da-bc085b46fe41' Nov 23 05:10:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:10:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e266 do_prune osdmap full prune enabled Nov 23 05:10:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 e267: 6 total, 6 up, 6 in Nov 23 05:10:23 localhost 
ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e267: 6 total, 6 up, 6 in Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta.tmp' Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta.tmp' to config b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta' Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.clone_index] untracking 8b8768df-60bf-4dac-a2fd-1035521020d9 Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta.tmp' Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta.tmp' to config b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta' Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta.tmp' Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta.tmp' to config b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd/.meta' Nov 23 05:10:23 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 7d4a8f6e-3803-4134-a981-5fbbe44d23bd) Nov 23 05:10:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 114 KiB/s wr, 8 op/s Nov 23 05:10:23 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 
45071990784 Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.453674623115578e-06 of space, bias 1.0, pg target 0.00048828125 quantized to 32 (current 32) Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:10:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002041184655918481 of space, bias 4.0, pg target 1.624782986111111 quantized to 16 (current 16) Nov 23 05:10:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:10:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:10:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:10:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:10:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:10:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:10:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:10:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:10:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:10:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:10:24 localhost nova_compute[280939]: 2025-11-23 10:10:24.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:10:24 localhost nova_compute[280939]: 2025-11-23 10:10:24.799 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:25 localhost nova_compute[280939]: 2025-11-23 10:10:25.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:10:25 localhost nova_compute[280939]: 2025-11-23 10:10:25.132 280943 DEBUG nova.compute.manager [None 
req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:10:25 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b9ef5055-ce0d-4f29-b449-58f39c1f00af", "auth_id": "tempest-cephx-id-272049507", "tenant_id": "7380a21b16c54c28b8907c4cc3ce9099", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:10:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-272049507, format:json, prefix:fs subvolume authorize, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, tenant_id:7380a21b16c54c28b8907c4cc3ce9099, vol_name:cephfs) < "" Nov 23 05:10:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-272049507", "format": "json"} v 0) Nov 23 05:10:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-272049507", "format": "json"} : dispatch Nov 23 05:10:25 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID tempest-cephx-id-272049507 with tenant 7380a21b16c54c28b8907c4cc3ce9099 Nov 23 05:10:25 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-272049507", "format": "json"} : dispatch Nov 23 05:10:25 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-272049507", "caps": ["mds", "allow rw path=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb", "osd", "allow rw pool=manila_data namespace=fsvolumens_b9ef5055-ce0d-4f29-b449-58f39c1f00af", "mon", "allow r"], "format": "json"} v 0) Nov 23 05:10:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-272049507", "caps": ["mds", "allow rw path=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb", "osd", "allow rw pool=manila_data namespace=fsvolumens_b9ef5055-ce0d-4f29-b449-58f39c1f00af", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:10:25 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-272049507", "caps": ["mds", "allow rw path=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb", "osd", "allow rw pool=manila_data namespace=fsvolumens_b9ef5055-ce0d-4f29-b449-58f39c1f00af", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:10:25 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-272049507, format:json, prefix:fs subvolume authorize, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, tenant_id:7380a21b16c54c28b8907c4cc3ce9099, vol_name:cephfs) < "" Nov 23 05:10:25 
localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 165 KiB/s wr, 11 op/s Nov 23 05:10:26 localhost nova_compute[280939]: 2025-11-23 10:10:26.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:10:26 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-272049507", "caps": ["mds", "allow rw path=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb", "osd", "allow rw pool=manila_data namespace=fsvolumens_b9ef5055-ce0d-4f29-b449-58f39c1f00af", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:10:26 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-272049507", "caps": ["mds", "allow rw path=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb", "osd", "allow rw pool=manila_data namespace=fsvolumens_b9ef5055-ce0d-4f29-b449-58f39c1f00af", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.149 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.150 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.150 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.150 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.151 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:10:27 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:10:27 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/466529091' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.601 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:10:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 145 KiB/s wr, 10 op/s Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.779 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.781 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11402MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": 
"1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.782 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.783 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.854 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.855 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:10:27 localhost nova_compute[280939]: 2025-11-23 10:10:27.873 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:10:28 localhost nova_compute[280939]: 2025-11-23 10:10:28.214 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:10:28 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3979830438' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:10:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:10:28 localhost nova_compute[280939]: 2025-11-23 10:10:28.355 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:10:28 localhost nova_compute[280939]: 2025-11-23 10:10:28.361 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:10:28 localhost nova_compute[280939]: 2025-11-23 10:10:28.378 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:10:28 localhost nova_compute[280939]: 2025-11-23 10:10:28.381 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:10:28 localhost nova_compute[280939]: 2025-11-23 10:10:28.382 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:10:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b9ef5055-ce0d-4f29-b449-58f39c1f00af", "auth_id": "Joe", "format": "json"}]: dispatch Nov 23 05:10:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:28 localhost ceph-mgr[286671]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'b9ef5055-ce0d-4f29-b449-58f39c1f00af' Nov 23 05:10:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:28 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b9ef5055-ce0d-4f29-b449-58f39c1f00af", "auth_id": "Joe", "format": "json"}]: dispatch Nov 23 05:10:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb Nov 23 05:10:28 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:10:28 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 72 KiB/s wr, 5 op/s Nov 23 05:10:29 localhost nova_compute[280939]: 2025-11-23 10:10:29.801 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 72 KiB/s wr, 5 op/s Nov 23 05:10:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:10:31 localhost podman[327171]: 2025-11-23 10:10:31.897316518 +0000 UTC m=+0.079627049 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:10:31 localhost podman[327171]: 2025-11-23 10:10:31.907482269 +0000 UTC m=+0.089792810 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:10:31 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. 
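The "fs subvolume deauthorize" / "fs subvolume evict" dispatches for auth ID Joe a few entries above are the revoke half of the CephFS share-access flow: deauthorize drops the ID's caps (and triggers the mon-side "auth rm" once nothing references the ID any more), evict disconnects any client sessions still mounted under the subvolume path. A sketch of driving the same two mgr commands from the ceph CLI, with the subvolume and auth ID copied from the log and the client.openstack identity/conf borrowed from the ceph invocations seen elsewhere in this log:

    #!/usr/bin/env python3
    """Revoke CephFS subvolume access: deauthorize, then evict (sketch)."""
    import subprocess

    BASE = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    VOL = "cephfs"
    SUBVOL = "b9ef5055-ce0d-4f29-b449-58f39c1f00af"   # subvolume from the log
    AUTH_ID = "Joe"                                    # cephx ID being revoked

    def ceph(*args: str) -> str:
        res = subprocess.run(BASE + list(args), check=True,
                             capture_output=True, text=True)
        return res.stdout

    # Drop the ID's caps on the subvolume; once no subvolume references the
    # ID any more, the mgr issues the 'auth rm' seen in the mon log above.
    ceph("fs", "subvolume", "deauthorize", VOL, SUBVOL, AUTH_ID)

    # Evict any sessions still mounted with that ID under the subvolume path,
    # matching the 'fs subvolume evict' dispatch above.
    ceph("fs", "subvolume", "evict", VOL, SUBVOL, AUTH_ID)

The earlier WARNING "deauthorized called for already-removed authID 'Joe'" indicates that repeating the deauthorize for an ID that is already gone is treated as a no-op rather than an error.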
Nov 23 05:10:32 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b9ef5055-ce0d-4f29-b449-58f39c1f00af", "auth_id": "tempest-cephx-id-272049507", "format": "json"}]: dispatch Nov 23 05:10:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-272049507, format:json, prefix:fs subvolume deauthorize, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-272049507", "format": "json"} v 0) Nov 23 05:10:32 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-272049507", "format": "json"} : dispatch Nov 23 05:10:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-272049507"} v 0) Nov 23 05:10:32 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-272049507"} : dispatch Nov 23 05:10:32 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-272049507"}]': finished Nov 23 05:10:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-272049507, format:json, prefix:fs subvolume deauthorize, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:32 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b9ef5055-ce0d-4f29-b449-58f39c1f00af", "auth_id": "tempest-cephx-id-272049507", "format": "json"}]: dispatch Nov 23 05:10:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-272049507, format:json, prefix:fs subvolume evict, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:32 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-272049507, client_metadata.root=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb Nov 23 05:10:32 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:10:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-272049507, format:json, prefix:fs subvolume evict, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:10:32 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-272049507", "format": "json"} : dispatch Nov 23 05:10:32 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": 
"client.tempest-cephx-id-272049507"} : dispatch Nov 23 05:10:32 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-272049507"}]': finished Nov 23 05:10:33 localhost nova_compute[280939]: 2025-11-23 10:10:33.250 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:10:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 598 B/s rd, 70 KiB/s wr, 5 op/s Nov 23 05:10:34 localhost nova_compute[280939]: 2025-11-23 10:10:34.804 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:35 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "346654a1-8043-457a-94b6-3b076c21a1d5", "auth_id": "Joe", "format": "json"}]: dispatch Nov 23 05:10:35 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, vol_name:cephfs) < "" Nov 23 05:10:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Nov 23 05:10:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 23 05:10:35 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) Nov 23 05:10:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Nov 23 05:10:35 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Nov 23 05:10:35 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, vol_name:cephfs) < "" Nov 23 05:10:35 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "346654a1-8043-457a-94b6-3b076c21a1d5", "auth_id": "Joe", "format": "json"}]: dispatch Nov 23 05:10:35 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, vol_name:cephfs) < "" Nov 23 05:10:35 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, 
client_metadata.root=/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/cb129bf7-168e-4c36-9f32-82b35d885736 Nov 23 05:10:35 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:10:35 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, vol_name:cephfs) < "" Nov 23 05:10:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 98 KiB/s wr, 6 op/s Nov 23 05:10:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e5534047-ca04-4edc-8696-f70777010bec", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:10:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:10:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e5534047-ca04-4edc-8696-f70777010bec/.meta.tmp' Nov 23 05:10:36 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e5534047-ca04-4edc-8696-f70777010bec/.meta.tmp' to config b'/volumes/_nogroup/e5534047-ca04-4edc-8696-f70777010bec/.meta' Nov 23 05:10:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:10:36 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e5534047-ca04-4edc-8696-f70777010bec", "format": "json"}]: dispatch Nov 23 05:10:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:10:36 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:10:36 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:10:36 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:10:36 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 23 05:10:36 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Nov 23 05:10:36 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Nov 23 05:10:36 localhost openstack_network_exporter[241732]: ERROR 10:10:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:10:36 localhost openstack_network_exporter[241732]: ERROR 10:10:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:10:36 localhost openstack_network_exporter[241732]: ERROR 10:10:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:10:36 localhost openstack_network_exporter[241732]: ERROR 10:10:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:10:36 localhost openstack_network_exporter[241732]: Nov 23 05:10:36 localhost openstack_network_exporter[241732]: ERROR 10:10:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:10:36 localhost openstack_network_exporter[241732]: Nov 23 05:10:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 64 KiB/s wr, 4 op/s Nov 23 05:10:38 localhost nova_compute[280939]: 2025-11-23 10:10:38.299 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:10:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:10:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:10:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 05:10:38 localhost podman[327191]: 2025-11-23 10:10:38.888049997 +0000 UTC m=+0.075168228 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118) Nov 23 05:10:38 localhost podman[327191]: 2025-11-23 10:10:38.902361959 +0000 UTC m=+0.089480180 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, 
container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:10:38 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:10:38 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "73463559-de38-4e65-91fe-e256e1993ef1", "auth_id": "admin", "tenant_id": "00b5d8dd3ef84698af3b5d7d8243bc4a", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:10:38 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, tenant_id:00b5d8dd3ef84698af3b5d7d8243bc4a, vol_name:cephfs) < "" Nov 23 05:10:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) Nov 23 05:10:38 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Nov 23 05:10:38 localhost ceph-mgr[286671]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify Nov 23 05:10:38 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, tenant_id:00b5d8dd3ef84698af3b5d7d8243bc4a, vol_name:cephfs) < "" Nov 23 05:10:38 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:38.929+0000 7f9cc7b08640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Nov 23 05:10:38 localhost ceph-mgr[286671]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Nov 23 05:10:38 localhost systemd[1]: tmp-crun.Y95ATM.mount: Deactivated successfully. 
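The "auth ID: admin exists and not created by mgr plugin. Not allowed to modify" error above shows the volumes module refusing to grant subvolume caps on a cephx ID it does not own; the caller gets an EPERM-style failure instead of a keyring. A sketch of how that refusal surfaces to a script, with the subvolume and auth ID copied from the log:

    #!/usr/bin/env python3
    """Show the EPERM refusal for a non-plugin-owned cephx ID (sketch)."""
    import subprocess

    cmd = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
           "fs", "subvolume", "authorize", "cephfs",
           "73463559-de38-4e65-91fe-e256e1993ef1",   # subvolume from the log
           "admin"]                                   # pre-existing cephx ID

    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        # Expected outcome here, roughly: "Error EPERM: auth ID: admin exists
        # and not created by mgr plugin. Not allowed to modify"
        print("authorize refused:", proc.stderr.strip())
    else:
        print("keyring granted:", proc.stdout.strip())

IDs the plugin created itself (Joe, david, tempest-cephx-id-272049507, each preceded by a "Creating meta for ID ... with tenant ..." line) are accepted and get the mds/osd/mon caps shown in the auth get-or-create dispatches.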
Nov 23 05:10:38 localhost podman[327193]: 2025-11-23 10:10:38.959429974 +0000 UTC m=+0.139164752 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:10:38 localhost podman[327193]: 2025-11-23 10:10:38.995464494 +0000 UTC m=+0.175199272 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 23 05:10:39 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
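Each health_status entry above is also emitted as a podman event, which can be easier to follow than the journal. A sketch of tailing those events, assuming `podman events` accepts an event=health_status filter and a "{{json .}}" Go-template format (the field names read from the output are likewise assumptions):

    #!/usr/bin/env python3
    """Tail podman container health events as JSON (sketch)."""
    import json
    import subprocess

    # Field names below ('Name', 'Status', 'HealthStatus') are assumptions
    # about podman's JSON event layout, hence the defensive .get() calls.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status",
         "--format", "{{json .}}"],
        stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Name"), ev.get("Status"), ev.get("HealthStatus"))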
Nov 23 05:10:39 localhost podman[327192]: 2025-11-23 10:10:39.047135447 +0000 UTC m=+0.231387288 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:10:39 localhost podman[327192]: 2025-11-23 10:10:39.060382107 +0000 UTC m=+0.244633948 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:10:39 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
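The node_exporter container above listens on host port 9100 and runs the systemd collector restricted to edpm_*/ovs*/openvswitch/virt*/rsyslog units, with most default collectors disabled. A minimal scrape of that endpoint, assuming it is reachable as localhost:9100 from this host:

    #!/usr/bin/env python3
    """Scrape node_exporter and print current systemd unit states (sketch)."""
    from urllib.request import urlopen

    URL = "http://localhost:9100/metrics"   # host port from 'ports': ['9100:9100']

    with urlopen(URL, timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    # The systemd collector exposes one node_systemd_unit_state sample per
    # unit/state pair; keep the lines whose sample value is 1 (current state).
    for line in body.splitlines():
        if line.startswith("node_systemd_unit_state") and line.endswith(" 1"):
            print(line)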
Nov 23 05:10:39 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e5534047-ca04-4edc-8696-f70777010bec", "snap_name": "cde814c2-996b-4a4f-8643-46a807416ffa", "format": "json"}]: dispatch Nov 23 05:10:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cde814c2-996b-4a4f-8643-46a807416ffa, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:10:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cde814c2-996b-4a4f-8643-46a807416ffa, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:10:39 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Nov 23 05:10:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 113 KiB/s wr, 7 op/s Nov 23 05:10:39 localhost nova_compute[280939]: 2025-11-23 10:10:39.807 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 87 KiB/s wr, 5 op/s Nov 23 05:10:42 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "73463559-de38-4e65-91fe-e256e1993ef1", "auth_id": "david", "tenant_id": "00b5d8dd3ef84698af3b5d7d8243bc4a", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:10:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, tenant_id:00b5d8dd3ef84698af3b5d7d8243bc4a, vol_name:cephfs) < "" Nov 23 05:10:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Nov 23 05:10:42 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 23 05:10:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: Creating meta for ID david with tenant 00b5d8dd3ef84698af3b5d7d8243bc4a Nov 23 05:10:42 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 23 05:10:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/5697482a-5566-4614-b3e6-cbfdb66450d2", "osd", "allow rw pool=manila_data namespace=fsvolumens_73463559-de38-4e65-91fe-e256e1993ef1", "mon", "allow r"], 
"format": "json"} v 0) Nov 23 05:10:42 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/5697482a-5566-4614-b3e6-cbfdb66450d2", "osd", "allow rw pool=manila_data namespace=fsvolumens_73463559-de38-4e65-91fe-e256e1993ef1", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:10:42 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/5697482a-5566-4614-b3e6-cbfdb66450d2", "osd", "allow rw pool=manila_data namespace=fsvolumens_73463559-de38-4e65-91fe-e256e1993ef1", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:10:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, tenant_id:00b5d8dd3ef84698af3b5d7d8243bc4a, vol_name:cephfs) < "" Nov 23 05:10:43 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f868dbf5-74f9-4f9b-8554-4f9dd9c9915e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:10:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f868dbf5-74f9-4f9b-8554-4f9dd9c9915e, vol_name:cephfs) < "" Nov 23 05:10:43 localhost nova_compute[280939]: 2025-11-23 10:10:43.327 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:10:43 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f868dbf5-74f9-4f9b-8554-4f9dd9c9915e/.meta.tmp' Nov 23 05:10:43 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f868dbf5-74f9-4f9b-8554-4f9dd9c9915e/.meta.tmp' to config b'/volumes/_nogroup/f868dbf5-74f9-4f9b-8554-4f9dd9c9915e/.meta' Nov 23 05:10:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f868dbf5-74f9-4f9b-8554-4f9dd9c9915e, vol_name:cephfs) < "" Nov 23 05:10:43 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f868dbf5-74f9-4f9b-8554-4f9dd9c9915e", "format": "json"}]: dispatch Nov 23 05:10:43 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f868dbf5-74f9-4f9b-8554-4f9dd9c9915e, vol_name:cephfs) < "" Nov 23 05:10:43 localhost ceph-mgr[286671]: [volumes 
INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f868dbf5-74f9-4f9b-8554-4f9dd9c9915e, vol_name:cephfs) < "" Nov 23 05:10:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:10:43 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:10:43 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/5697482a-5566-4614-b3e6-cbfdb66450d2", "osd", "allow rw pool=manila_data namespace=fsvolumens_73463559-de38-4e65-91fe-e256e1993ef1", "mon", "allow r"], "format": "json"} : dispatch Nov 23 05:10:43 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/5697482a-5566-4614-b3e6-cbfdb66450d2", "osd", "allow rw pool=manila_data namespace=fsvolumens_73463559-de38-4e65-91fe-e256e1993ef1", "mon", "allow r"], "format": "json"}]': finished Nov 23 05:10:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 87 KiB/s wr, 5 op/s Nov 23 05:10:44 localhost nova_compute[280939]: 2025-11-23 10:10:44.809 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 126 KiB/s wr, 7 op/s Nov 23 05:10:45 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d73f5489-d8e5-493d-9400-04efa839bc7c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:10:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, vol_name:cephfs) < "" Nov 23 05:10:45 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d73f5489-d8e5-493d-9400-04efa839bc7c/.meta.tmp' Nov 23 05:10:45 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d73f5489-d8e5-493d-9400-04efa839bc7c/.meta.tmp' to config b'/volumes/_nogroup/d73f5489-d8e5-493d-9400-04efa839bc7c/.meta' Nov 23 05:10:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, vol_name:cephfs) < "" Nov 23 05:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. 
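The audit and volumes-module entries above show client.openstack (the Manila CephFS back end) driving the mgr: 'fs subvolume snapshot create', then 'fs subvolume create' with a 1 GiB size, an isolated RADOS namespace and mode 0755, then 'fs subvolume getpath'. A hedged sketch of the equivalent ceph CLI calls from Python; it assumes the ceph CLI plus a sufficiently privileged keyring on the host, the subvolume name is a placeholder, and flag spellings should be checked against the installed release with 'ceph fs subvolume create -h'.

import subprocess

def ceph(*args):
    """Run one ceph CLI command and return its stdout (assumes an admin keyring on this host)."""
    return subprocess.run(["ceph", *args], check=True, capture_output=True, text=True).stdout

sub = "example-share-id"   # placeholder; the real names above are Manila share UUIDs

# Same parameters as the dispatched _cmd_fs_subvolume_create above:
# 1 GiB quota, namespace-isolated, mode 0755.
ceph("fs", "subvolume", "create", "cephfs", sub,
     "--size", str(1024 ** 3), "--namespace-isolated", "--mode", "0755")

# getpath returns the backing directory, e.g. /volumes/_nogroup/<share>/<uuid>,
# the same path shape that appears in the mds caps granted above.
print(ceph("fs", "subvolume", "getpath", "cephfs", sub).strip())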
Nov 23 05:10:45 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d73f5489-d8e5-493d-9400-04efa839bc7c", "format": "json"}]: dispatch Nov 23 05:10:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, vol_name:cephfs) < "" Nov 23 05:10:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, vol_name:cephfs) < "" Nov 23 05:10:45 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:10:45 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:10:45 localhost podman[327260]: 2025-11-23 10:10:45.892444237 +0000 UTC m=+0.076143080 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container) Nov 23 05:10:45 localhost podman[327260]: 2025-11-23 10:10:45.933433762 +0000 UTC m=+0.117132565 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.) Nov 23 05:10:45 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 05:10:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f868dbf5-74f9-4f9b-8554-4f9dd9c9915e", "format": "json"}]: dispatch Nov 23 05:10:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f868dbf5-74f9-4f9b-8554-4f9dd9c9915e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f868dbf5-74f9-4f9b-8554-4f9dd9c9915e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:46 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:46.698+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f868dbf5-74f9-4f9b-8554-4f9dd9c9915e' of type subvolume Nov 23 05:10:46 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f868dbf5-74f9-4f9b-8554-4f9dd9c9915e' of type subvolume Nov 23 05:10:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f868dbf5-74f9-4f9b-8554-4f9dd9c9915e", "force": true, "format": "json"}]: dispatch Nov 23 05:10:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f868dbf5-74f9-4f9b-8554-4f9dd9c9915e, vol_name:cephfs) < "" Nov 23 05:10:46 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f868dbf5-74f9-4f9b-8554-4f9dd9c9915e'' moved to trashcan Nov 23 05:10:46 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:10:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f868dbf5-74f9-4f9b-8554-4f9dd9c9915e, vol_name:cephfs) < "" Nov 23 05:10:47 localhost podman[239764]: time="2025-11-23T10:10:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:10:47 localhost podman[239764]: @ - - [23/Nov/2025:10:10:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:10:47 localhost podman[239764]: @ - - [23/Nov/2025:10:10:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18770 "" "Go-http-client/1.1" Nov 23 05:10:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 88 KiB/s wr, 5 op/s Nov 23 05:10:48 localhost nova_compute[280939]: 2025-11-23 10:10:48.354 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:10:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d73f5489-d8e5-493d-9400-04efa839bc7c", "auth_id": "david", "tenant_id": "7380a21b16c54c28b8907c4cc3ce9099", "access_level": "rw", "format": "json"}]: dispatch Nov 23 05:10:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, tenant_id:7380a21b16c54c28b8907c4cc3ce9099, vol_name:cephfs) < "" Nov 23 05:10:49 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Nov 23 05:10:49 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 23 05:10:49 localhost ceph-mgr[286671]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use Nov 23 05:10:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, tenant_id:7380a21b16c54c28b8907c4cc3ce9099, vol_name:cephfs) < "" Nov 23 05:10:49 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:49.166+0000 7f9cc7b08640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Nov 23 05:10:49 localhost ceph-mgr[286671]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Nov 23 05:10:49 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 23 05:10:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 123 
KiB/s wr, 7 op/s Nov 23 05:10:49 localhost nova_compute[280939]: 2025-11-23 10:10:49.811 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "026d03c4-9bd7-4b23-aab1-5ab6c0e12032", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:10:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:026d03c4-9bd7-4b23-aab1-5ab6c0e12032, vol_name:cephfs) < "" Nov 23 05:10:50 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/026d03c4-9bd7-4b23-aab1-5ab6c0e12032/.meta.tmp' Nov 23 05:10:50 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/026d03c4-9bd7-4b23-aab1-5ab6c0e12032/.meta.tmp' to config b'/volumes/_nogroup/026d03c4-9bd7-4b23-aab1-5ab6c0e12032/.meta' Nov 23 05:10:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:026d03c4-9bd7-4b23-aab1-5ab6c0e12032, vol_name:cephfs) < "" Nov 23 05:10:50 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "026d03c4-9bd7-4b23-aab1-5ab6c0e12032", "format": "json"}]: dispatch Nov 23 05:10:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:026d03c4-9bd7-4b23-aab1-5ab6c0e12032, vol_name:cephfs) < "" Nov 23 05:10:50 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:026d03c4-9bd7-4b23-aab1-5ab6c0e12032, vol_name:cephfs) < "" Nov 23 05:10:50 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:10:50 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:10:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:10:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
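The recurring cluster-log lines of the form "pgmap vNNN: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; ..." carry the capacity and throughput trend for the whole cluster. An illustrative parser for exactly that line shape, using one of the lines above as the sample; the field names are chosen here for convenience and are not a Ceph API.

import re

PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
    r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
)

line = ("log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; "
        "223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 113 KiB/s wr, 7 op/s")
print(PGMAP_RE.search(line).groupdict())
# {'version': '641', 'pgs': '177', 'data': '223 MiB', 'used': '1.3 GiB',
#  'avail': '41 GiB', 'total': '42 GiB'}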
Nov 23 05:10:51 localhost podman[327316]: 2025-11-23 10:10:51.118550652 +0000 UTC m=+0.078531375 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:10:51 localhost podman[327316]: 2025-11-23 10:10:51.130739127 +0000 UTC m=+0.090719810 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Nov 23 05:10:51 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:10:51 localhost podman[327317]: 2025-11-23 10:10:51.178113965 +0000 UTC m=+0.137704465 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 05:10:51 localhost podman[327317]: 2025-11-23 10:10:51.188315418 +0000 UTC m=+0.147905988 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 05:10:51 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
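Every health check above shows up as a pair of systemd messages: "Started /usr/bin/podman healthcheck run <container-id>" and, shortly afterwards, "<container-id>.service: Deactivated successfully". A small sketch that pairs the two to measure how long each check ran. It assumes one journal entry per line (as journalctl normally prints) and that the year is known, since these timestamps omit it; both are assumptions, not facts from this dump.

import re
from datetime import datetime

STARTED = re.compile(r"(?P<ts>\w+ +\d+ [\d:]+) \S+ systemd\[1\]: "
                     r"Started /usr/bin/podman healthcheck run (?P<cid>[0-9a-f]{64})")
STOPPED = re.compile(r"(?P<ts>\w+ +\d+ [\d:]+) \S+ systemd\[1\]: "
                     r"(?P<cid>[0-9a-f]{64})\.service: Deactivated successfully")

def parse_ts(ts, year=2025):
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S")

def healthcheck_durations(lines):
    """Yield (short-id, seconds) pairs for completed podman healthcheck units."""
    started = {}
    for line in lines:
        if (m := STARTED.search(line)):
            started[m["cid"]] = parse_ts(m["ts"])
        elif (m := STOPPED.search(line)) and m["cid"] in started:
            yield m["cid"][:12], (parse_ts(m["ts"]) - started.pop(m["cid"])).total_seconds()

For the checks visible above, the start and the deactivation land within the same second, so at this one-second timestamp resolution the reported durations come out as zero.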
Nov 23 05:10:51 localhost sshd[327359]: main: sshd: ssh-rsa algorithm is disabled Nov 23 05:10:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 75 KiB/s wr, 3 op/s Nov 23 05:10:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:10:51 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:10:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:10:51 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:10:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:10:51 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:10:51 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev fed5b7b0-2003-4edd-86cc-cd3ab894af54 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:10:51 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev fed5b7b0-2003-4edd-86cc-cd3ab894af54 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:10:51 localhost ceph-mgr[286671]: [progress INFO root] Completed event fed5b7b0-2003-4edd-86cc-cd3ab894af54 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:10:51 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:10:51 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:10:52 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:10:52 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:10:52 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d73f5489-d8e5-493d-9400-04efa839bc7c", "auth_id": "david", "format": "json"}]: dispatch Nov 23 05:10:52 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, vol_name:cephfs) < "" Nov 23 05:10:52 localhost ceph-mgr[286671]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume 'd73f5489-d8e5-493d-9400-04efa839bc7c' Nov 23 05:10:52 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, vol_name:cephfs) < "" Nov 23 05:10:52 
localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d73f5489-d8e5-493d-9400-04efa839bc7c", "auth_id": "david", "format": "json"}]: dispatch Nov 23 05:10:52 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, vol_name:cephfs) < "" Nov 23 05:10:52 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/d73f5489-d8e5-493d-9400-04efa839bc7c/f4b8d58d-5cc5-4368-a26c-234a4f905dc5 Nov 23 05:10:52 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:10:52 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, vol_name:cephfs) < "" Nov 23 05:10:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:10:53 localhost nova_compute[280939]: 2025-11-23 10:10:53.406 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
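Continuing the CLI sketch from the subvolume create/getpath example earlier: the deauthorize and evict commands dispatched above are the revocation half of the share lifecycle, and the audit entries a few lines below show the mgr following up with "auth rm" for client.david once its last grant is gone. Names below are placeholders; whether these subcommands are exposed by the installed ceph CLI should be confirmed with 'ceph fs subvolume -h'.

import subprocess

def revoke(volume, subvolume, auth_id):
    # Drop this client's caps on the subvolume (the _cmd_fs_subvolume_deauthorize calls above).
    subprocess.run(["ceph", "fs", "subvolume", "deauthorize", volume, subvolume, auth_id],
                   check=True)
    # Then evict any sessions still mounted with that auth_id
    # (the _cmd_fs_subvolume_evict calls above).
    subprocess.run(["ceph", "fs", "subvolume", "evict", volume, subvolume, auth_id],
                   check=True)

revoke("cephfs", "example-share-id", "david")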
Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:10:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "026d03c4-9bd7-4b23-aab1-5ab6c0e12032", "format": "json"}]: dispatch Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:026d03c4-9bd7-4b23-aab1-5ab6c0e12032, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:026d03c4-9bd7-4b23-aab1-5ab6c0e12032, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:53 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:53.599+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '026d03c4-9bd7-4b23-aab1-5ab6c0e12032' of type subvolume Nov 23 05:10:53 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '026d03c4-9bd7-4b23-aab1-5ab6c0e12032' of type subvolume Nov 23 05:10:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "026d03c4-9bd7-4b23-aab1-5ab6c0e12032", "force": true, "format": "json"}]: dispatch Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:026d03c4-9bd7-4b23-aab1-5ab6c0e12032, vol_name:cephfs) < "" Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/026d03c4-9bd7-4b23-aab1-5ab6c0e12032'' moved to trashcan Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:10:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:026d03c4-9bd7-4b23-aab1-5ab6c0e12032, vol_name:cephfs) < "" Nov 23 05:10:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 75 KiB/s wr, 3 op/s Nov 23 05:10:53 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:10:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:10:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:10:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:10:54 localhost nova_compute[280939]: 2025-11-23 10:10:54.814 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 113 KiB/s wr, 5 op/s Nov 23 05:10:55 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : 
from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "73463559-de38-4e65-91fe-e256e1993ef1", "auth_id": "david", "format": "json"}]: dispatch Nov 23 05:10:55 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:10:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Nov 23 05:10:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 23 05:10:56 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) Nov 23 05:10:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Nov 23 05:10:56 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Nov 23 05:10:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:10:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "73463559-de38-4e65-91fe-e256e1993ef1", "auth_id": "david", "format": "json"}]: dispatch Nov 23 05:10:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:10:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/5697482a-5566-4614-b3e6-cbfdb66450d2 Nov 23 05:10:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 23 05:10:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:10:56 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 23 05:10:56 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Nov 23 05:10:56 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Nov 23 05:10:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", 
"vol_name": "cephfs", "sub_name": "7d1e8c28-ca2d-4cc0-aed2-5196c93505b5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:10:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7d1e8c28-ca2d-4cc0-aed2-5196c93505b5, vol_name:cephfs) < "" Nov 23 05:10:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7d1e8c28-ca2d-4cc0-aed2-5196c93505b5/.meta.tmp' Nov 23 05:10:57 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7d1e8c28-ca2d-4cc0-aed2-5196c93505b5/.meta.tmp' to config b'/volumes/_nogroup/7d1e8c28-ca2d-4cc0-aed2-5196c93505b5/.meta' Nov 23 05:10:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7d1e8c28-ca2d-4cc0-aed2-5196c93505b5, vol_name:cephfs) < "" Nov 23 05:10:57 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7d1e8c28-ca2d-4cc0-aed2-5196c93505b5", "format": "json"}]: dispatch Nov 23 05:10:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7d1e8c28-ca2d-4cc0-aed2-5196c93505b5, vol_name:cephfs) < "" Nov 23 05:10:57 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7d1e8c28-ca2d-4cc0-aed2-5196c93505b5, vol_name:cephfs) < "" Nov 23 05:10:57 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:10:57 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:10:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 73 KiB/s wr, 3 op/s Nov 23 05:10:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:10:58 localhost nova_compute[280939]: 2025-11-23 10:10:58.445 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:10:59 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d73f5489-d8e5-493d-9400-04efa839bc7c", "format": "json"}]: dispatch Nov 23 05:10:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d73f5489-d8e5-493d-9400-04efa839bc7c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d73f5489-d8e5-493d-9400-04efa839bc7c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:10:59 localhost 
ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:10:59.632+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd73f5489-d8e5-493d-9400-04efa839bc7c' of type subvolume Nov 23 05:10:59 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd73f5489-d8e5-493d-9400-04efa839bc7c' of type subvolume Nov 23 05:10:59 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d73f5489-d8e5-493d-9400-04efa839bc7c", "force": true, "format": "json"}]: dispatch Nov 23 05:10:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, vol_name:cephfs) < "" Nov 23 05:10:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 120 KiB/s wr, 6 op/s Nov 23 05:10:59 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d73f5489-d8e5-493d-9400-04efa839bc7c'' moved to trashcan Nov 23 05:10:59 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:10:59 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d73f5489-d8e5-493d-9400-04efa839bc7c, vol_name:cephfs) < "" Nov 23 05:10:59 localhost nova_compute[280939]: 2025-11-23 10:10:59.816 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7d1e8c28-ca2d-4cc0-aed2-5196c93505b5", "format": "json"}]: dispatch Nov 23 05:11:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7d1e8c28-ca2d-4cc0-aed2-5196c93505b5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7d1e8c28-ca2d-4cc0-aed2-5196c93505b5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:00.285+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7d1e8c28-ca2d-4cc0-aed2-5196c93505b5' of type subvolume Nov 23 05:11:00 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7d1e8c28-ca2d-4cc0-aed2-5196c93505b5' of type subvolume Nov 23 05:11:00 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7d1e8c28-ca2d-4cc0-aed2-5196c93505b5", "force": true, "format": "json"}]: dispatch Nov 23 05:11:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs 
subvolume rm, sub_name:7d1e8c28-ca2d-4cc0-aed2-5196c93505b5, vol_name:cephfs) < "" Nov 23 05:11:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7d1e8c28-ca2d-4cc0-aed2-5196c93505b5'' moved to trashcan Nov 23 05:11:00 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:00 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7d1e8c28-ca2d-4cc0-aed2-5196c93505b5, vol_name:cephfs) < "" Nov 23 05:11:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 85 KiB/s wr, 4 op/s Nov 23 05:11:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:11:02 localhost podman[327412]: 2025-11-23 10:11:02.898590184 +0000 UTC m=+0.082512760 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:11:02 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b9ef5055-ce0d-4f29-b449-58f39c1f00af", "format": "json"}]: dispatch Nov 23 05:11:02 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:02 localhost podman[327412]: 2025-11-23 10:11:02.931879706 +0000 UTC m=+0.115802282 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:11:02 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:02 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:02.933+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b9ef5055-ce0d-4f29-b449-58f39c1f00af' of type subvolume Nov 23 05:11:02 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b9ef5055-ce0d-4f29-b449-58f39c1f00af' of type subvolume Nov 23 05:11:02 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b9ef5055-ce0d-4f29-b449-58f39c1f00af", "force": true, "format": "json"}]: dispatch Nov 23 05:11:02 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:11:02 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. 
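Several times above, an 'fs clone status' query against a plain subvolume is answered with "(95) Operation not supported ... of type subvolume", after which the caller simply proceeds to 'fs subvolume rm --force'. A hedged sketch of that tolerant check; treating EOPNOTSUPP as "not a clone, nothing to wait for" is inferred from the sequence of audit lines here rather than from a documented contract, and the JSON field layout is assumed from recent Ceph releases.

import json
import subprocess

def clone_state(volume, clone_name):
    """Return the clone state string, or None when the target is a plain (non-clone) subvolume."""
    proc = subprocess.run(
        ["ceph", "--format", "json", "fs", "clone", "status", volume, clone_name],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        if "Operation not supported" in proc.stderr:
            return None      # same EOPNOTSUPP reply the mgr logs above
        proc.check_returncode()
    return json.loads(proc.stdout)["status"]["state"]

name = "example-share-id"    # placeholder
if clone_state("cephfs", name) in (None, "complete"):
    subprocess.run(["ceph", "fs", "subvolume", "rm", "cephfs", name, "--force"], check=True)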
Nov 23 05:11:02 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af'' moved to trashcan Nov 23 05:11:02 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:02 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9ef5055-ce0d-4f29-b449-58f39c1f00af, vol_name:cephfs) < "" Nov 23 05:11:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:03 localhost nova_compute[280939]: 2025-11-23 10:11:03.493 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f4b0408c-423f-4414-8b75-dc6199d36194", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:11:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f4b0408c-423f-4414-8b75-dc6199d36194, vol_name:cephfs) < "" Nov 23 05:11:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f4b0408c-423f-4414-8b75-dc6199d36194/.meta.tmp' Nov 23 05:11:03 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f4b0408c-423f-4414-8b75-dc6199d36194/.meta.tmp' to config b'/volumes/_nogroup/f4b0408c-423f-4414-8b75-dc6199d36194/.meta' Nov 23 05:11:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f4b0408c-423f-4414-8b75-dc6199d36194, vol_name:cephfs) < "" Nov 23 05:11:03 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f4b0408c-423f-4414-8b75-dc6199d36194", "format": "json"}]: dispatch Nov 23 05:11:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f4b0408c-423f-4414-8b75-dc6199d36194, vol_name:cephfs) < "" Nov 23 05:11:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 85 KiB/s wr, 4 op/s Nov 23 05:11:03 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f4b0408c-423f-4414-8b75-dc6199d36194, vol_name:cephfs) < "" Nov 23 05:11:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:11:03 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:11:04 localhost 
nova_compute[280939]: 2025-11-23 10:11:04.843 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 144 KiB/s wr, 7 op/s Nov 23 05:11:06 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "346654a1-8043-457a-94b6-3b076c21a1d5", "format": "json"}]: dispatch Nov 23 05:11:06 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:346654a1-8043-457a-94b6-3b076c21a1d5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:06 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:346654a1-8043-457a-94b6-3b076c21a1d5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:06 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '346654a1-8043-457a-94b6-3b076c21a1d5' of type subvolume Nov 23 05:11:06 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:06.245+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '346654a1-8043-457a-94b6-3b076c21a1d5' of type subvolume Nov 23 05:11:06 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "346654a1-8043-457a-94b6-3b076c21a1d5", "force": true, "format": "json"}]: dispatch Nov 23 05:11:06 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, vol_name:cephfs) < "" Nov 23 05:11:06 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5'' moved to trashcan Nov 23 05:11:06 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:06 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:346654a1-8043-457a-94b6-3b076c21a1d5, vol_name:cephfs) < "" Nov 23 05:11:06 localhost openstack_network_exporter[241732]: ERROR 10:11:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:11:06 localhost openstack_network_exporter[241732]: ERROR 10:11:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:11:06 localhost openstack_network_exporter[241732]: ERROR 10:11:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:11:06 localhost openstack_network_exporter[241732]: ERROR 10:11:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:11:06 localhost openstack_network_exporter[241732]: Nov 23 05:11:06 localhost openstack_network_exporter[241732]: ERROR 10:11:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:11:06 localhost 
openstack_network_exporter[241732]: Nov 23 05:11:07 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f4b0408c-423f-4414-8b75-dc6199d36194", "format": "json"}]: dispatch Nov 23 05:11:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f4b0408c-423f-4414-8b75-dc6199d36194, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f4b0408c-423f-4414-8b75-dc6199d36194, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:07 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f4b0408c-423f-4414-8b75-dc6199d36194' of type subvolume Nov 23 05:11:07 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:07.085+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f4b0408c-423f-4414-8b75-dc6199d36194' of type subvolume Nov 23 05:11:07 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f4b0408c-423f-4414-8b75-dc6199d36194", "force": true, "format": "json"}]: dispatch Nov 23 05:11:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f4b0408c-423f-4414-8b75-dc6199d36194, vol_name:cephfs) < "" Nov 23 05:11:07 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f4b0408c-423f-4414-8b75-dc6199d36194'' moved to trashcan Nov 23 05:11:07 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:07 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f4b0408c-423f-4414-8b75-dc6199d36194, vol_name:cephfs) < "" Nov 23 05:11:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 106 KiB/s wr, 6 op/s Nov 23 05:11:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:08 localhost nova_compute[280939]: 2025-11-23 10:11:08.532 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "73463559-de38-4e65-91fe-e256e1993ef1", "auth_id": "admin", "format": "json"}]: dispatch Nov 23 05:11:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:11:09 localhost ceph-mgr[286671]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist Nov 23 05:11:09 
localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:11:09 localhost ceph-mgr[286671]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Nov 23 05:11:09 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:09.620+0000 7f9cc7b08640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Nov 23 05:11:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 938 B/s rd, 131 KiB/s wr, 8 op/s Nov 23 05:11:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "73463559-de38-4e65-91fe-e256e1993ef1", "format": "json"}]: dispatch Nov 23 05:11:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:73463559-de38-4e65-91fe-e256e1993ef1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:73463559-de38-4e65-91fe-e256e1993ef1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:09 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73463559-de38-4e65-91fe-e256e1993ef1' of type subvolume Nov 23 05:11:09 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:09.700+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73463559-de38-4e65-91fe-e256e1993ef1' of type subvolume Nov 23 05:11:09 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "73463559-de38-4e65-91fe-e256e1993ef1", "force": true, "format": "json"}]: dispatch Nov 23 05:11:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:11:09 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1'' moved to trashcan Nov 23 05:11:09 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:09 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73463559-de38-4e65-91fe-e256e1993ef1, vol_name:cephfs) < "" Nov 23 05:11:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:09.751 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:11:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:09.752 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:11:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:09.752 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:11:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:11:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:11:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:11:09 localhost nova_compute[280939]: 2025-11-23 10:11:09.894 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:09 localhost systemd[1]: tmp-crun.JcQe1u.mount: Deactivated successfully. Nov 23 05:11:09 localhost podman[327431]: 2025-11-23 10:11:09.935111769 +0000 UTC m=+0.123269519 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118) Nov 23 05:11:09 localhost podman[327433]: 2025-11-23 10:11:09.950174345 +0000 UTC m=+0.127733350 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2) Nov 23 05:11:09 localhost podman[327431]: 2025-11-23 10:11:09.971235761 +0000 UTC m=+0.159393511 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Nov 23 05:11:09 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. 
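Editor's note: the "Started /usr/bin/podman healthcheck run <id>" / "container exec_died" / "<id>.service: Deactivated successfully" triplets above are podman's timer-driven healthchecks: a transient systemd unit runs "podman healthcheck run" against the container and the configured '/openstack/healthcheck' test executes inside it. The following is a small hypothetical helper (not part of EDPM) that runs the same command for the containers named in the log and reports pass/fail from the exit code.

    # Sketch: poll the same podman healthchecks the transient systemd units run.
    # "podman healthcheck run" exits 0 when the container's healthcheck passes.
    import subprocess

    CONTAINERS = ["ovn_metadata_agent", "multipathd", "ovn_controller", "node_exporter"]

    def healthy(name):
        proc = subprocess.run(["podman", "healthcheck", "run", name],
                              capture_output=True, text=True)
        return proc.returncode == 0

    for name in CONTAINERS:
        print(f"{name}: {'healthy' if healthy(name) else 'unhealthy'}")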
Nov 23 05:11:10 localhost podman[327432]: 2025-11-23 10:11:10.045404476 +0000 UTC m=+0.224431498 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:11:10 localhost podman[327432]: 2025-11-23 10:11:10.06037928 +0000 UTC m=+0.239406282 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:11:10 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
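Editor's note: the node_exporter container above publishes its metrics on host port 9100 ('ports': ['9100:9100']) with most collectors disabled and the systemd collector filtered to (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. A hedged sketch that scrapes that endpoint and counts samples by metric prefix; the endpoint and port come from the config_data above, the parsing is purely illustrative.

    # Sketch: scrape the node_exporter endpoint published on 9100 (per the
    # config_data above) and count exposition-format samples by prefix.
    from collections import Counter
    from urllib.request import urlopen

    with urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        text = resp.read().decode()

    counts = Counter()
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue                                  # skip HELP/TYPE comments
        metric = line.split("{", 1)[0].split(" ", 1)[0]
        prefix = metric.split("_")[1] if "_" in metric else metric
        counts[prefix] += 1

    for prefix, n in counts.most_common(10):
        print(f"{prefix:20s} {n}")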
Nov 23 05:11:10 localhost podman[327433]: 2025-11-23 10:11:10.07459812 +0000 UTC m=+0.252157155 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:11:10 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:11:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:11:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb, vol_name:cephfs) < "" Nov 23 05:11:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb/.meta.tmp' Nov 23 05:11:10 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb/.meta.tmp' to config b'/volumes/_nogroup/b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb/.meta' Nov 23 05:11:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb, vol_name:cephfs) < "" Nov 23 05:11:10 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb", "format": "json"}]: dispatch Nov 23 05:11:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb, vol_name:cephfs) < "" Nov 23 05:11:10 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume 
getpath, sub_name:b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb, vol_name:cephfs) < "" Nov 23 05:11:10 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:11:10 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:11:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 84 KiB/s wr, 5 op/s Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.580 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.584 
12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:11:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:11:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0. Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.512280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70 Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673512347, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1269, "num_deletes": 255, "total_data_size": 1274095, "memory_usage": 1301992, "flush_reason": "Manual Compaction"} Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673522539, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1008116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38857, "largest_seqno": 40125, "table_properties": {"data_size": 1003172, "index_size": 2223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 14202, "raw_average_key_size": 21, "raw_value_size": 991949, "raw_average_value_size": 1528, "num_data_blocks": 98, "num_entries": 649, "num_filter_entries": 649, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892603, "oldest_key_time": 1763892603, "file_creation_time": 1763892673, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}} Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 10342 microseconds, and 5503 cpu microseconds. Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
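Editor's note: the ceph-mon's embedded RocksDB reports its flushes and compactions as "EVENT_LOG_v1 {json}" payloads (flush_started, table_file_creation, compaction_finished, ...), and this compaction cycle continues in the entries that follow. Below is a small sketch, assuming one journal entry per line on stdin (e.g. plain journalctl output), that pulls those JSON payloads back out and prints a one-line summary per event; the field names are the ones visible in the log above.

    # Sketch: recover RocksDB EVENT_LOG_v1 JSON payloads from journal text on
    # stdin and summarize them (field names as logged above).
    import json
    import re
    import sys

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        kind = ev.get("event", "?")
        if kind == "table_file_creation":
            print(f"job {ev['job']}: wrote file #{ev['file_number']} "
                  f"({ev['file_size']} bytes)")
        elif kind == "compaction_finished":
            print(f"job {ev['job']}: compaction -> L{ev['output_level']}, "
                  f"{ev['total_output_size']} bytes in "
                  f"{ev['compaction_time_micros']} us")
        else:
            print(f"job {ev.get('job', '?')}: {kind}")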
Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.522612) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1008116 bytes OK Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.522653) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.525264) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.525287) EVENT_LOG_v1 {"time_micros": 1763892673525280, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.525322) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1268041, prev total WAL file size 1268365, number of live WAL files 2. Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.526122) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323538' seq:72057594037927935, type:22 .. '6D6772737461740034353131' seq:0, type:0; will stop at (end) Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(984KB)], [69(17MB)] Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673526184, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 19632763, "oldest_snapshot_seqno": -1} Nov 23 05:11:13 localhost nova_compute[280939]: 2025-11-23 10:11:13.565 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 14509 keys, 17739815 bytes, temperature: kUnknown Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673613079, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 17739815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17657858, "index_size": 44608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36293, "raw_key_size": 389250, "raw_average_key_size": 26, "raw_value_size": 17412210, "raw_average_value_size": 1200, "num_data_blocks": 1652, "num_entries": 14509, 
"num_filter_entries": 14509, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892673, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}} Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.613462) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 17739815 bytes Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.615519) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.6 rd, 203.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 17.8 +0.0 blob) out(16.9 +0.0 blob), read-write-amplify(37.1) write-amplify(17.6) OK, records in: 15016, records dropped: 507 output_compression: NoCompression Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.615554) EVENT_LOG_v1 {"time_micros": 1763892673615541, "job": 42, "event": "compaction_finished", "compaction_time_micros": 87015, "compaction_time_cpu_micros": 55421, "output_level": 6, "num_output_files": 1, "total_output_size": 17739815, "num_input_records": 15016, "num_output_records": 14509, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673615845, "job": 42, "event": "table_file_deletion", "file_number": 71} Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673618934, "job": 42, "event": "table_file_deletion", "file_number": 69} Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.526010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.619039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 
2025/11/23-10:11:13.619045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.619048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.619051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:11:13 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:11:13.619054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:11:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 84 KiB/s wr, 5 op/s Nov 23 05:11:13 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb", "format": "json"}]: dispatch Nov 23 05:11:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:13 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb' of type subvolume Nov 23 05:11:13 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:13.960+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb' of type subvolume Nov 23 05:11:13 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb", "force": true, "format": "json"}]: dispatch Nov 23 05:11:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb, vol_name:cephfs) < "" Nov 23 05:11:13 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb'' moved to trashcan Nov 23 05:11:13 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:13 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b1a1fa7e-58c0-4be2-8fb7-cb23ab1000cb, vol_name:cephfs) < "" Nov 23 05:11:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:14.084 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 
'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:11:14 localhost nova_compute[280939]: 2025-11-23 10:11:14.085 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:14.086 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:11:14 localhost nova_compute[280939]: 2025-11-23 10:11:14.938 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 131 KiB/s wr, 7 op/s Nov 23 05:11:15 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7d4a8f6e-3803-4134-a981-5fbbe44d23bd", "format": "json"}]: dispatch Nov 23 05:11:15 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:16 localhost nova_compute[280939]: 2025-11-23 10:11:16.381 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:11:16 localhost podman[327496]: 2025-11-23 10:11:16.904941685 +0000 UTC m=+0.089404568 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git) Nov 23 05:11:16 localhost podman[327496]: 2025-11-23 10:11:16.924270236 +0000 UTC m=+0.108733089 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-type=git, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc.) Nov 23 05:11:16 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 05:11:17 localhost podman[239764]: time="2025-11-23T10:11:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:11:17 localhost podman[239764]: @ - - [23/Nov/2025:10:11:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:11:17 localhost nova_compute[280939]: 2025-11-23 10:11:17.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:11:17 localhost podman[239764]: @ - - [23/Nov/2025:10:11:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18779 "" "Go-http-client/1.1" Nov 23 05:11:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 72 KiB/s wr, 4 op/s Nov 23 05:11:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7d4a8f6e-3803-4134-a981-5fbbe44d23bd", "format": "json"}]: dispatch Nov 23 05:11:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, vol_name:cephfs) < "" Nov 23 05:11:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, vol_name:cephfs) < "" Nov 23 05:11:18 
localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:11:18 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:11:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e5534047-ca04-4edc-8696-f70777010bec", "snap_name": "cde814c2-996b-4a4f-8643-46a807416ffa_53a101d9-305d-472f-aad0-e1fd6eb94453", "force": true, "format": "json"}]: dispatch Nov 23 05:11:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cde814c2-996b-4a4f-8643-46a807416ffa_53a101d9-305d-472f-aad0-e1fd6eb94453, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:11:18 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e5534047-ca04-4edc-8696-f70777010bec/.meta.tmp' Nov 23 05:11:18 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e5534047-ca04-4edc-8696-f70777010bec/.meta.tmp' to config b'/volumes/_nogroup/e5534047-ca04-4edc-8696-f70777010bec/.meta' Nov 23 05:11:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cde814c2-996b-4a4f-8643-46a807416ffa_53a101d9-305d-472f-aad0-e1fd6eb94453, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:11:18 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e5534047-ca04-4edc-8696-f70777010bec", "snap_name": "cde814c2-996b-4a4f-8643-46a807416ffa", "force": true, "format": "json"}]: dispatch Nov 23 05:11:18 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cde814c2-996b-4a4f-8643-46a807416ffa, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:11:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:18 localhost nova_compute[280939]: 2025-11-23 10:11:18.583 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:19 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:19.088 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:11:19 localhost nova_compute[280939]: 2025-11-23 10:11:19.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:11:19 localhost nova_compute[280939]: 2025-11-23 10:11:19.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:11:19 localhost nova_compute[280939]: 2025-11-23 10:11:19.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:11:19 localhost nova_compute[280939]: 2025-11-23 10:11:19.150 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:11:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e5534047-ca04-4edc-8696-f70777010bec/.meta.tmp' Nov 23 05:11:19 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e5534047-ca04-4edc-8696-f70777010bec/.meta.tmp' to config b'/volumes/_nogroup/e5534047-ca04-4edc-8696-f70777010bec/.meta' Nov 23 05:11:19 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cde814c2-996b-4a4f-8643-46a807416ffa, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:11:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 94 KiB/s wr, 5 op/s Nov 23 05:11:19 localhost nova_compute[280939]: 2025-11-23 10:11:19.965 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:20 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a8c9a605-348f-42c0-9962-a9e186065223", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:11:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8c9a605-348f-42c0-9962-a9e186065223, vol_name:cephfs) < "" Nov 23 05:11:20 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8c9a605-348f-42c0-9962-a9e186065223/.meta.tmp' Nov 23 05:11:20 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8c9a605-348f-42c0-9962-a9e186065223/.meta.tmp' to config b'/volumes/_nogroup/a8c9a605-348f-42c0-9962-a9e186065223/.meta' Nov 23 05:11:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8c9a605-348f-42c0-9962-a9e186065223, vol_name:cephfs) < "" Nov 23 05:11:20 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : 
from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a8c9a605-348f-42c0-9962-a9e186065223", "format": "json"}]: dispatch Nov 23 05:11:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8c9a605-348f-42c0-9962-a9e186065223, vol_name:cephfs) < "" Nov 23 05:11:20 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8c9a605-348f-42c0-9962-a9e186065223, vol_name:cephfs) < "" Nov 23 05:11:20 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:11:20 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:11:21 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e5534047-ca04-4edc-8696-f70777010bec", "format": "json"}]: dispatch Nov 23 05:11:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e5534047-ca04-4edc-8696-f70777010bec, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:21 localhost nova_compute[280939]: 2025-11-23 10:11:21.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:11:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e5534047-ca04-4edc-8696-f70777010bec, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:21 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:21.133+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e5534047-ca04-4edc-8696-f70777010bec' of type subvolume Nov 23 05:11:21 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e5534047-ca04-4edc-8696-f70777010bec' of type subvolume Nov 23 05:11:21 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e5534047-ca04-4edc-8696-f70777010bec", "force": true, "format": "json"}]: dispatch Nov 23 05:11:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 05:11:21 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e5534047-ca04-4edc-8696-f70777010bec'' moved to trashcan Nov 23 05:11:21 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:21 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e5534047-ca04-4edc-8696-f70777010bec, vol_name:cephfs) < "" Nov 23 
05:11:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 68 KiB/s wr, 3 op/s Nov 23 05:11:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:11:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:11:21 localhost podman[327515]: 2025-11-23 10:11:21.915439523 +0000 UTC m=+0.097163143 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true) Nov 23 05:11:21 localhost podman[327515]: 2025-11-23 10:11:21.924734187 +0000 UTC m=+0.106457807 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:11:21 localhost systemd[1]: tmp-crun.FpsQnc.mount: Deactivated successfully. Nov 23 05:11:21 localhost podman[327516]: 2025-11-23 10:11:21.963977638 +0000 UTC m=+0.143442976 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 05:11:21 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:11:21 localhost podman[327516]: 2025-11-23 10:11:21.998018084 +0000 UTC m=+0.177483462 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:11:22 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
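The "Started /usr/bin/podman healthcheck run <id>" / "container health_status ... health_status=healthy" / "exec_died" / "Deactivated successfully" groups above are systemd transient units invoking each container's configured probe. A minimal sketch of the same check from the host side, assuming the podman CLI is on PATH and using the container_name labels seen in these entries; a nonzero exit is treated as unhealthy here.

import subprocess

# container_name labels observed in the surrounding log entries
CONTAINERS = ["ceilometer_agent_compute", "podman_exporter"]

def probe(name: str) -> bool:
    # 'podman healthcheck run' executes the container's configured healthcheck
    # (here: /openstack/healthcheck ...) and exits 0 when the check passes.
    result = subprocess.run(
        ["podman", "healthcheck", "run", name],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for name in CONTAINERS:
        print(f"{name}: {'healthy' if probe(name) else 'unhealthy'}")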
Nov 23 05:11:23 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "a8c9a605-348f-42c0-9962-a9e186065223", "new_size": 2147483648, "format": "json"}]: dispatch Nov 23 05:11:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:a8c9a605-348f-42c0-9962-a9e186065223, vol_name:cephfs) < "" Nov 23 05:11:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:11:23 Nov 23 05:11:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:11:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:11:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['images', 'manila_metadata', 'backups', 'volumes', '.mgr', 'vms', 'manila_data'] Nov 23 05:11:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:11:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e267 do_prune osdmap full prune enabled Nov 23 05:11:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e268 e268: 6 total, 6 up, 6 in Nov 23 05:11:23 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:a8c9a605-348f-42c0-9962-a9e186065223, vol_name:cephfs) < "" Nov 23 05:11:23 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e268: 6 total, 6 up, 6 in Nov 23 05:11:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:11:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:11:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:11:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:11:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
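The audit entries above show client.openstack (manila) driving the mgr volumes module through "fs subvolume create", "getpath", "resize", "snapshot rm" and "rm" commands. A rough sketch of the same lifecycle via the ceph CLI, assuming a reachable cluster and the client.openstack identity seen in these entries; the subvolume name is hypothetical and option spellings vary slightly between Ceph releases (check `ceph fs subvolume create -h`).

import subprocess

def ceph(*args: str) -> str:
    # Same client identity and conf path that appear in the audit entries.
    cmd = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

SUB = "demo-share"  # hypothetical subvolume name, not taken from the log

# 1 GiB isolated subvolume, mirroring the JSON arguments in the mgr audit entries
ceph("fs", "subvolume", "create", "cephfs", SUB, "1073741824",
     "--namespace-isolated", "--mode", "0755")

# The path manila exports to clients ("/volumes/_nogroup/<name>/<uuid>")
print("subvolume path:", ceph("fs", "subvolume", "getpath", "cephfs", SUB))

# Force-removal, the same call that ends each share lifecycle in the log
ceph("fs", "subvolume", "rm", "cephfs", SUB, "--force")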
Nov 23 05:11:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Nov 23 05:11:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 05:11:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:23 localhost nova_compute[280939]: 2025-11-23 10:11:23.600 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 82 KiB/s wr, 4 op/s Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0003255208333333333 quantized to 32 (current 32) Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:11:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0024043285001395867 of space, bias 4.0, pg target 1.913845486111111 quantized to 16 (current 16) Nov 23 05:11:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:11:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:11:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:11:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 
05:11:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:11:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:11:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:11:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:11:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:11:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:11:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e52: np0005532584.naxwxy(active, since 16m), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 05:11:24 localhost nova_compute[280939]: 2025-11-23 10:11:24.990 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:25 localhost nova_compute[280939]: 2025-11-23 10:11:25.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:11:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 76 KiB/s wr, 5 op/s Nov 23 05:11:26 localhost nova_compute[280939]: 2025-11-23 10:11:26.129 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:11:26 localhost nova_compute[280939]: 2025-11-23 10:11:26.131 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:11:26 localhost nova_compute[280939]: 2025-11-23 10:11:26.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:11:26 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a8c9a605-348f-42c0-9962-a9e186065223", "format": "json"}]: dispatch Nov 23 05:11:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a8c9a605-348f-42c0-9962-a9e186065223, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a8c9a605-348f-42c0-9962-a9e186065223, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:26 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:26.678+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8c9a605-348f-42c0-9962-a9e186065223' of type subvolume Nov 23 05:11:26 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8c9a605-348f-42c0-9962-a9e186065223' of type subvolume Nov 23 05:11:26 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a8c9a605-348f-42c0-9962-a9e186065223", "force": true, "format": "json"}]: dispatch Nov 23 05:11:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8c9a605-348f-42c0-9962-a9e186065223, vol_name:cephfs) < "" Nov 23 05:11:26 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a8c9a605-348f-42c0-9962-a9e186065223'' moved to trashcan Nov 23 05:11:26 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:26 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8c9a605-348f-42c0-9962-a9e186065223, vol_name:cephfs) < "" Nov 23 05:11:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 76 KiB/s wr, 5 op/s Nov 23 05:11:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e268 do_prune osdmap full prune enabled Nov 23 05:11:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e269 e269: 6 total, 6 up, 6 in Nov 23 05:11:28 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e269: 6 total, 6 up, 6 in Nov 23 05:11:28 localhost nova_compute[280939]: 2025-11-23 10:11:28.646 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.159 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.159 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.159 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.160 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.160 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:11:29 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:11:29 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1574676510' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.612 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:11:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 95 KiB/s wr, 5 op/s Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.841 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.843 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11383MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.843 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.844 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.930 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.930 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:11:29 localhost nova_compute[280939]: 2025-11-23 10:11:29.964 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:11:30 localhost nova_compute[280939]: 2025-11-23 10:11:30.023 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:11:30 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2856888675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:11:30 localhost nova_compute[280939]: 2025-11-23 10:11:30.439 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:11:30 localhost nova_compute[280939]: 2025-11-23 10:11:30.445 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:11:30 localhost nova_compute[280939]: 2025-11-23 10:11:30.461 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:11:30 localhost nova_compute[280939]: 2025-11-23 10:11:30.463 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:11:30 localhost nova_compute[280939]: 2025-11-23 10:11:30.463 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:11:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 227 MiB 
data, 1.4 GiB used, 41 GiB / 42 GiB avail; 740 B/s rd, 91 KiB/s wr, 5 op/s Nov 23 05:11:32 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0acfdcba-644a-4a19-9206-6fe311fbddf1", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:11:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:0acfdcba-644a-4a19-9206-6fe311fbddf1, vol_name:cephfs) < "" Nov 23 05:11:32 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0acfdcba-644a-4a19-9206-6fe311fbddf1/.meta.tmp' Nov 23 05:11:32 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0acfdcba-644a-4a19-9206-6fe311fbddf1/.meta.tmp' to config b'/volumes/_nogroup/0acfdcba-644a-4a19-9206-6fe311fbddf1/.meta' Nov 23 05:11:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:0acfdcba-644a-4a19-9206-6fe311fbddf1, vol_name:cephfs) < "" Nov 23 05:11:32 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0acfdcba-644a-4a19-9206-6fe311fbddf1", "format": "json"}]: dispatch Nov 23 05:11:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0acfdcba-644a-4a19-9206-6fe311fbddf1, vol_name:cephfs) < "" Nov 23 05:11:32 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0acfdcba-644a-4a19-9206-6fe311fbddf1, vol_name:cephfs) < "" Nov 23 05:11:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:11:32 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:11:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 76 KiB/s wr, 4 op/s Nov 23 05:11:33 localhost nova_compute[280939]: 2025-11-23 10:11:33.685 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
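Above, the nova resource tracker shells out to `ceph df --format=json` to size DISK_GB and then reports an inventory dict with allocation ratios to Placement. A sketch of both steps under the same assumptions (the --id openstack keyring and conf path from the log; the top-level "stats" keys are the usual ceph df JSON fields, which can differ slightly by release; the capacity rule is the conventional (total - reserved) * allocation_ratio):

import json
import subprocess

def ceph_df() -> dict:
    # The exact command the resource tracker runs in the entries above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

def effective_capacity(total, reserved, ratio):
    # Usable capacity as Placement computes it: (total - reserved) * allocation_ratio.
    return (total - reserved) * ratio

if __name__ == "__main__":
    stats = ceph_df().get("stats", {})
    gib = 1024 ** 3
    print("cluster total GiB:", stats.get("total_bytes", 0) // gib)
    print("cluster avail GiB:", stats.get("total_avail_bytes", 0) // gib)

    # Numbers copied from the inventory dict logged above.
    print("VCPU capacity:     ", effective_capacity(8, 0, 16.0))        # 128
    print("MEMORY_MB capacity:", effective_capacity(15738, 512, 1.0))   # 15226
    print("DISK_GB capacity:  ", effective_capacity(41, 1, 1.0))        # 40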
Nov 23 05:11:33 localhost podman[327604]: 2025-11-23 10:11:33.902374177 +0000 UTC m=+0.089020876 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:11:33 localhost podman[327604]: 2025-11-23 10:11:33.9068993 +0000 UTC m=+0.093545979 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:11:33 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:11:34 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5409ddfd-477f-40a3-98e6-b831b5ca3a1a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:11:34 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:34 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5409ddfd-477f-40a3-98e6-b831b5ca3a1a/.meta.tmp' Nov 23 05:11:34 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5409ddfd-477f-40a3-98e6-b831b5ca3a1a/.meta.tmp' to config b'/volumes/_nogroup/5409ddfd-477f-40a3-98e6-b831b5ca3a1a/.meta' Nov 23 05:11:34 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:34 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5409ddfd-477f-40a3-98e6-b831b5ca3a1a", "format": "json"}]: dispatch Nov 23 05:11:34 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:34 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:34 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:11:34 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:11:35 localhost nova_compute[280939]: 2025-11-23 10:11:35.055 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 54 KiB/s wr, 2 op/s Nov 23 05:11:35 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "0acfdcba-644a-4a19-9206-6fe311fbddf1", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch Nov 23 05:11:35 localhost 
ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:0acfdcba-644a-4a19-9206-6fe311fbddf1, vol_name:cephfs) < "" Nov 23 05:11:35 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:0acfdcba-644a-4a19-9206-6fe311fbddf1, vol_name:cephfs) < "" Nov 23 05:11:36 localhost openstack_network_exporter[241732]: ERROR 10:11:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:11:36 localhost openstack_network_exporter[241732]: ERROR 10:11:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:11:36 localhost openstack_network_exporter[241732]: ERROR 10:11:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:11:36 localhost openstack_network_exporter[241732]: ERROR 10:11:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:11:36 localhost openstack_network_exporter[241732]: Nov 23 05:11:36 localhost openstack_network_exporter[241732]: ERROR 10:11:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:11:36 localhost openstack_network_exporter[241732]: Nov 23 05:11:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 54 KiB/s wr, 2 op/s Nov 23 05:11:37 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5409ddfd-477f-40a3-98e6-b831b5ca3a1a", "snap_name": "59e67742-1f77-44c8-b4a2-e315dfd41df9", "format": "json"}]: dispatch Nov 23 05:11:37 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:59e67742-1f77-44c8-b4a2-e315dfd41df9, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:38 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:59e67742-1f77-44c8-b4a2-e315dfd41df9, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:38 localhost nova_compute[280939]: 2025-11-23 10:11:38.740 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:39 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0acfdcba-644a-4a19-9206-6fe311fbddf1", "format": "json"}]: dispatch Nov 23 05:11:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0acfdcba-644a-4a19-9206-6fe311fbddf1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:0acfdcba-644a-4a19-9206-6fe311fbddf1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:39 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0acfdcba-644a-4a19-9206-6fe311fbddf1' of type subvolume Nov 23 05:11:39 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:39.252+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0acfdcba-644a-4a19-9206-6fe311fbddf1' of type subvolume Nov 23 05:11:39 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0acfdcba-644a-4a19-9206-6fe311fbddf1", "force": true, "format": "json"}]: dispatch Nov 23 05:11:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0acfdcba-644a-4a19-9206-6fe311fbddf1, vol_name:cephfs) < "" Nov 23 05:11:39 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0acfdcba-644a-4a19-9206-6fe311fbddf1'' moved to trashcan Nov 23 05:11:39 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:39 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0acfdcba-644a-4a19-9206-6fe311fbddf1, vol_name:cephfs) < "" Nov 23 05:11:39 localhost nova_compute[280939]: 2025-11-23 10:11:39.460 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:11:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 183 B/s rd, 92 KiB/s wr, 4 op/s Nov 23 05:11:40 localhost nova_compute[280939]: 2025-11-23 10:11:40.102 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:11:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:11:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
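The openstack_network_exporter errors a little earlier ("no control socket files found for the ovs db server" / "ovn-northd") just mean the appctl-style calls could not find a <daemon>.<pid>.ctl socket in the runtime directories mounted into the container. A small sketch of the same lookup, assuming the conventional /run/openvswitch and /run/ovn runtime directories; the real paths depend on the volume mounts shown in the exporter's config_data above.

import glob
import os

# Conventional runtime directories; the exporter container maps
# /var/run/openvswitch and /var/lib/openvswitch/ovn into paths like these.
RUNDIRS = {
    "ovsdb-server": "/run/openvswitch",
    "ovs-vswitchd": "/run/openvswitch",
    "ovn-northd": "/run/ovn",
}

def control_socket(daemon: str):
    # Each daemon creates a <name>.<pid>.ctl unix socket in its run directory.
    pattern = os.path.join(RUNDIRS[daemon], f"{daemon}.*.ctl")
    matches = glob.glob(pattern)
    return matches[0] if matches else None

if __name__ == "__main__":
    for daemon in RUNDIRS:
        sock = control_socket(daemon)
        print(f"{daemon}: {sock or 'no control socket files found'}")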
Nov 23 05:11:40 localhost podman[327622]: 2025-11-23 10:11:40.909065419 +0000 UTC m=+0.086806786 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 05:11:40 localhost podman[327622]: 2025-11-23 10:11:40.92046804 +0000 UTC m=+0.098209397 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:11:40 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:11:41 localhost systemd[1]: tmp-crun.rEbmPZ.mount: Deactivated successfully. Nov 23 05:11:41 localhost podman[327624]: 2025-11-23 10:11:41.023142876 +0000 UTC m=+0.195572555 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller) Nov 23 05:11:41 localhost podman[327624]: 2025-11-23 10:11:41.060363904 +0000 UTC m=+0.232793573 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 23 05:11:41 localhost podman[327623]: 2025-11-23 10:11:41.067765207 +0000 UTC m=+0.243290684 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:11:41 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:11:41 localhost podman[327623]: 2025-11-23 10:11:41.102548618 +0000 UTC m=+0.278074145 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:11:41 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
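The three healthcheck entries above follow the same cycle: systemd starts a transient unit that runs `/usr/bin/podman healthcheck run <container-id>`, podman records a `health_status=healthy` event and an `exec_died` event for the exec session, and the unit then deactivates. A minimal sketch of running the same probe by hand for the container names seen here, assuming podman is installed on the host; `podman healthcheck run` exits 0 when the configured check passes and non-zero otherwise:

    # Minimal sketch, assuming podman is installed and the container names below
    # (taken from the container_name labels above) exist on this host.
    import subprocess

    for name in ("multipathd", "ovn_controller", "node_exporter"):
        # Executes the container's configured healthcheck command; exit code 0
        # means the check passed, non-zero means it failed or could not run.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        status = "healthy" if rc == 0 else f"not healthy (rc={rc})"
        print(f"{name}: {status}")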
Nov 23 05:11:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 64 KiB/s wr, 2 op/s Nov 23 05:11:42 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5409ddfd-477f-40a3-98e6-b831b5ca3a1a", "snap_name": "59e67742-1f77-44c8-b4a2-e315dfd41df9_a9330eec-98f1-4d62-96d0-b42e1209eee2", "force": true, "format": "json"}]: dispatch Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59e67742-1f77-44c8-b4a2-e315dfd41df9_a9330eec-98f1-4d62-96d0-b42e1209eee2, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5409ddfd-477f-40a3-98e6-b831b5ca3a1a/.meta.tmp' Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5409ddfd-477f-40a3-98e6-b831b5ca3a1a/.meta.tmp' to config b'/volumes/_nogroup/5409ddfd-477f-40a3-98e6-b831b5ca3a1a/.meta' Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59e67742-1f77-44c8-b4a2-e315dfd41df9_a9330eec-98f1-4d62-96d0-b42e1209eee2, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:42 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5409ddfd-477f-40a3-98e6-b831b5ca3a1a", "snap_name": "59e67742-1f77-44c8-b4a2-e315dfd41df9", "force": true, "format": "json"}]: dispatch Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59e67742-1f77-44c8-b4a2-e315dfd41df9, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5409ddfd-477f-40a3-98e6-b831b5ca3a1a/.meta.tmp' Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5409ddfd-477f-40a3-98e6-b831b5ca3a1a/.meta.tmp' to config b'/volumes/_nogroup/5409ddfd-477f-40a3-98e6-b831b5ca3a1a/.meta' Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59e67742-1f77-44c8-b4a2-e315dfd41df9, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:42 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "05500d0b-b770-4622-aeb8-7264743beada", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume 
create, size:1073741824, sub_name:05500d0b-b770-4622-aeb8-7264743beada, vol_name:cephfs) < "" Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05500d0b-b770-4622-aeb8-7264743beada/.meta.tmp' Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05500d0b-b770-4622-aeb8-7264743beada/.meta.tmp' to config b'/volumes/_nogroup/05500d0b-b770-4622-aeb8-7264743beada/.meta' Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05500d0b-b770-4622-aeb8-7264743beada, vol_name:cephfs) < "" Nov 23 05:11:42 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "05500d0b-b770-4622-aeb8-7264743beada", "format": "json"}]: dispatch Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05500d0b-b770-4622-aeb8-7264743beada, vol_name:cephfs) < "" Nov 23 05:11:42 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05500d0b-b770-4622-aeb8-7264743beada, vol_name:cephfs) < "" Nov 23 05:11:42 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 23 05:11:42 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.25577 172.18.0.34:0/4122010736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 23 05:11:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e269 do_prune osdmap full prune enabled Nov 23 05:11:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e270 e270: 6 total, 6 up, 6 in Nov 23 05:11:43 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e270: 6 total, 6 up, 6 in Nov 23 05:11:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 76 KiB/s wr, 3 op/s Nov 23 05:11:43 localhost nova_compute[280939]: 2025-11-23 10:11:43.774 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:45 localhost nova_compute[280939]: 2025-11-23 10:11:45.144 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:45 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5409ddfd-477f-40a3-98e6-b831b5ca3a1a", "format": "json"}]: dispatch Nov 23 05:11:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:45 localhost ceph-mgr[286671]: [volumes INFO 
volumes.module] Finishing _cmd_fs_clone_status(clone_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:45 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:45.355+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5409ddfd-477f-40a3-98e6-b831b5ca3a1a' of type subvolume Nov 23 05:11:45 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5409ddfd-477f-40a3-98e6-b831b5ca3a1a' of type subvolume Nov 23 05:11:45 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5409ddfd-477f-40a3-98e6-b831b5ca3a1a", "force": true, "format": "json"}]: dispatch Nov 23 05:11:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:45 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5409ddfd-477f-40a3-98e6-b831b5ca3a1a'' moved to trashcan Nov 23 05:11:45 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:45 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5409ddfd-477f-40a3-98e6-b831b5ca3a1a, vol_name:cephfs) < "" Nov 23 05:11:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 83 KiB/s wr, 5 op/s Nov 23 05:11:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "05500d0b-b770-4622-aeb8-7264743beada", "format": "json"}]: dispatch Nov 23 05:11:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:05500d0b-b770-4622-aeb8-7264743beada, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:05500d0b-b770-4622-aeb8-7264743beada, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:46 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:46.667+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05500d0b-b770-4622-aeb8-7264743beada' of type subvolume Nov 23 05:11:46 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05500d0b-b770-4622-aeb8-7264743beada' of type subvolume Nov 23 05:11:46 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "05500d0b-b770-4622-aeb8-7264743beada", "force": true, "format": "json"}]: dispatch Nov 23 05:11:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:05500d0b-b770-4622-aeb8-7264743beada, vol_name:cephfs) < "" Nov 23 05:11:46 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/05500d0b-b770-4622-aeb8-7264743beada'' moved to trashcan Nov 23 05:11:46 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:46 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05500d0b-b770-4622-aeb8-7264743beada, vol_name:cephfs) < "" Nov 23 05:11:47 localhost podman[239764]: time="2025-11-23T10:11:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:11:47 localhost podman[239764]: @ - - [23/Nov/2025:10:11:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:11:47 localhost podman[239764]: @ - - [23/Nov/2025:10:11:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18775 "" "Go-http-client/1.1" Nov 23 05:11:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 83 KiB/s wr, 5 op/s Nov 23 05:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:11:47 localhost podman[327688]: 2025-11-23 10:11:47.880553526 +0000 UTC m=+0.067722772 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container) Nov 23 05:11:47 localhost podman[327688]: 2025-11-23 10:11:47.921426839 +0000 UTC m=+0.108596115 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9) Nov 23 05:11:47 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 05:11:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e270 do_prune osdmap full prune enabled Nov 23 05:11:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e271 e271: 6 total, 6 up, 6 in Nov 23 05:11:48 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e271: 6 total, 6 up, 6 in Nov 23 05:11:48 localhost nova_compute[280939]: 2025-11-23 10:11:48.817 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 106 KiB/s wr, 7 op/s Nov 23 05:11:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7d4a8f6e-3803-4134-a981-5fbbe44d23bd", "format": "json"}]: dispatch Nov 23 05:11:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:49 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7d4a8f6e-3803-4134-a981-5fbbe44d23bd", "force": true, "format": "json"}]: dispatch Nov 23 05:11:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, vol_name:cephfs) < "" Nov 23 
05:11:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7d4a8f6e-3803-4134-a981-5fbbe44d23bd'' moved to trashcan Nov 23 05:11:49 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:49 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7d4a8f6e-3803-4134-a981-5fbbe44d23bd, vol_name:cephfs) < "" Nov 23 05:11:50 localhost nova_compute[280939]: 2025-11-23 10:11:50.174 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1018 B/s rd, 106 KiB/s wr, 7 op/s Nov 23 05:11:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:11:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:11:52 localhost systemd[1]: tmp-crun.83RzBT.mount: Deactivated successfully. Nov 23 05:11:52 localhost podman[327725]: 2025-11-23 10:11:52.315932561 +0000 UTC m=+0.096116721 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 23 05:11:52 localhost podman[327725]: 2025-11-23 10:11:52.325314238 +0000 UTC m=+0.105498378 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm) Nov 23 05:11:52 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:11:52 localhost systemd[1]: tmp-crun.phv23Q.mount: Deactivated successfully. Nov 23 05:11:52 localhost podman[327726]: 2025-11-23 10:11:52.432315921 +0000 UTC m=+0.209005620 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 23 05:11:52 localhost podman[327726]: 2025-11-23 10:11:52.441375187 +0000 UTC m=+0.218064896 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:11:52 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 05:11:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:11:53 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:11:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:11:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:11:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:11:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:11:53 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev ca303074-a654-41b1-8f02-a0ab54b8d0a2 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:11:53 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev ca303074-a654-41b1-8f02-a0ab54b8d0a2 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:11:53 localhost ceph-mgr[286671]: [progress INFO root] Completed event ca303074-a654-41b1-8f02-a0ab54b8d0a2 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:11:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:11:53 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:11:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "97e78d12-95be-4f55-b739-9cd5b5c20fd4", "snap_name": "4827a0db-6fdf-413e-96d9-4e5308156408_7604d04a-aeec-4e7c-8cb1-106f4bdc4a0a", "force": true, "format": "json"}]: dispatch Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4827a0db-6fdf-413e-96d9-4e5308156408_7604d04a-aeec-4e7c-8cb1-106f4bdc4a0a, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta.tmp' Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta.tmp' to config b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta' Nov 23 
05:11:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4827a0db-6fdf-413e-96d9-4e5308156408_7604d04a-aeec-4e7c-8cb1-106f4bdc4a0a, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:11:53 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "97e78d12-95be-4f55-b739-9cd5b5c20fd4", "snap_name": "4827a0db-6fdf-413e-96d9-4e5308156408", "force": true, "format": "json"}]: dispatch Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4827a0db-6fdf-413e-96d9-4e5308156408, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta.tmp' Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta.tmp' to config b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4/.meta' Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4827a0db-6fdf-413e-96d9-4e5308156408, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
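Each `fs subvolume snapshot rm` above rewrites the subvolume's `.meta` file by writing `.meta.tmp` and then renaming it over `.meta`, so a crash mid-write never leaves a truncated config behind. A generic illustration of that write-then-rename idiom follows; this is not the ceph-mgr metadata_manager code itself, only the same pattern in plain Python:

    # Generic illustration of the write-to-temp-then-rename idiom described by
    # the metadata_manager messages above (not the ceph-mgr code itself): the
    # rename is atomic, so readers never observe a partially written .meta file.
    import os

    def atomic_write(path: str, data: bytes) -> None:
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes hit disk before the swap
        os.replace(tmp, path)      # atomically replaces the old file, if any

    atomic_write("/tmp/example.meta", b"[GLOBAL]\nversion = 2\n")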
Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Nov 23 05:11:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 05:11:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:53 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:11:53 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:11:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 85 KiB/s wr, 6 op/s Nov 23 05:11:53 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:11:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:11:53 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:11:53 localhost nova_compute[280939]: 2025-11-23 10:11:53.838 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:11:55 localhost nova_compute[280939]: 2025-11-23 10:11:55.208 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 86 KiB/s wr, 5 op/s Nov 23 05:11:56 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:11:56.301 262301 INFO neutron.agent.linux.ip_lib [None req-e8c339c3-c260-42a8-8040-784bb56ea421 - - - - - -] Device tap63a77f2c-22 cannot be used as it has no MAC address#033[00m Nov 23 05:11:56 localhost nova_compute[280939]: 2025-11-23 10:11:56.358 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:56 localhost kernel: device tap63a77f2c-22 entered promiscuous mode Nov 23 05:11:56 localhost NetworkManager[5966]: [1763892716.3667] manager: (tap63a77f2c-22): new Generic device (/org/freedesktop/NetworkManager/Devices/59) Nov 23 05:11:56 localhost ovn_controller[153771]: 2025-11-23T10:11:56Z|00367|binding|INFO|Claiming lport 63a77f2c-221e-4f0c-b597-18930ae41adc for this chassis. Nov 23 05:11:56 localhost nova_compute[280939]: 2025-11-23 10:11:56.366 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:56 localhost ovn_controller[153771]: 2025-11-23T10:11:56Z|00368|binding|INFO|63a77f2c-221e-4f0c-b597-18930ae41adc: Claiming unknown Nov 23 05:11:56 localhost systemd-udevd[327846]: Network interface NamePolicy= disabled on kernel command line. 
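The DHCP port setup above begins with neutron's ip_lib rejecting tap63a77f2c-22 because it reports no MAC address yet; the kernel, NetworkManager, and ovn_controller lines that follow show the tap appearing and the logical port being claimed. An illustrative way to read a device's MAC, or detect that the device is missing or has none, assuming the pyroute2 package that neutron's ip_lib builds on is available; this is a sketch, not neutron's actual check:

    # Illustrative only (not neutron's ip_lib implementation): look up a device's
    # MAC address, or report that the device is absent or has no address yet.
    # Assumes the pyroute2 package is installed.
    from pyroute2 import IPRoute

    def device_mac(ifname: str):
        with IPRoute() as ipr:
            idx = ipr.link_lookup(ifname=ifname)
            if not idx:
                return None                       # interface not present (yet)
            link = ipr.get_links(idx[0])[0]
            return link.get_attr("IFLA_ADDRESS")  # e.g. 'fa:16:3e:...' or None

    print(device_mac("tap63a77f2c-22"))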
Nov 23 05:11:56 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:56.379 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-c387ee99-0478-4ba5-ad6b-f5b7e502389e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c387ee99-0478-4ba5-ad6b-f5b7e502389e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4df21a31ac7e4292b0bbda8819ee47c0', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cd856a6-134f-4507-98ec-2915cf9fe691, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=63a77f2c-221e-4f0c-b597-18930ae41adc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:11:56 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:56.382 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 63a77f2c-221e-4f0c-b597-18930ae41adc in datapath c387ee99-0478-4ba5-ad6b-f5b7e502389e bound to our chassis#033[00m Nov 23 05:11:56 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:56.383 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port e2f45e76-c37a-4506-ac10-f35f3043ffa5 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:11:56 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:56.384 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c387ee99-0478-4ba5-ad6b-f5b7e502389e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:11:56 localhost nova_compute[280939]: 2025-11-23 10:11:56.387 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:56 localhost ovn_metadata_agent[159410]: 2025-11-23 10:11:56.387 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[c91a52a8-1621-4db7-90a4-54f0142af0a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:11:56 localhost journal[229336]: ethtool ioctl error on tap63a77f2c-22: No such device Nov 23 05:11:56 localhost ovn_controller[153771]: 2025-11-23T10:11:56Z|00369|binding|INFO|Setting lport 63a77f2c-221e-4f0c-b597-18930ae41adc ovn-installed in OVS Nov 23 05:11:56 localhost ovn_controller[153771]: 2025-11-23T10:11:56Z|00370|binding|INFO|Setting lport 63a77f2c-221e-4f0c-b597-18930ae41adc up in Southbound Nov 23 05:11:56 localhost journal[229336]: ethtool ioctl error on tap63a77f2c-22: No such device Nov 23 05:11:56 localhost nova_compute[280939]: 2025-11-23 10:11:56.416 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:56 localhost nova_compute[280939]: 2025-11-23 10:11:56.417 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:56 localhost journal[229336]: ethtool ioctl error on tap63a77f2c-22: No such device Nov 23 05:11:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "97e78d12-95be-4f55-b739-9cd5b5c20fd4", "format": "json"}]: dispatch Nov 23 05:11:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:56 localhost journal[229336]: ethtool ioctl error on tap63a77f2c-22: No such device Nov 23 05:11:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 23 05:11:56 localhost ceph-mgr[286671]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '97e78d12-95be-4f55-b739-9cd5b5c20fd4' of type subvolume Nov 23 05:11:56 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532584-naxwxy[286667]: 2025-11-23T10:11:56.426+0000 7f9cc7b08640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '97e78d12-95be-4f55-b739-9cd5b5c20fd4' of type subvolume Nov 23 05:11:56 localhost journal[229336]: ethtool ioctl error on tap63a77f2c-22: No such device Nov 23 05:11:56 localhost ceph-mgr[286671]: log_channel(audit) log [DBG] : from='client.25577 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "97e78d12-95be-4f55-b739-9cd5b5c20fd4", "force": true, "format": "json"}]: dispatch Nov 23 05:11:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:11:56 localhost journal[229336]: ethtool ioctl error on tap63a77f2c-22: No such device Nov 23 05:11:56 localhost journal[229336]: ethtool ioctl error on tap63a77f2c-22: No such device Nov 23 05:11:56 localhost nova_compute[280939]: 2025-11-23 10:11:56.447 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/97e78d12-95be-4f55-b739-9cd5b5c20fd4'' moved to trashcan Nov 23 05:11:56 localhost journal[229336]: ethtool ioctl error on tap63a77f2c-22: No such device Nov 23 05:11:56 localhost ceph-mgr[286671]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 23 05:11:56 localhost ceph-mgr[286671]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:97e78d12-95be-4f55-b739-9cd5b5c20fd4, vol_name:cephfs) < "" Nov 23 05:11:56 localhost nova_compute[280939]: 2025-11-23 10:11:56.475 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:56 localhost ceph-mon[293353]: 
log_channel(cluster) log [DBG] : mgrmap e53: np0005532584.naxwxy(active, since 16m), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 05:11:57 localhost podman[327918]: Nov 23 05:11:57 localhost podman[327918]: 2025-11-23 10:11:57.298053433 +0000 UTC m=+0.079652170 container create ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c387ee99-0478-4ba5-ad6b-f5b7e502389e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118) Nov 23 05:11:57 localhost systemd[1]: Started libpod-conmon-ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f.scope. Nov 23 05:11:57 localhost podman[327918]: 2025-11-23 10:11:57.251427229 +0000 UTC m=+0.033025946 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:11:57 localhost systemd[1]: Started libcrun container. Nov 23 05:11:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8b46b75b99b1ebedfdf9078adb6f4689462ca819a45fefe13a233ad488e261/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:11:57 localhost podman[327918]: 2025-11-23 10:11:57.376347009 +0000 UTC m=+0.157945716 container init ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c387ee99-0478-4ba5-ad6b-f5b7e502389e, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 23 05:11:57 localhost podman[327918]: 2025-11-23 10:11:57.388528774 +0000 UTC m=+0.170127491 container start ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c387ee99-0478-4ba5-ad6b-f5b7e502389e, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Nov 23 05:11:57 localhost dnsmasq[327936]: started, version 2.85 cachesize 150 Nov 23 05:11:57 localhost dnsmasq[327936]: DNS service limited to local subnets Nov 23 05:11:57 localhost dnsmasq[327936]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:11:57 localhost dnsmasq[327936]: warning: no upstream servers configured Nov 23 05:11:57 localhost dnsmasq-dhcp[327936]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:11:57 localhost dnsmasq[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/addn_hosts - 0 addresses Nov 23 05:11:57 localhost dnsmasq-dhcp[327936]: read 
/var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/host Nov 23 05:11:57 localhost dnsmasq-dhcp[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/opts Nov 23 05:11:57 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:11:57.564 262301 INFO neutron.agent.dhcp.agent [None req-ea233ee3-820c-403c-ad59-4f0c6b5c4fb2 - - - - - -] DHCP configuration for ports {'dfe91f52-634f-494a-ba00-de47761f3342'} is completed#033[00m Nov 23 05:11:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 86 KiB/s wr, 5 op/s Nov 23 05:11:57 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:11:57.799 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:11:57Z, description=, device_id=8d1ecd33-a29e-452b-8902-8028e4b524ad, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fe1754ef-af6d-45f4-a660-a9c819056fc8, ip_allocation=immediate, mac_address=fa:16:3e:e6:9b:91, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:11:54Z, description=, dns_domain=, id=c387ee99-0478-4ba5-ad6b-f5b7e502389e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-1370457155-network, port_security_enabled=True, project_id=4df21a31ac7e4292b0bbda8819ee47c0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=699, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3907, status=ACTIVE, subnets=['90bffb76-1c7e-482b-a570-9619767e7fe9'], tags=[], tenant_id=4df21a31ac7e4292b0bbda8819ee47c0, updated_at=2025-11-23T10:11:54Z, vlan_transparent=None, network_id=c387ee99-0478-4ba5-ad6b-f5b7e502389e, port_security_enabled=False, project_id=4df21a31ac7e4292b0bbda8819ee47c0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3915, status=DOWN, tags=[], tenant_id=4df21a31ac7e4292b0bbda8819ee47c0, updated_at=2025-11-23T10:11:57Z on network c387ee99-0478-4ba5-ad6b-f5b7e502389e#033[00m Nov 23 05:11:58 localhost dnsmasq[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/addn_hosts - 1 addresses Nov 23 05:11:58 localhost podman[327954]: 2025-11-23 10:11:58.008141707 +0000 UTC m=+0.056242610 container kill ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c387ee99-0478-4ba5-ad6b-f5b7e502389e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 05:11:58 localhost dnsmasq-dhcp[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/host Nov 23 05:11:58 localhost dnsmasq-dhcp[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/opts Nov 23 05:11:58 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:11:58.239 
262301 INFO neutron.agent.dhcp.agent [None req-2c1694c1-9f28-4e91-bf4f-677821d719dc - - - - - -] DHCP configuration for ports {'fe1754ef-af6d-45f4-a660-a9c819056fc8'} is completed#033[00m Nov 23 05:11:58 localhost systemd[1]: tmp-crun.J4NbqB.mount: Deactivated successfully. Nov 23 05:11:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:11:58 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:11:58.571 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:11:57Z, description=, device_id=8d1ecd33-a29e-452b-8902-8028e4b524ad, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fe1754ef-af6d-45f4-a660-a9c819056fc8, ip_allocation=immediate, mac_address=fa:16:3e:e6:9b:91, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:11:54Z, description=, dns_domain=, id=c387ee99-0478-4ba5-ad6b-f5b7e502389e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-1370457155-network, port_security_enabled=True, project_id=4df21a31ac7e4292b0bbda8819ee47c0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=699, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3907, status=ACTIVE, subnets=['90bffb76-1c7e-482b-a570-9619767e7fe9'], tags=[], tenant_id=4df21a31ac7e4292b0bbda8819ee47c0, updated_at=2025-11-23T10:11:54Z, vlan_transparent=None, network_id=c387ee99-0478-4ba5-ad6b-f5b7e502389e, port_security_enabled=False, project_id=4df21a31ac7e4292b0bbda8819ee47c0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3915, status=DOWN, tags=[], tenant_id=4df21a31ac7e4292b0bbda8819ee47c0, updated_at=2025-11-23T10:11:57Z on network c387ee99-0478-4ba5-ad6b-f5b7e502389e#033[00m Nov 23 05:11:58 localhost dnsmasq[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/addn_hosts - 1 addresses Nov 23 05:11:58 localhost podman[327991]: 2025-11-23 10:11:58.788676969 +0000 UTC m=+0.065862725 container kill ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c387ee99-0478-4ba5-ad6b-f5b7e502389e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 23 05:11:58 localhost dnsmasq-dhcp[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/host Nov 23 05:11:58 localhost dnsmasq-dhcp[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/opts Nov 23 05:11:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e271 do_prune osdmap full prune enabled Nov 23 05:11:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e272 e272: 6 total, 6 up, 6 in Nov 23 05:11:58 localhost ceph-mon[293353]: 
log_channel(cluster) log [DBG] : osdmap e272: 6 total, 6 up, 6 in Nov 23 05:11:58 localhost nova_compute[280939]: 2025-11-23 10:11:58.871 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:59 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:11:59.106 262301 INFO neutron.agent.dhcp.agent [None req-d7c78b8e-281b-43b8-a1cb-45dc616627cb - - - - - -] DHCP configuration for ports {'fe1754ef-af6d-45f4-a660-a9c819056fc8'} is completed#033[00m Nov 23 05:11:59 localhost ovn_controller[153771]: 2025-11-23T10:11:59Z|00371|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 05:11:59 localhost ovn_controller[153771]: 2025-11-23T10:11:59Z|00372|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 05:11:59 localhost ovn_controller[153771]: 2025-11-23T10:11:59Z|00373|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0 Nov 23 05:11:59 localhost nova_compute[280939]: 2025-11-23 10:11:59.274 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:59 localhost nova_compute[280939]: 2025-11-23 10:11:59.288 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:59 localhost nova_compute[280939]: 2025-11-23 10:11:59.293 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:59 localhost nova_compute[280939]: 2025-11-23 10:11:59.297 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:59 localhost nova_compute[280939]: 2025-11-23 10:11:59.314 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:59 localhost nova_compute[280939]: 2025-11-23 10:11:59.335 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:11:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 67 KiB/s wr, 4 op/s Nov 23 05:12:00 localhost nova_compute[280939]: 2025-11-23 10:12:00.259 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:00 localhost nova_compute[280939]: 2025-11-23 10:12:00.310 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:00 localhost nova_compute[280939]: 2025-11-23 10:12:00.914 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 67 KiB/s wr, 4 op/s Nov 23 05:12:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 05:12:01 localhost ceph-osd[31569]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 22K writes, 88K keys, 22K commit groups, 1.0 writes 
per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 22K writes, 7842 syncs, 2.89 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 36.91 MB, 0.06 MB/s#012Interval WAL: 13K writes, 5435 syncs, 2.47 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 05:12:01 localhost ovn_controller[153771]: 2025-11-23T10:12:01Z|00374|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 05:12:01 localhost ovn_controller[153771]: 2025-11-23T10:12:01Z|00375|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 05:12:01 localhost ovn_controller[153771]: 2025-11-23T10:12:01Z|00376|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 05:12:01 localhost nova_compute[280939]: 2025-11-23 10:12:01.909 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:01 localhost nova_compute[280939]: 2025-11-23 10:12:01.912 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:01 localhost nova_compute[280939]: 2025-11-23 10:12:01.926 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:02 localhost dnsmasq[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/addn_hosts - 0 addresses Nov 23 05:12:02 localhost podman[328033]: 2025-11-23 10:12:02.039457353 +0000 UTC m=+0.056383504 container kill ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c387ee99-0478-4ba5-ad6b-f5b7e502389e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 23 05:12:02 localhost dnsmasq-dhcp[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/host Nov 23 05:12:02 localhost dnsmasq-dhcp[327936]: read /var/lib/neutron/dhcp/c387ee99-0478-4ba5-ad6b-f5b7e502389e/opts Nov 23 05:12:02 localhost nova_compute[280939]: 2025-11-23 10:12:02.201 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:02 localhost ovn_controller[153771]: 2025-11-23T10:12:02Z|00377|binding|INFO|Releasing lport 63a77f2c-221e-4f0c-b597-18930ae41adc from this chassis (sb_readonly=0) Nov 23 05:12:02 localhost ovn_controller[153771]: 2025-11-23T10:12:02Z|00378|binding|INFO|Setting lport 63a77f2c-221e-4f0c-b597-18930ae41adc down in Southbound Nov 23 05:12:02 localhost kernel: device tap63a77f2c-22 left promiscuous mode Nov 23 05:12:02 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:02.216 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, 
parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-c387ee99-0478-4ba5-ad6b-f5b7e502389e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c387ee99-0478-4ba5-ad6b-f5b7e502389e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4df21a31ac7e4292b0bbda8819ee47c0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cd856a6-134f-4507-98ec-2915cf9fe691, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=63a77f2c-221e-4f0c-b597-18930ae41adc) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:12:02 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:02.218 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 63a77f2c-221e-4f0c-b597-18930ae41adc in datapath c387ee99-0478-4ba5-ad6b-f5b7e502389e unbound from our chassis#033[00m Nov 23 05:12:02 localhost nova_compute[280939]: 2025-11-23 10:12:02.220 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:02 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:02.220 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c387ee99-0478-4ba5-ad6b-f5b7e502389e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:12:02 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:02.222 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac4b5b0-5253-4720-9ea0-e116ea1694af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:12:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e272 do_prune osdmap full prune enabled Nov 23 05:12:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 e273: 6 total, 6 up, 6 in Nov 23 05:12:03 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : osdmap e273: 6 total, 6 up, 6 in Nov 23 05:12:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 39 KiB/s wr, 2 op/s Nov 23 05:12:03 localhost nova_compute[280939]: 2025-11-23 10:12:03.900 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:04 localhost dnsmasq[327936]: exiting on receipt of SIGTERM Nov 23 05:12:04 localhost podman[328073]: 2025-11-23 10:12:04.384402203 +0000 UTC m=+0.045184590 container kill ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c387ee99-0478-4ba5-ad6b-f5b7e502389e, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:12:04 localhost systemd[1]: libpod-ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f.scope: Deactivated successfully. Nov 23 05:12:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. Nov 23 05:12:04 localhost podman[328087]: 2025-11-23 10:12:04.443803512 +0000 UTC m=+0.049383243 container died ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c387ee99-0478-4ba5-ad6b-f5b7e502389e, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:12:04 localhost systemd[1]: tmp-crun.KKrwcc.mount: Deactivated successfully. Nov 23 05:12:04 localhost podman[328087]: 2025-11-23 10:12:04.496515748 +0000 UTC m=+0.102095469 container cleanup ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c387ee99-0478-4ba5-ad6b-f5b7e502389e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:12:04 localhost systemd[1]: libpod-conmon-ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f.scope: Deactivated successfully. 
Nov 23 05:12:04 localhost podman[328100]: 2025-11-23 10:12:04.540019914 +0000 UTC m=+0.125872461 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Nov 23 05:12:04 localhost podman[328100]: 2025-11-23 10:12:04.57437093 +0000 UTC m=+0.160223467 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Nov 23 05:12:04 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:12:04 localhost podman[328089]: 2025-11-23 10:12:04.625363373 +0000 UTC m=+0.224419448 container remove ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c387ee99-0478-4ba5-ad6b-f5b7e502389e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:12:04 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:12:04.650 262301 INFO neutron.agent.dhcp.agent [None req-55b6e3fb-3527-40af-aa11-0bbea3b8db81 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:12:04 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:12:04.677 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:12:04 localhost nova_compute[280939]: 2025-11-23 10:12:04.877 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:05 localhost nova_compute[280939]: 2025-11-23 10:12:05.277 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:05 localhost systemd[1]: var-lib-containers-storage-overlay-3e8b46b75b99b1ebedfdf9078adb6f4689462ca819a45fefe13a233ad488e261-merged.mount: Deactivated successfully. Nov 23 05:12:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca17e6c4af423cea02c98190eaf77a2ff64b31afb0194089e51010223be2a81f-userdata-shm.mount: Deactivated successfully. Nov 23 05:12:05 localhost systemd[1]: run-netns-qdhcp\x2dc387ee99\x2d0478\x2d4ba5\x2dad6b\x2df5b7e502389e.mount: Deactivated successfully. 
Nov 23 05:12:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 49 KiB/s wr, 3 op/s Nov 23 05:12:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 23 05:12:06 localhost ceph-osd[32534]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 23K writes, 88K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 23K writes, 8093 syncs, 2.93 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 15K writes, 53K keys, 15K commit groups, 1.0 writes per commit group, ingest: 44.83 MB, 0.07 MB/s#012Interval WAL: 15K writes, 6049 syncs, 2.54 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 23 05:12:06 localhost openstack_network_exporter[241732]: ERROR 10:12:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:12:06 localhost openstack_network_exporter[241732]: ERROR 10:12:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:12:06 localhost openstack_network_exporter[241732]: Nov 23 05:12:06 localhost openstack_network_exporter[241732]: ERROR 10:12:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:12:06 localhost openstack_network_exporter[241732]: ERROR 10:12:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:12:06 localhost openstack_network_exporter[241732]: ERROR 10:12:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:12:06 localhost openstack_network_exporter[241732]: Nov 23 05:12:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 462 B/s rd, 44 KiB/s wr, 3 op/s Nov 23 05:12:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0. 
Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.563468) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73 Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728563526, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 993, "num_deletes": 259, "total_data_size": 1340477, "memory_usage": 1370752, "flush_reason": "Manual Compaction"} Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728573816, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 1323910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40126, "largest_seqno": 41118, "table_properties": {"data_size": 1319327, "index_size": 2118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11039, "raw_average_key_size": 20, "raw_value_size": 1309602, "raw_average_value_size": 2385, "num_data_blocks": 93, "num_entries": 549, "num_filter_entries": 549, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892673, "oldest_key_time": 1763892673, "file_creation_time": 1763892728, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}} Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 10390 microseconds, and 5155 cpu microseconds. Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.573862) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 1323910 bytes OK Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.573884) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.575834) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.575855) EVENT_LOG_v1 {"time_micros": 1763892728575849, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.575876) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1335607, prev total WAL file size 1335931, number of live WAL files 2. Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.576626) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353239' seq:72057594037927935, type:22 .. '6C6F676D0034373831' seq:0, type:0; will stop at (end) Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(1292KB)], [72(16MB)] Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728576670, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 19063725, "oldest_snapshot_seqno": -1} Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 14518 keys, 18927014 bytes, temperature: kUnknown Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728656695, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 18927014, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18843340, "index_size": 46261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 390690, "raw_average_key_size": 26, "raw_value_size": 18595839, "raw_average_value_size": 1280, "num_data_blocks": 1718, "num_entries": 14518, "num_filter_entries": 14518, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, 
"comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892728, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}} Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.657016) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 18927014 bytes Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.658849) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 238.0 rd, 236.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 16.9 +0.0 blob) out(18.1 +0.0 blob), read-write-amplify(28.7) write-amplify(14.3) OK, records in: 15058, records dropped: 540 output_compression: NoCompression Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.658878) EVENT_LOG_v1 {"time_micros": 1763892728658866, "job": 44, "event": "compaction_finished", "compaction_time_micros": 80100, "compaction_time_cpu_micros": 50558, "output_level": 6, "num_output_files": 1, "total_output_size": 18927014, "num_input_records": 15058, "num_output_records": 14518, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728659224, "job": 44, "event": "table_file_deletion", "file_number": 74} Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728661586, "job": 44, "event": "table_file_deletion", "file_number": 72} Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.576477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.661699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.661705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.661708) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.661711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:08 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:08.661714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:08 localhost nova_compute[280939]: 2025-11-23 10:12:08.931 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.7 KiB/s wr, 0 op/s Nov 23 05:12:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:09.753 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:12:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:09.753 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:12:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:09.753 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:12:10 localhost nova_compute[280939]: 2025-11-23 10:12:10.294 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.7 KiB/s wr, 0 op/s Nov 23 05:12:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:12:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:12:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. 
Nov 23 05:12:11 localhost podman[328136]: 2025-11-23 10:12:11.890960202 +0000 UTC m=+0.080736505 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:12:12 localhost podman[328136]: 2025-11-23 10:12:12.38109352 +0000 UTC m=+0.570869763 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2) Nov 23 05:12:12 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:12:12 localhost systemd[1]: tmp-crun.Kj1rFF.mount: Deactivated successfully. Nov 23 05:12:12 localhost podman[328137]: 2025-11-23 10:12:12.453447638 +0000 UTC m=+0.638834191 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 23 05:12:12 localhost podman[328137]: 2025-11-23 10:12:12.467466752 +0000 UTC m=+0.652853295 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 23 05:12:12 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 05:12:12 localhost podman[328138]: 2025-11-23 10:12:12.528024936 +0000 UTC m=+0.709214547 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller) Nov 23 05:12:12 localhost podman[328138]: 2025-11-23 10:12:12.568389923 +0000 UTC m=+0.749579514 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 23 05:12:12 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
Nov 23 05:12:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.6 KiB/s wr, 0 op/s Nov 23 05:12:13 localhost nova_compute[280939]: 2025-11-23 10:12:13.948 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:14 localhost nova_compute[280939]: 2025-11-23 10:12:14.550 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:14.550 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:12:14 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:14.552 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 23 05:12:15 localhost nova_compute[280939]: 2025-11-23 10:12:15.350 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 6.4 KiB/s wr, 0 op/s Nov 23 05:12:16 localhost nova_compute[280939]: 2025-11-23 10:12:16.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:12:17 localhost podman[239764]: time="2025-11-23T10:12:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:12:17 localhost podman[239764]: @ - - [23/Nov/2025:10:12:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:12:17 localhost nova_compute[280939]: 2025-11-23 10:12:17.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:12:17 localhost podman[239764]: @ - - [23/Nov/2025:10:12:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18778 "" "Go-http-client/1.1" Nov 23 05:12:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 
05:12:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:12:18 localhost podman[328201]: 2025-11-23 10:12:18.900150761 +0000 UTC m=+0.079087071 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, architecture=x86_64, version=9.6, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public) Nov 23 05:12:18 localhost podman[328201]: 2025-11-23 10:12:18.916323493 +0000 UTC m=+0.095259773 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 23 05:12:18 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. 
Nov 23 05:12:18 localhost nova_compute[280939]: 2025-11-23 10:12:18.989 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:20 localhost nova_compute[280939]: 2025-11-23 10:12:20.380 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:21 localhost nova_compute[280939]: 2025-11-23 10:12:21.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:12:21 localhost nova_compute[280939]: 2025-11-23 10:12:21.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:12:21 localhost nova_compute[280939]: 2025-11-23 10:12:21.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:12:21 localhost nova_compute[280939]: 2025-11-23 10:12:21.152 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:12:21 localhost nova_compute[280939]: 2025-11-23 10:12:21.153 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:12:21 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:21.554 159415 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ade391ff-62a6-48e9-b6e8-1a8b190070d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 23 05:12:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:22 localhost sshd[328221]: main: sshd: ssh-rsa algorithm is disabled Nov 23 05:12:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:12:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. 
Nov 23 05:12:22 localhost podman[328223]: 2025-11-23 10:12:22.892902637 +0000 UTC m=+0.078752681 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 05:12:22 localhost podman[328223]: 2025-11-23 10:12:22.905527797 +0000 UTC m=+0.091377811 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Nov 23 05:12:22 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:12:22 localhost podman[328224]: 2025-11-23 10:12:22.95433429 +0000 UTC m=+0.136881090 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:12:22 localhost podman[328224]: 2025-11-23 10:12:22.964287685 +0000 UTC m=+0.146834485 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 23 05:12:22 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 05:12:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:12:23 Nov 23 05:12:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:12:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:12:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['vms', 'manila_data', 'manila_metadata', '.mgr', 'images', 'volumes', 'backups'] Nov 23 05:12:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:12:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:12:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:12:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
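[editor's illustrative sketch] The podman health_status and exec_died events above embed each container's config_data as a Python-literal dict (single quotes, True) inside the event annotation. A small sketch, assuming the braces are balanced and no brace characters occur inside the quoted strings (true for the entries shown), that pulls that dict back out and makes the healthcheck and volume settings inspectable without eval():

    import ast

    def extract_config_data(event_line):
        """Return the config_data dict embedded in a podman event line, or None."""
        marker = "config_data="
        start = event_line.find(marker)
        if start < 0:
            return None
        i = event_line.index("{", start)
        depth = 0
        for j, ch in enumerate(event_line[i:], start=i):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # The value is written as a Python literal (single quotes, True),
                    # so ast.literal_eval parses it safely.
                    return ast.literal_eval(event_line[i:j + 1])
        return None

    # usage sketch against one of the lines above:
    # cfg = extract_config_data(line)
    # print(cfg["healthcheck"]["test"], len(cfg["volumes"]))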
Nov 23 05:12:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:12:23 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:12:23.488 262301 INFO neutron.agent.linux.ip_lib [None req-c6a08cd2-3ea8-4cf1-bcb9-9087c7d6cffb - - - - - -] Device tap891967a5-20 cannot be used as it has no MAC address#033[00m Nov 23 05:12:23 localhost nova_compute[280939]: 2025-11-23 10:12:23.552 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:23 localhost kernel: device tap891967a5-20 entered promiscuous mode Nov 23 05:12:23 localhost NetworkManager[5966]: [1763892743.5626] manager: (tap891967a5-20): new Generic device (/org/freedesktop/NetworkManager/Devices/60) Nov 23 05:12:23 localhost ovn_controller[153771]: 2025-11-23T10:12:23Z|00379|binding|INFO|Claiming lport 891967a5-20d3-4e10-b678-ad39f50a4407 for this chassis. Nov 23 05:12:23 localhost nova_compute[280939]: 2025-11-23 10:12:23.565 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:23 localhost ovn_controller[153771]: 2025-11-23T10:12:23Z|00380|binding|INFO|891967a5-20d3-4e10-b678-ad39f50a4407: Claiming unknown Nov 23 05:12:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:23 localhost systemd-udevd[328274]: Network interface NamePolicy= disabled on kernel command line. Nov 23 05:12:23 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:23.577 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-1fc3b5dd-aaf3-42bc-abef-93719b14bafb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1fc3b5dd-aaf3-42bc-abef-93719b14bafb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d411df0ca9e478984a523820800a77f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae3d8e36-f184-4e6f-8a4c-6f3a38a40bd7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=891967a5-20d3-4e10-b678-ad39f50a4407) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:12:23 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:23.579 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 891967a5-20d3-4e10-b678-ad39f50a4407 in datapath 1fc3b5dd-aaf3-42bc-abef-93719b14bafb bound to our chassis#033[00m Nov 23 05:12:23 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:23.582 159415 DEBUG neutron.agent.ovn.metadata.agent [-] Port d3307b45-8e66-427e-83a2-e8f3225f3627 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 23 05:12:23 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:23.582 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1fc3b5dd-aaf3-42bc-abef-93719b14bafb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:12:23 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:23.583 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[049bd9e0-c8ba-423c-97b7-c0c16cfc4c95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:12:23 localhost journal[229336]: ethtool ioctl error on tap891967a5-20: No such device Nov 23 05:12:23 localhost nova_compute[280939]: 2025-11-23 10:12:23.596 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:23 localhost ovn_controller[153771]: 2025-11-23T10:12:23Z|00381|binding|INFO|Setting lport 891967a5-20d3-4e10-b678-ad39f50a4407 ovn-installed in OVS Nov 23 05:12:23 localhost ovn_controller[153771]: 2025-11-23T10:12:23Z|00382|binding|INFO|Setting lport 891967a5-20d3-4e10-b678-ad39f50a4407 up in Southbound Nov 23 05:12:23 localhost journal[229336]: ethtool ioctl error on tap891967a5-20: No such device Nov 23 05:12:23 localhost nova_compute[280939]: 2025-11-23 10:12:23.602 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:23 localhost nova_compute[280939]: 2025-11-23 10:12:23.606 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:23 localhost journal[229336]: ethtool ioctl error on tap891967a5-20: No such device Nov 23 05:12:23 localhost journal[229336]: ethtool ioctl error on tap891967a5-20: No such device Nov 23 05:12:23 localhost journal[229336]: ethtool ioctl error on tap891967a5-20: No such device Nov 23 05:12:23 localhost journal[229336]: ethtool ioctl error on tap891967a5-20: No such device Nov 23 05:12:23 localhost journal[229336]: ethtool ioctl error on tap891967a5-20: No such device Nov 23 05:12:23 localhost journal[229336]: ethtool ioctl error on tap891967a5-20: No such device Nov 23 05:12:23 localhost nova_compute[280939]: 2025-11-23 10:12:23.643 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:23 localhost nova_compute[280939]: 2025-11-23 10:12:23.670 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:12:23 localhost 
ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:12:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0026104371684812955 of space, bias 4.0, pg target 2.077907986111111 quantized to 16 (current 16) Nov 23 05:12:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:12:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:12:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:12:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:12:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:12:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:12:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:12:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:12:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:12:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:12:23 localhost nova_compute[280939]: 2025-11-23 10:12:23.992 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:24 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
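[editor's illustrative sketch] The pg_autoscaler entries above log, pool by pool, the fraction of raw space in use, the pool's bias, the raw pg target, and the value quantized against the current pg_num. A small sketch, assuming ceph-mgr lines in exactly the wording shown, that collapses them into one row per pool and flags any pool whose quantized target differs from its current pg_num (none differ here, so the autoscaler proposes no change):

    import re
    import sys

    # Matches: Pool 'vms' root_id -1 using 0.0033... of space, bias 1.0,
    #          pg target 0.66... quantized to 32 (current 32)
    POOL_RE = re.compile(
        r"Pool '(?P<pool>[^']+)' root_id (?P<root>-?\d+) "
        r"using (?P<used>[\d.e-]+) of space, bias (?P<bias>[\d.]+), "
        r"pg target (?P<target>[\d.e-]+) quantized to (?P<quant>\d+) "
        r"\(current (?P<cur>\d+)\)"
    )

    def summarize(lines):
        for line in lines:
            m = POOL_RE.search(line)
            if not m:
                continue
            quant, cur = int(m["quant"]), int(m["cur"])
            flag = "" if quant == cur else "  <-- pg_num change pending"
            print(f"{m['pool']:<16} used={float(m['used']):.6f} "
                  f"bias={m['bias']} target={float(m['target']):.4f} "
                  f"pg_num {cur} -> {quant}{flag}")

    if __name__ == "__main__":
        summarize(sys.stdin)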
Nov 23 05:12:24 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:12:24 localhost podman[328345]: Nov 23 05:12:24 localhost podman[328345]: 2025-11-23 10:12:24.822690941 +0000 UTC m=+0.087856139 container create b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fc3b5dd-aaf3-42bc-abef-93719b14bafb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:12:24 localhost systemd[1]: Started libpod-conmon-b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3.scope. Nov 23 05:12:24 localhost podman[328345]: 2025-11-23 10:12:24.779199356 +0000 UTC m=+0.044364584 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 23 05:12:24 localhost systemd[1]: Started libcrun container. Nov 23 05:12:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff47bfdf1fcc276ebe24714009926fc7f9a9a734444ffa54f4be490e9579e5a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 23 05:12:24 localhost podman[328345]: 2025-11-23 10:12:24.901543084 +0000 UTC m=+0.166708292 container init b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fc3b5dd-aaf3-42bc-abef-93719b14bafb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:12:24 localhost podman[328345]: 2025-11-23 10:12:24.911471818 +0000 UTC m=+0.176637036 container start b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fc3b5dd-aaf3-42bc-abef-93719b14bafb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS) Nov 23 05:12:24 localhost dnsmasq[328363]: started, version 2.85 cachesize 150 Nov 23 05:12:24 localhost dnsmasq[328363]: DNS service limited to local subnets Nov 23 05:12:24 localhost dnsmasq[328363]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 23 05:12:24 localhost dnsmasq[328363]: warning: no upstream servers configured Nov 23 05:12:24 localhost dnsmasq-dhcp[328363]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 23 05:12:24 localhost dnsmasq[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/addn_hosts - 0 addresses Nov 23 05:12:24 localhost dnsmasq-dhcp[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/host Nov 23 
05:12:24 localhost dnsmasq-dhcp[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/opts Nov 23 05:12:25 localhost nova_compute[280939]: 2025-11-23 10:12:25.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:12:25 localhost nova_compute[280939]: 2025-11-23 10:12:25.415 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:25 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:12:25.440 262301 INFO neutron.agent.dhcp.agent [None req-131c8bc2-49e8-4879-a2c9-5e6e1c0b8e56 - - - - - -] DHCP configuration for ports {'c09bc39d-652f-49e7-ad0f-1cd2ee0ca030'} is completed#033[00m Nov 23 05:12:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:26 localhost nova_compute[280939]: 2025-11-23 10:12:26.129 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:12:27 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:12:27.254 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:12:26Z, description=, device_id=b16795fb-9d48-4b58-8265-f41a923b13e6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b718d053-1fbd-4e75-8842-7b045f239394, ip_allocation=immediate, mac_address=fa:16:3e:c3:d6:75, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:12:21Z, description=, dns_domain=, id=1fc3b5dd-aaf3-42bc-abef-93719b14bafb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1665202972-network, port_security_enabled=True, project_id=1d411df0ca9e478984a523820800a77f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4383, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3950, status=ACTIVE, subnets=['8dea5fdc-e39e-484c-8198-ab400e232a6c'], tags=[], tenant_id=1d411df0ca9e478984a523820800a77f, updated_at=2025-11-23T10:12:22Z, vlan_transparent=None, network_id=1fc3b5dd-aaf3-42bc-abef-93719b14bafb, port_security_enabled=False, project_id=1d411df0ca9e478984a523820800a77f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3957, status=DOWN, tags=[], tenant_id=1d411df0ca9e478984a523820800a77f, updated_at=2025-11-23T10:12:27Z on network 1fc3b5dd-aaf3-42bc-abef-93719b14bafb#033[00m Nov 23 05:12:27 localhost dnsmasq[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/addn_hosts - 1 addresses Nov 23 05:12:27 localhost dnsmasq-dhcp[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/host Nov 23 05:12:27 localhost dnsmasq-dhcp[328363]: read 
/var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/opts Nov 23 05:12:27 localhost podman[328379]: 2025-11-23 10:12:27.474097052 +0000 UTC m=+0.063807749 container kill b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fc3b5dd-aaf3-42bc-abef-93719b14bafb, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 23 05:12:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:27 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:12:27.716 262301 INFO neutron.agent.dhcp.agent [None req-8ffed1a2-da4f-4c2a-8af2-06fa099bc37d - - - - - -] DHCP configuration for ports {'b718d053-1fbd-4e75-8842-7b045f239394'} is completed#033[00m Nov 23 05:12:28 localhost nova_compute[280939]: 2025-11-23 10:12:28.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:12:28 localhost nova_compute[280939]: 2025-11-23 10:12:28.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:12:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:29 localhost nova_compute[280939]: 2025-11-23 10:12:29.027 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:29 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:12:29.726 262301 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:12:26Z, description=, device_id=b16795fb-9d48-4b58-8265-f41a923b13e6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b718d053-1fbd-4e75-8842-7b045f239394, ip_allocation=immediate, mac_address=fa:16:3e:c3:d6:75, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:12:21Z, description=, dns_domain=, id=1fc3b5dd-aaf3-42bc-abef-93719b14bafb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1665202972-network, port_security_enabled=True, project_id=1d411df0ca9e478984a523820800a77f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4383, qos_policy_id=None, revision_number=2, router:external=False, shared=False, 
standard_attr_id=3950, status=ACTIVE, subnets=['8dea5fdc-e39e-484c-8198-ab400e232a6c'], tags=[], tenant_id=1d411df0ca9e478984a523820800a77f, updated_at=2025-11-23T10:12:22Z, vlan_transparent=None, network_id=1fc3b5dd-aaf3-42bc-abef-93719b14bafb, port_security_enabled=False, project_id=1d411df0ca9e478984a523820800a77f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3957, status=DOWN, tags=[], tenant_id=1d411df0ca9e478984a523820800a77f, updated_at=2025-11-23T10:12:27Z on network 1fc3b5dd-aaf3-42bc-abef-93719b14bafb#033[00m Nov 23 05:12:29 localhost dnsmasq[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/addn_hosts - 1 addresses Nov 23 05:12:29 localhost dnsmasq-dhcp[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/host Nov 23 05:12:29 localhost podman[328416]: 2025-11-23 10:12:29.941286057 +0000 UTC m=+0.058797291 container kill b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fc3b5dd-aaf3-42bc-abef-93719b14bafb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:12:29 localhost dnsmasq-dhcp[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/opts Nov 23 05:12:29 localhost systemd[1]: tmp-crun.F46ZQH.mount: Deactivated successfully. Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.155 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.156 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.156 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 
2025-11-23 10:12:30.157 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.157 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.447 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:30 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:12:30.537 262301 INFO neutron.agent.dhcp.agent [None req-3661d253-1b56-4a75-8e4a-ab58aca27d40 - - - - - -] DHCP configuration for ports {'b718d053-1fbd-4e75-8842-7b045f239394'} is completed#033[00m Nov 23 05:12:30 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:12:30 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3327313948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.639 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.852 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.854 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11382MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.854 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.855 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.921 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.922 280943 DEBUG 
nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:12:30 localhost nova_compute[280939]: 2025-11-23 10:12:30.950 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:12:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:12:31 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3424500751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:12:31 localhost nova_compute[280939]: 2025-11-23 10:12:31.372 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:12:31 localhost nova_compute[280939]: 2025-11-23 10:12:31.380 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:12:31 localhost nova_compute[280939]: 2025-11-23 10:12:31.396 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:12:31 localhost nova_compute[280939]: 2025-11-23 10:12:31.399 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:12:31 localhost nova_compute[280939]: 2025-11-23 10:12:31.400 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:12:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:32 localhost ovn_controller[153771]: 2025-11-23T10:12:32Z|00383|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0 Nov 23 05:12:32 localhost 
ovn_controller[153771]: 2025-11-23T10:12:32Z|00384|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0 Nov 23 05:12:32 localhost ovn_controller[153771]: 2025-11-23T10:12:32Z|00385|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0 Nov 23 05:12:32 localhost nova_compute[280939]: 2025-11-23 10:12:32.519 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:32 localhost nova_compute[280939]: 2025-11-23 10:12:32.522 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:32 localhost nova_compute[280939]: 2025-11-23 10:12:32.525 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:32 localhost nova_compute[280939]: 2025-11-23 10:12:32.527 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:32 localhost nova_compute[280939]: 2025-11-23 10:12:32.541 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:32 localhost nova_compute[280939]: 2025-11-23 10:12:32.579 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:33 localhost nova_compute[280939]: 2025-11-23 10:12:33.500 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:33 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:33 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:34 localhost nova_compute[280939]: 2025-11-23 10:12:34.066 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:34 localhost nova_compute[280939]: 2025-11-23 10:12:34.187 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:34 localhost nova_compute[280939]: 2025-11-23 10:12:34.280 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
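[editor's illustrative sketch] The resource tracker entries above report the inventory nova pushes to Placement for provider c90c5769-42ab-40e9-92fc-3d82b4e96052: per resource class a total, a reserved amount and an allocation_ratio. A short worked sketch using exactly the logged figures (keeping only the fields the check needs) and the usual Placement convention, stated here as an assumption, that schedulable capacity is (total - reserved) * allocation_ratio:

    # Inventory values as logged by nova.scheduler.client.report above.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }

    for rc, inv in inventory.items():
        # Placement allocates against (total - reserved) * allocation_ratio
        # for each resource class (assumed convention, not taken from this log).
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc:<9} schedulable capacity = "
              f"({inv['total']} - {inv['reserved']}) x {inv['allocation_ratio']} = {capacity}")

    # Expected output given the logged values:
    #   VCPU      schedulable capacity = (8 - 0) x 16.0 = 128.0
    #   MEMORY_MB schedulable capacity = (15738 - 512) x 1.0 = 15226.0
    #   DISK_GB   schedulable capacity = (41 - 1) x 1.0 = 40.0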
Nov 23 05:12:34 localhost podman[328482]: 2025-11-23 10:12:34.900435063 +0000 UTC m=+0.088013184 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:12:34 localhost podman[328482]: 2025-11-23 10:12:34.906284848 +0000 UTC m=+0.093862939 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Nov 23 05:12:34 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:12:35 localhost nova_compute[280939]: 2025-11-23 10:12:35.476 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:35 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:36 localhost openstack_network_exporter[241732]: ERROR 10:12:36 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:12:36 localhost openstack_network_exporter[241732]: ERROR 10:12:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:12:36 localhost openstack_network_exporter[241732]: ERROR 10:12:36 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:12:36 localhost openstack_network_exporter[241732]: ERROR 10:12:36 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:12:36 localhost openstack_network_exporter[241732]: Nov 23 05:12:36 localhost openstack_network_exporter[241732]: ERROR 10:12:36 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:12:36 localhost openstack_network_exporter[241732]: Nov 23 05:12:37 localhost sshd[328501]: main: sshd: ssh-rsa algorithm is disabled Nov 23 05:12:37 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:38 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:39 localhost nova_compute[280939]: 2025-11-23 10:12:39.119 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:39 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:40 localhost nova_compute[280939]: 2025-11-23 10:12:40.503 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:41 localhost ovn_controller[153771]: 2025-11-23T10:12:41Z|00386|ovn_bfd|INFO|Disabled BFD on interface ovn-c237ed-0 Nov 23 05:12:41 localhost ovn_controller[153771]: 2025-11-23T10:12:41Z|00387|ovn_bfd|INFO|Disabled BFD on interface ovn-49b8a0-0 Nov 23 05:12:41 localhost ovn_controller[153771]: 2025-11-23T10:12:41Z|00388|ovn_bfd|INFO|Disabled BFD on interface ovn-b7d5b3-0 Nov 23 05:12:41 localhost nova_compute[280939]: 2025-11-23 10:12:41.614 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:41 localhost nova_compute[280939]: 2025-11-23 10:12:41.630 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:41 localhost nova_compute[280939]: 2025-11-23 10:12:41.635 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:41 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:41 localhost dnsmasq[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/addn_hosts - 0 addresses Nov 23 05:12:41 localhost podman[328521]: 2025-11-23 10:12:41.704506607 +0000 UTC m=+0.053390229 container kill b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fc3b5dd-aaf3-42bc-abef-93719b14bafb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 23 05:12:41 localhost dnsmasq-dhcp[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/host Nov 23 05:12:41 localhost dnsmasq-dhcp[328363]: read /var/lib/neutron/dhcp/1fc3b5dd-aaf3-42bc-abef-93719b14bafb/opts Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0. Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.766403) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76 Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761766443, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 576, "num_deletes": 251, "total_data_size": 336531, "memory_usage": 347592, "flush_reason": "Manual Compaction"} Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761770937, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 328382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41119, "largest_seqno": 41694, "table_properties": {"data_size": 325483, "index_size": 882, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7217, "raw_average_key_size": 19, "raw_value_size": 319655, "raw_average_value_size": 875, "num_data_blocks": 39, "num_entries": 365, "num_filter_entries": 365, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", 
"compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892728, "oldest_key_time": 1763892728, "file_creation_time": 1763892761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}} Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 4567 microseconds, and 1263 cpu microseconds. Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.770970) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 328382 bytes OK Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.771005) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.772939) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.772954) EVENT_LOG_v1 {"time_micros": 1763892761772949, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.772971) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 333364, prev total WAL file size 333364, number of live WAL files 2. Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.773570) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. 
'7061786F73003133353535' seq:0, type:0; will stop at (end) Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(320KB)], [75(18MB)] Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761773602, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 19255396, "oldest_snapshot_seqno": -1} Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 14368 keys, 17821017 bytes, temperature: kUnknown Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761836927, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 17821017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17740081, "index_size": 43907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35973, "raw_key_size": 388039, "raw_average_key_size": 27, "raw_value_size": 17496869, "raw_average_value_size": 1217, "num_data_blocks": 1615, "num_entries": 14368, "num_filter_entries": 14368, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891412, "oldest_key_time": 0, "file_creation_time": 1763892761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9a7d578c-21aa-41c0-97ef-37d912c42473", "db_session_id": "G7XPCJTAARWJ01GM2KVT", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}} Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
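[editor's illustrative sketch] The ceph-mon RocksDB lines above embed machine-readable EVENT_LOG_v1 records (flush_started, table_file_creation, flush_finished, compaction_started) as plain JSON after the "EVENT_LOG_v1 " marker. A minimal sketch, assuming journal text in the concatenated format shown, that pulls those JSON payloads out and prints the event type, job id and whichever size field the event carries:

    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    def iter_rocksdb_events(lines):
        """Yield the JSON payloads that follow each EVENT_LOG_v1 marker."""
        for line in lines:
            start = line.find(MARKER)
            while start >= 0:
                brace = line.index("{", start)
                payload, depth = [], 0
                for ch in line[brace:]:
                    payload.append(ch)
                    depth += ch == "{"
                    depth -= ch == "}"
                    if depth == 0:
                        break
                yield json.loads("".join(payload))
                start = line.find(MARKER, brace + len(payload))

    if __name__ == "__main__":
        for ev in iter_rocksdb_events(sys.stdin):
            size = (ev.get("file_size") or ev.get("total_data_size")
                    or ev.get("input_data_size") or ev.get("total_output_size"))
            print(ev.get("event"), "job", ev.get("job"), "bytes", size)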
Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.839257) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 17821017 bytes Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.841128) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 296.7 rd, 274.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 18.1 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(112.9) write-amplify(54.3) OK, records in: 14883, records dropped: 515 output_compression: NoCompression Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.841148) EVENT_LOG_v1 {"time_micros": 1763892761841139, "job": 46, "event": "compaction_finished", "compaction_time_micros": 64892, "compaction_time_cpu_micros": 34389, "output_level": 6, "num_output_files": 1, "total_output_size": 17821017, "num_input_records": 14883, "num_output_records": 14368, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761841540, "job": 46, "event": "table_file_deletion", "file_number": 77} Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532584/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761843414, "job": 46, "event": "table_file_deletion", "file_number": 75} Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.773513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.843561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.843571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.843574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.843577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:41 localhost ceph-mon[293353]: rocksdb: (Original Log Time 2025/11/23-10:12:41.843580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 23 05:12:41 localhost kernel: device tap891967a5-20 left promiscuous mode Nov 23 05:12:41 localhost nova_compute[280939]: 2025-11-23 10:12:41.855 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:41 localhost ovn_controller[153771]: 2025-11-23T10:12:41Z|00389|binding|INFO|Releasing lport 
891967a5-20d3-4e10-b678-ad39f50a4407 from this chassis (sb_readonly=0) Nov 23 05:12:41 localhost ovn_controller[153771]: 2025-11-23T10:12:41Z|00390|binding|INFO|Setting lport 891967a5-20d3-4e10-b678-ad39f50a4407 down in Southbound Nov 23 05:12:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:41.867 159415 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532584.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp63b781b0-3004-595d-8832-f4a7d48ee2a2-1fc3b5dd-aaf3-42bc-abef-93719b14bafb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1fc3b5dd-aaf3-42bc-abef-93719b14bafb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d411df0ca9e478984a523820800a77f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532584.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae3d8e36-f184-4e6f-8a4c-6f3a38a40bd7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=891967a5-20d3-4e10-b678-ad39f50a4407) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 23 05:12:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:41.868 159415 INFO neutron.agent.ovn.metadata.agent [-] Port 891967a5-20d3-4e10-b678-ad39f50a4407 in datapath 1fc3b5dd-aaf3-42bc-abef-93719b14bafb unbound from our chassis#033[00m Nov 23 05:12:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:41.870 159415 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1fc3b5dd-aaf3-42bc-abef-93719b14bafb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 23 05:12:41 localhost ovn_metadata_agent[159410]: 2025-11-23 10:12:41.870 308301 DEBUG oslo.privsep.daemon [-] privsep: reply[1f37f9ad-751c-4e2d-bc91-c7ac6e29279a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 23 05:12:41 localhost nova_compute[280939]: 2025-11-23 10:12:41.874 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:41 localhost nova_compute[280939]: 2025-11-23 10:12:41.875 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:12:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:12:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:12:42 localhost systemd[1]: tmp-crun.Was1mr.mount: Deactivated successfully. 
Nov 23 05:12:42 localhost podman[328543]: 2025-11-23 10:12:42.905440632 +0000 UTC m=+0.093763255 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3) Nov 23 05:12:42 localhost podman[328543]: 2025-11-23 10:12:42.94204117 +0000 UTC m=+0.130363763 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:12:42 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:12:42 localhost podman[328544]: 2025-11-23 10:12:42.946637745 +0000 UTC m=+0.132564763 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:12:43 localhost podman[328544]: 2025-11-23 10:12:43.053433152 +0000 UTC m=+0.239360100 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:12:43 localhost podman[328545]: 2025-11-23 10:12:43.062209339 +0000 UTC m=+0.242874722 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true) Nov 23 05:12:43 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. Nov 23 05:12:43 localhost podman[328545]: 2025-11-23 10:12:43.101336527 +0000 UTC m=+0.282001899 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0) Nov 23 05:12:43 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. Nov 23 05:12:43 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:43 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:43 localhost systemd[1]: tmp-crun.e6Dkp7.mount: Deactivated successfully. 
Nov 23 05:12:44 localhost nova_compute[280939]: 2025-11-23 10:12:44.147 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:44 localhost dnsmasq[328363]: exiting on receipt of SIGTERM Nov 23 05:12:44 localhost systemd[1]: libpod-b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3.scope: Deactivated successfully. Nov 23 05:12:44 localhost podman[328627]: 2025-11-23 10:12:44.363240189 +0000 UTC m=+0.057520169 container kill b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fc3b5dd-aaf3-42bc-abef-93719b14bafb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 23 05:12:44 localhost podman[328640]: 2025-11-23 10:12:44.434932167 +0000 UTC m=+0.058650506 container died b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fc3b5dd-aaf3-42bc-abef-93719b14bafb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:12:44 localhost systemd[1]: tmp-crun.CFpr3A.mount: Deactivated successfully. Nov 23 05:12:44 localhost podman[328640]: 2025-11-23 10:12:44.477261015 +0000 UTC m=+0.100979294 container cleanup b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fc3b5dd-aaf3-42bc-abef-93719b14bafb, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Nov 23 05:12:44 localhost systemd[1]: libpod-conmon-b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3.scope: Deactivated successfully. 
Nov 23 05:12:44 localhost podman[328642]: 2025-11-23 10:12:44.520534214 +0000 UTC m=+0.133651278 container remove b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fc3b5dd-aaf3-42bc-abef-93719b14bafb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118) Nov 23 05:12:44 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:12:44.542 262301 INFO neutron.agent.dhcp.agent [None req-2b126b4b-2b81-4e5f-bcfe-ee6da18b43d5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:12:44 localhost neutron_dhcp_agent[262297]: 2025-11-23 10:12:44.596 262301 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 23 05:12:44 localhost nova_compute[280939]: 2025-11-23 10:12:44.843 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:44 localhost systemd[1]: var-lib-containers-storage-overlay-ff47bfdf1fcc276ebe24714009926fc7f9a9a734444ffa54f4be490e9579e5a2-merged.mount: Deactivated successfully. Nov 23 05:12:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b66c2d64549b5d4e1dff44b04574d9e1bbd8cb50dd86425ecb815fc45a619ae3-userdata-shm.mount: Deactivated successfully. Nov 23 05:12:44 localhost systemd[1]: run-netns-qdhcp\x2d1fc3b5dd\x2daaf3\x2d42bc\x2dabef\x2d93719b14bafb.mount: Deactivated successfully. 
Nov 23 05:12:45 localhost nova_compute[280939]: 2025-11-23 10:12:45.532 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:45 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:47 localhost podman[239764]: time="2025-11-23T10:12:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:12:47 localhost podman[239764]: @ - - [23/Nov/2025:10:12:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:12:47 localhost podman[239764]: @ - - [23/Nov/2025:10:12:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18788 "" "Go-http-client/1.1" Nov 23 05:12:47 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:48 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:49 localhost nova_compute[280939]: 2025-11-23 10:12:49.193 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:49 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:12:49 localhost podman[328671]: 2025-11-23 10:12:49.894542367 +0000 UTC m=+0.078481812 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7) Nov 23 05:12:49 localhost podman[328671]: 2025-11-23 10:12:49.934444399 +0000 UTC m=+0.118383844 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 23 05:12:49 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 05:12:50 localhost nova_compute[280939]: 2025-11-23 10:12:50.557 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:51 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:12:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:12:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:12:53 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:12:53 localhost systemd[1]: tmp-crun.28jWLc.mount: Deactivated successfully. 
Nov 23 05:12:53 localhost podman[328709]: 2025-11-23 10:12:53.487224691 +0000 UTC m=+0.089025725 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 23 05:12:53 localhost podman[328708]: 2025-11-23 10:12:53.537242383 +0000 UTC m=+0.140540805 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Nov 23 05:12:53 localhost podman[328708]: 2025-11-23 10:12:53.545666729 +0000 UTC m=+0.148965171 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:12:53 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:12:53 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:53 localhost podman[328709]: 2025-11-23 10:12:53.602481306 +0000 UTC m=+0.204282340 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:12:53 localhost systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. 
Nov 23 05:12:53 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 23 05:12:54 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 23 05:12:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 23 05:12:54 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:12:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 23 05:12:54 localhost nova_compute[280939]: 2025-11-23 10:12:54.231 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:54 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:12:54 localhost ceph-mgr[286671]: [progress INFO root] update: starting ev e9944309-c671-4cec-8202-3dcd8429c5a2 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:12:54 localhost ceph-mgr[286671]: [progress INFO root] complete: finished ev e9944309-c671-4cec-8202-3dcd8429c5a2 (Updating node-proxy deployment (+3 -> 3)) Nov 23 05:12:54 localhost ceph-mgr[286671]: [progress INFO root] Completed event e9944309-c671-4cec-8202-3dcd8429c5a2 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 23 05:12:54 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 23 05:12:54 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 23 05:12:54 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 23 05:12:54 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:12:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 23 05:12:54 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:12:55 localhost nova_compute[280939]: 2025-11-23 10:12:55.586 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:55 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:57 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:12:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:12:58 localhost ceph-mgr[286671]: [progress INFO root] Writing back 50 completed events Nov 23 05:12:58 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 23 05:12:58 localhost ceph-mon[293353]: log_channel(audit) log [INF] : from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:12:58 localhost ceph-mon[293353]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' Nov 23 05:12:59 localhost nova_compute[280939]: 2025-11-23 10:12:59.266 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:12:59 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:00 localhost nova_compute[280939]: 2025-11-23 10:13:00.632 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:01 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:03 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:13:03 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:04 localhost nova_compute[280939]: 2025-11-23 10:13:04.304 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:05 localhost nova_compute[280939]: 2025-11-23 10:13:05.664 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:05 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0. 
Nov 23 05:13:05 localhost podman[328816]: 2025-11-23 10:13:05.906513407 +0000 UTC m=+0.085120833 container health_status 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible) Nov 23 05:13:05 localhost podman[328816]: 2025-11-23 10:13:05.942329999 +0000 UTC m=+0.120937435 container exec_died 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, 
org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76) Nov 23 05:13:05 localhost systemd[1]: 219e8a4171c5993e6654e4eed9a090e3f40022ea4bc68dd80dd999fd777a68a0.service: Deactivated successfully. Nov 23 05:13:06 localhost openstack_network_exporter[241732]: ERROR 10:13:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:13:06 localhost openstack_network_exporter[241732]: ERROR 10:13:06 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 23 05:13:06 localhost openstack_network_exporter[241732]: ERROR 10:13:06 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 23 05:13:06 localhost openstack_network_exporter[241732]: ERROR 10:13:06 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 23 05:13:06 localhost openstack_network_exporter[241732]: Nov 23 05:13:06 localhost openstack_network_exporter[241732]: ERROR 10:13:06 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 23 05:13:06 localhost openstack_network_exporter[241732]: Nov 23 05:13:07 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:08 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:13:09 localhost nova_compute[280939]: 2025-11-23 10:13:09.335 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:09 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:13:09.753 159415 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:13:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:13:09.753 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:13:09 localhost ovn_metadata_agent[159410]: 2025-11-23 10:13:09.754 159415 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:13:09 localhost nova_compute[280939]: 2025-11-23 10:13:09.835 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:10 localhost nova_compute[280939]: 2025-11-23 10:13:10.688 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:11 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.581 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.582 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.583 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.584 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:12 localhost ceilometer_agent_compute[237112]: 2025-11-23 10:13:12.585 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found 
this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 23 05:13:13 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:13:13 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e. Nov 23 05:13:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58. Nov 23 05:13:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291. Nov 23 05:13:13 localhost podman[328838]: 2025-11-23 10:13:13.914312907 +0000 UTC m=+0.096678148 container health_status 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:13:13 localhost podman[328837]: 2025-11-23 10:13:13.951040278 +0000 UTC m=+0.136707914 container health_status 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 23 05:13:13 localhost podman[328837]: 2025-11-23 10:13:13.958613428 +0000 UTC m=+0.144281064 container exec_died 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 23 05:13:13 localhost systemd[1]: 8f44a4d4f6fe60c0f88a3935bff6afa63e7271eb43d2bf39b51509bc67334f58.service: Deactivated successfully. 
Nov 23 05:13:14 localhost podman[328836]: 2025-11-23 10:13:14.006698068 +0000 UTC m=+0.195780472 container health_status 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_id=multipathd) Nov 23 05:13:14 localhost podman[328838]: 2025-11-23 10:13:14.029732576 +0000 UTC m=+0.212097827 container exec_died 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 23 05:13:14 localhost systemd[1]: 900d43fca226dfce571c223c6f1bced0bb7650112b6a1ff516fa88c2e329c291.service: Deactivated successfully. 
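Most of the bulk in these podman event lines is not the event itself but the container's labels (`config_data`, `config_id`, `tcib_build_tag`, ...) that the EDPM deployment stamps on each container and that podman echoes with every event. A hedged sketch of pulling the same information directly, assuming the `podman` CLI and the `ovn_controller` container name from the log; note that `config_data` holding a Python-literal dict is an EDPM convention observed in these lines, not a podman guarantee.

```python
import ast
import json
import subprocess

def container_config(name: str) -> dict:
    """Return a container's EDPM `config_data` label as a dict."""
    # `.Config.Labels` is podman-inspect's field holding all container labels.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .Config.Labels}}", name],
        capture_output=True, text=True, check=True,
    ).stdout
    labels = json.loads(out)
    # In the journal lines above, config_data is rendered with single quotes,
    # i.e. a Python literal rather than JSON, so parse it accordingly.
    return ast.literal_eval(labels["config_data"])

if __name__ == "__main__":
    cfg = container_config("ovn_controller")
    print(cfg["image"])
    print(cfg["healthcheck"]["test"])
```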
Nov 23 05:13:14 localhost podman[328836]: 2025-11-23 10:13:14.044269617 +0000 UTC m=+0.233352061 container exec_died 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 23 05:13:14 localhost systemd[1]: 7acdec6d5c013e63da1e28d1f9671fbe27a4375640d467c5f132f6967984773e.service: Deactivated successfully. Nov 23 05:13:14 localhost nova_compute[280939]: 2025-11-23 10:13:14.358 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:14 localhost sshd[328904]: main: sshd: ssh-rsa algorithm is disabled Nov 23 05:13:14 localhost systemd-logind[760]: New session 74 of user zuul. Nov 23 05:13:14 localhost systemd[1]: Started Session 74 of User zuul. 
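The recurring `[POLLIN] on fd 23 __log_wakeup` debug lines come from the OVS IDL's poll loop inside nova-compute: the OVSDB connection blocks in poll() until its socket becomes readable, then logs which event woke it. A minimal standard-library sketch of that wakeup pattern (a stand-in for the idea, not the ovs.poller implementation), using a socketpair in place of the OVSDB socket.

```python
import select
import socket

# A connected socketpair stands in for the OVSDB connection's file descriptor.
idl_sock, peer = socket.socketpair()

poller = select.poll()
poller.register(idl_sock.fileno(), select.POLLIN)

# Simulate the OVSDB server sending an update notification.
peer.send(b'{"method": "update", "params": []}')

# Block until the fd is readable, then report the wakeup like the debug lines above.
for fd, events in poller.poll():
    if events & select.POLLIN:
        print(f"[POLLIN] on fd {fd}")
        print("payload:", idl_sock.recv(4096).decode())
```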
Nov 23 05:13:14 localhost ovn_controller[153771]: 2025-11-23T10:13:14Z|00391|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Nov 23 05:13:14 localhost python3[328926]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-fc5a-8bfb-00000000000c-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 23 05:13:15 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:15 localhost nova_compute[280939]: 2025-11-23 10:13:15.712 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:17 localhost podman[239764]: time="2025-11-23T10:13:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 23 05:13:17 localhost podman[239764]: @ - - [23/Nov/2025:10:13:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154499 "" "Go-http-client/1.1" Nov 23 05:13:17 localhost nova_compute[280939]: 2025-11-23 10:13:17.135 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:17 localhost podman[239764]: @ - - [23/Nov/2025:10:13:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18781 "" "Go-http-client/1.1" Nov 23 05:13:17 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:18 localhost nova_compute[280939]: 2025-11-23 10:13:18.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:18 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:13:19 localhost nova_compute[280939]: 2025-11-23 10:13:19.381 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:19 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:20 localhost systemd[1]: session-74.scope: Deactivated successfully. Nov 23 05:13:20 localhost systemd-logind[760]: Session 74 logged out. Waiting for processes to exit. Nov 23 05:13:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9. Nov 23 05:13:20 localhost systemd-logind[760]: Removed session 74. 
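The `podman[239764]` access-log lines above are podman's service mode answering REST calls on its unix socket (`GET /v4.9.3/libpod/containers/json`, `/containers/stats`); the same socket is what podman_exporter is pointed at via `CONTAINER_HOST=unix:///run/podman/podman.sock`. A sketch of issuing that listing call directly over the socket with only the standard library; the socket path comes from the exporter config in the log, the API version from the request line, and the `Names`/`State` response fields are assumptions about the libpod API rather than guarantees.

```python
import http.client
import json
import socket

PODMAN_SOCKET = "/run/podman/podman.sock"  # from the podman_exporter config above

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a unix socket instead of a TCP host."""

    def __init__(self, path: str):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

def list_containers() -> list:
    conn = UnixHTTPConnection(PODMAN_SOCKET)
    # Same endpoint as the "GET /v4.9.3/libpod/containers/json?all=true..." line above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    return json.loads(resp.read())

if __name__ == "__main__":
    for c in list_containers():
        print(c["Names"], c["State"])
```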
Nov 23 05:13:20 localhost nova_compute[280939]: 2025-11-23 10:13:20.751 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:20 localhost podman[328929]: 2025-11-23 10:13:20.815908645 +0000 UTC m=+0.108375248 container health_status 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc.) Nov 23 05:13:20 localhost podman[328929]: 2025-11-23 10:13:20.833332986 +0000 UTC m=+0.125799569 container exec_died 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 23 05:13:20 localhost systemd[1]: 0e8658fef21a796462a158f7bc536a3db067dcc30d0389c08d9bf770cde75cb9.service: Deactivated successfully. Nov 23 05:13:21 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:22 localhost nova_compute[280939]: 2025-11-23 10:13:22.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:22 localhost nova_compute[280939]: 2025-11-23 10:13:22.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 23 05:13:22 localhost nova_compute[280939]: 2025-11-23 10:13:22.133 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 23 05:13:22 localhost nova_compute[280939]: 2025-11-23 10:13:22.150 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 23 05:13:22 localhost nova_compute[280939]: 2025-11-23 10:13:22.150 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:22 localhost nova_compute[280939]: 2025-11-23 10:13:22.151 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:22 localhost nova_compute[280939]: 2025-11-23 10:13:22.151 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 23 05:13:22 localhost nova_compute[280939]: 2025-11-23 10:13:22.169 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 23 05:13:23 localhost ceph-mgr[286671]: [balancer INFO root] Optimize plan auto_2025-11-23_10:13:23 Nov 23 05:13:23 localhost ceph-mgr[286671]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 23 05:13:23 localhost ceph-mgr[286671]: [balancer INFO root] do_upmap Nov 23 05:13:23 localhost ceph-mgr[286671]: [balancer INFO root] pools ['images', 'manila_metadata', 'backups', 'volumes', 'vms', 'manila_data', '.mgr'] Nov 23 05:13:23 localhost ceph-mgr[286671]: [balancer INFO root] prepared 0/10 changes Nov 23 05:13:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:13:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Nov 23 05:13:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 05:13:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. 
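The steady stream of `Running periodic task ComputeManager._poll_* run_periodic_tasks` lines is oslo.service's periodic-task machinery ticking through ComputeManager's decorated methods on a timer. The sketch below is a simplified stand-in for that mechanism, not oslo.service itself: the task names echo the log, but the spacing values and loop are made up for illustration.

```python
import time

class PeriodicTasks:
    """Toy version of the registry behind the 'Running periodic task ...' lines."""

    def __init__(self):
        self._tasks = []  # each entry: [name, spacing, last_run, callable]

    def periodic_task(self, spacing):
        def wrap(fn):
            self._tasks.append([fn.__name__, spacing, 0.0, fn])
            return fn
        return wrap

    def run_periodic_tasks(self):
        now = time.monotonic()
        for task in self._tasks:
            name, spacing, last_run, fn = task
            if now - last_run >= spacing:
                print(f"Running periodic task {name}")
                fn()
                task[2] = now  # remember when the task last ran

manager = PeriodicTasks()

@manager.periodic_task(spacing=1.0)
def _poll_volume_usage():
    pass  # the real ComputeManager tasks poll libvirt, Ceph, placement, etc.

@manager.periodic_task(spacing=2.0)
def _heal_instance_info_cache():
    pass

for _ in range(3):
    manager.run_periodic_tasks()
    time.sleep(1.0)
```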
Nov 23 05:13:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Nov 23 05:13:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 05:13:23 localhost ceph-mgr[286671]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 23 05:13:23 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:13:23 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] _maybe_adjust Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 23 05:13:23 localhost ceph-mgr[286671]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0026104371684812955 of space, bias 4.0, pg target 2.077907986111111 quantized to 16 (current 16) Nov 23 05:13:23 localhost ceph-mgr[286671]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 23 05:13:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:13:23 localhost ceph-mgr[286671]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 23 05:13:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 23 05:13:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 23 05:13:23 localhost ceph-mgr[286671]: [rbd_support INFO root] 
load_schedules: volumes, start_after= Nov 23 05:13:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:13:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: images, start_after= Nov 23 05:13:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:13:23 localhost ceph-mgr[286671]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 23 05:13:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05. Nov 23 05:13:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8. Nov 23 05:13:23 localhost nova_compute[280939]: 2025-11-23 10:13:23.836 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:23 localhost podman[328949]: 2025-11-23 10:13:23.895846056 +0000 UTC m=+0.083094018 container health_status 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 23 05:13:23 localhost podman[328949]: 2025-11-23 10:13:23.910423157 +0000 UTC m=+0.097671079 container exec_died 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true) Nov 23 05:13:23 localhost systemd[1]: 2103ec2b102d3def6336789985a3f1ea498aa93e2fad84fd31ba866842015f05.service: Deactivated successfully. Nov 23 05:13:24 localhost podman[328950]: 2025-11-23 10:13:24.000360652 +0000 UTC m=+0.183880746 container health_status a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 23 05:13:24 localhost podman[328950]: 2025-11-23 10:13:24.009405457 +0000 UTC m=+0.192925571 container exec_died a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 23 05:13:24 localhost 
systemd[1]: a77ab826549f600668db9761a9a48266412b464e26e6a1db1937219e5b1184f8.service: Deactivated successfully. Nov 23 05:13:24 localhost nova_compute[280939]: 2025-11-23 10:13:24.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:24 localhost ceph-mgr[286671]: [volumes INFO mgr_util] scanning for idle connections.. Nov 23 05:13:24 localhost ceph-mgr[286671]: [volumes INFO mgr_util] cleaning up connections: [] Nov 23 05:13:24 localhost nova_compute[280939]: 2025-11-23 10:13:24.409 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:24 localhost ceph-mon[293353]: log_channel(cluster) log [DBG] : mgrmap e54: np0005532584.naxwxy(active, since 18m), standbys: np0005532585.gzafiw, np0005532586.thmvqb Nov 23 05:13:25 localhost nova_compute[280939]: 2025-11-23 10:13:25.144 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:25 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Nov 23 05:13:25 localhost nova_compute[280939]: 2025-11-23 10:13:25.776 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:27 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Nov 23 05:13:28 localhost nova_compute[280939]: 2025-11-23 10:13:28.128 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:28 localhost ceph-mon[293353]: mon.np0005532584@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 23 05:13:29 localhost nova_compute[280939]: 2025-11-23 10:13:29.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:29 localhost nova_compute[280939]: 2025-11-23 10:13:29.132 280943 DEBUG nova.compute.manager [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 23 05:13:29 localhost nova_compute[280939]: 2025-11-23 10:13:29.444 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:29 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Nov 23 05:13:30 localhost nova_compute[280939]: 2025-11-23 10:13:30.133 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:30 localhost nova_compute[280939]: 2025-11-23 10:13:30.823 280943 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.132 280943 DEBUG oslo_service.periodic_task [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.152 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.152 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.153 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.153 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Auditing locally available compute resources for np0005532584.localdomain (node: np0005532584.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.154 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:13:31 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:13:31 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2356149243' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.603 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:13:31 localhost ceph-mgr[286671]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.784 280943 WARNING nova.virt.libvirt.driver [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.786 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Hypervisor/Node resource view: name=np0005532584.localdomain free_ram=11384MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.786 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.787 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.858 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.858 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Final resource view: name=np0005532584.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 23 05:13:31 localhost nova_compute[280939]: 2025-11-23 10:13:31.975 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 23 05:13:32 localhost ceph-mon[293353]: mon.np0005532584@0(leader) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 23 05:13:32 localhost ceph-mon[293353]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1143693070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 23 05:13:32 localhost nova_compute[280939]: 2025-11-23 10:13:32.375 280943 DEBUG oslo_concurrency.processutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 23 05:13:32 localhost nova_compute[280939]: 2025-11-23 10:13:32.381 280943 DEBUG nova.compute.provider_tree [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed in ProviderTree for provider: c90c5769-42ab-40e9-92fc-3d82b4e96052 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 23 05:13:32 localhost nova_compute[280939]: 2025-11-23 10:13:32.398 280943 DEBUG nova.scheduler.client.report [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Inventory has not changed for provider c90c5769-42ab-40e9-92fc-3d82b4e96052 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 23 05:13:32 localhost nova_compute[280939]: 2025-11-23 10:13:32.400 280943 DEBUG nova.compute.resource_tracker [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Compute_service record updated for np0005532584.localdomain:np0005532584.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 23 05:13:32 localhost nova_compute[280939]: 2025-11-23 10:13:32.401 280943 DEBUG oslo_concurrency.lockutils [None req-ee7f39cf-a207-4603-8e01-04fd92afed83 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 23 05:13:33 localhost sshd[329037]: main: sshd: ssh-rsa algorithm is disabled Nov 23 05:13:33 localhost systemd-logind[760]: New session 75 of user zuul. Nov 23 05:13:33 localhost systemd[1]: Started Session 75 of User zuul.
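The resource-tracker audit above ends with the inventory nova reports to Placement: VCPU, MEMORY_MB and DISK_GB, each with a total, a reserved amount and an allocation ratio. What the scheduler can actually place against this host is (total - reserved) * allocation_ratio per resource class; the sketch below just reruns that arithmetic with the exact numbers from the log (the helper is illustrative, not nova code).

```python
# Inventory exactly as reported in the journal by the resource tracker.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
}

def schedulable(inv: dict) -> float:
    """Capacity Placement will hand out: (total - reserved) * allocation_ratio."""
    return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

for rc, inv in inventory.items():
    print(f"{rc}: {schedulable(inv):g} schedulable")
# Prints VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40 -- which is why this otherwise
# idle 8-vCPU host can accept up to 128 vCPUs worth of instances at ratio 16.0.
```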